Toxicity detection in online Georgian discussions

Online social platforms have become omnipresent. While these environments are beneficial for sharing messages, ideas, or information of any kind, they also expose users to cyber-bullying, verbal harassment, and humiliation. Regrettably, such behavior is rampant, urging further research to restrain malicious activities.

Even though this topic has been explored in several languages, there is no prior work on Georgian toxic comment analysis and detection. In this work, we extracted data from the Tbilisi forum, an online platform for public discussions. The resulting dataset of 10,000 comments was labeled as toxic or non-toxic.
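As a rough illustration of what working with such a labeled dataset might look like, the sketch below loads the comments into a pandas DataFrame and splits them for training and evaluation. The file name, column names, and split ratio are assumptions for illustration, not details taken from the original work.

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; the original dataset layout is not specified.
df = pd.read_csv("georgian_comments.csv")  # assumed columns: "comment", "toxic" (0/1)

# Stratified split so toxic/non-toxic proportions are preserved in both sets.
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["toxic"], random_state=42
)

print(len(train_df), "training comments,", len(test_df), "test comments")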

After data preprocessing, we pass the generated vectors to our models. We developed multiple deep learning architectures: NCP, biRNN, CNN, biGRU-CNN, biLSTM, biGRU, and a transformer, alongside an NB-SVM baseline. We took a novel approach to toxic comment classification by employing a brain-inspired NCP model.
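As an illustrative sketch of one of these architectures, the Keras model below builds a simple 1D CNN text classifier of the kind described. The vocabulary size, sequence length, embedding dimension, and filter settings are placeholder assumptions, not the hyperparameters used in the original work.

import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder hyperparameters; the original settings are not stated here.
VOCAB_SIZE = 20000
MAX_LEN = 200
EMBED_DIM = 128

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),   # token ids -> dense vectors
    layers.Conv1D(128, 5, activation="relu"),  # 1D convolution over the token sequence
    layers.GlobalMaxPooling1D(),               # keep the strongest response per filter
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),     # probability that the comment is toxic
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()

The same preprocessed vectors could be fed to recurrent variants (biGRU, biLSTM) by swapping the convolutional layers for recurrent ones; the sketch above shows only the CNN case.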

Each model, including NCP, showed satisfactory results. Our best-performing model was the CNN, with 0.888 accuracy (ACC) and 0.942 AUC.
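For reference, metrics like these are typically computed from held-out predictions. The snippet below is a generic scikit-learn sketch of how ACC and AUC are obtained, with placeholder arrays standing in for real test-set predictions; it is not the authors' evaluation code.

from sklearn.metrics import accuracy_score, roc_auc_score

# y_true: ground-truth labels (0 = non-toxic, 1 = toxic)
# y_prob: model-predicted probabilities for the toxic class (placeholder values)
y_true = [0, 1, 1, 0, 1]
y_prob = [0.1, 0.8, 0.6, 0.3, 0.9]

acc = accuracy_score(y_true, [int(p >= 0.5) for p in y_prob])  # threshold at 0.5
auc = roc_auc_score(y_true, y_prob)                            # threshold-free ranking metric
print(f"ACC={acc:.3f}  AUC={auc:.3f}")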
