TY - GEN
T1 - A BERT-based Hate Speech Classifier from Transcribed Online Short-Form Videos
AU - Hernandez Urbano, Rommel
AU - Uy Ajero, Jeffrey
AU - Legaspi Angeles, Angelic
AU - Hacar Quintos, Maria Nikki
AU - Regalado Imperial, Joseph Marvin
AU - Llabanes Rodriguez, Ramon
PY - 2021/08/21
Y1 - 2021/08/21
N2 - With the rise of human-centric technologies such as social media platforms, the amount of hate speech also continues to grow in proportion to the increasing number of users worldwide. TikTok is one of the most-used social media platforms because it allows users to express themselves by creating and sharing short-form videos on any desired topic and content. It has also become a platform for political discourse and mudslinging, as users can freely express opinions and indirectly debate with random people online. In this study, we propose the use of BERT, a complex bidirectional transformer-based model, for the task of automatic hate speech detection from speech transcribed from Tagalog TikTok videos. Results of our experiments show that a BERT-based hate speech classifier achieves an F1 score of 61%. We also extended our experiments beyond BERT to several other algorithms, such as LSTM, Naïve Bayes, and Decision Tree, and found that traditional methods such as a simple Bernoulli Naïve Bayes approach remain on par with the BERT model.
AB - With the rise of human-centric technologies such as social media platforms, the amount of hate speech also continues to grow in proportion to the increasing number of users worldwide. TikTok is one of the most-used social media platforms because it allows users to express themselves by creating and sharing short-form videos on any desired topic and content. It has also become a platform for political discourse and mudslinging, as users can freely express opinions and indirectly debate with random people online. In this study, we propose the use of BERT, a complex bidirectional transformer-based model, for the task of automatic hate speech detection from speech transcribed from Tagalog TikTok videos. Results of our experiments show that a BERT-based hate speech classifier achieves an F1 score of 61%. We also extended our experiments beyond BERT to several other algorithms, such as LSTM, Naïve Bayes, and Decision Tree, and found that traditional methods such as a simple Bernoulli Naïve Bayes approach remain on par with the BERT model.
KW - Bidirectional Encoder Representations from Transformers (BERT)
KW - Filipino Language
KW - Hate Speech
KW - TikTok
UR - http://www.scopus.com/inward/record.url?scp=85122281001&partnerID=8YFLogxK
U2 - 10.1145/3485768.3485806
DO - 10.1145/3485768.3485806
M3 - Chapter in a published conference proceeding
AN - SCOPUS:85122281001
T3 - ACM International Conference Proceeding Series
SP - 186
EP - 192
BT - ICSET 2021 - 2021 5th International Conference on E-Society, E-Education and E-Technology
PB - Association for Computing Machinery
CY - U.S.A.
T2 - 5th International Conference on E-Society, E-Education and E-Technology, ICSET 2021
Y2 - 21 August 2021 through 23 August 2021
ER -