Cyberbullying: How to Identify, Resources to Help, and Innovative Solutions for the Future
Bullying has long been a problem for people of all ages, although it is most closely associated with school-aged children. In recent years, with the explosive growth of social media and online communities, experts have blamed cyberbullying for “creating an epidemic of teenage depression and anxiety.” Researchers and companies are looking to data science for solutions to the growing cyberbullying problem.
What is Cyberbullying?
When a person seeks to intentionally harm or intimidate another person who may appear vulnerable, their actions are considered bullying. Bullying takes many forms, including physical, verbal, social, and cyber. Rates of bullying vary between studies because the definitions used to identify bullying incidents differ. However, it is clear bullying is not going away.
This is particularly evident when it comes to cyberbullying. Cyberbullying is when someone seeks to repeatedly inflict harm on someone using technology such as computers, mobile phones, or tablets. Using this technology, cyberbullies may use hurtful words or share embarrassing pictures through social media channels, chat rooms, texts, or emails with the intent of harming someone.
With the continued advancement of technology and the pervasive nature of social media channels, cyberbullying continues to be all too common among teens. Over one-third (37%) of teenagers (12-17 years old) report being bullied online, according to an April 2019 study conducted by the Cyberbullying Research Center. Of these teenagers, almost a third (30%) report it has happened to them more than once. Most cyberbullying incidents are reported to occur through social media platforms. However, a study found teens also experience cyberbullying through other channels such as YouTube (24%) and email (15%).
Reasons People Choose to Cyberbully
In general, reasons for cyberbullying do not differ from those of bullying in general. People who bully others may be giving in to pressure from peers, seeking revenge, attempting to elevate their social status in some way, or trying to exert or display their power over another.
The question then becomes why people choose cyberbullying over more traditional forms of bullying. Those who choose to cyberbully would likely admit it is easier to get away with. Because there are no face-to-face confrontations, many who participate in cyberbullying somewhat naively believe they cannot get caught or be identified, assuming online interactions are difficult to monitor. Research has found that fewer than half of cyberbullied teens know the identity of the perpetrator. This perceived anonymity also contributes to the frequency at which cyberbullying occurs.
For some guilty of cyberbullying, the lack of face-to-face confrontations allows them to avoid at least some of the associated guilt. They are not present to witness the emotional reactions of their victims, and thereby avoid seeing the immediate damage their actions have caused. Cyberbullying allows them to be emotionally detached from their action.
Still others choose cyberbullying merely because they are bored. They view it as a form of entertainment. Technology has put information at our fingertips, and our expectations for continuous entertainment have increased. When teenagers are unsatisfied with the entertainment available online, they may look to create their own.
There is a risk to being over-exposed to social media. With the plethora of negative interactions on social media, over time, people may become desensitized to cyberbullying. Due to the frequency at which it occurs, cyberbullying has the danger of becoming a social norm for some.
Populations at Risk of Being Cyberbullied
Society tends to associate cyberbullying primarily with teenagers. However, cyberbullying can happen within any age group that regularly communicates using computers, mobile phones, or tablets. A Pew Research Center study found that 40% of surveyed adults have experienced harassment online. Most frequently, these experiences included name-calling or the sharing of embarrassing information.
As expected, this study found that age plays a factor: younger adults (18- to 24-year-olds) are more likely to experience some form of cyberbullying relative to other adult age groups, with 70% having been harassed online at least once. In addition, almost a quarter of this population (24%) has been physically threatened online. This may, in part, be a result of younger age groups participating in social media platforms more frequently.
Particularly among youth, some are more at risk of being cyberbullied. Those who are perceived as different, weak, having low self-esteem, or being lower on the social hierarchy are sometimes considered potential targets for cyberbullying. Victims of cyberbullying can include youth with disabilities, of different races or religions, or those who are part of the LGBTQ community. In most cases, teenagers are the ones guilty of cyberbullying their peers. In addition, a study has shown cyberbullying victims who seek revenge can end up becoming perpetrators, resulting in a vicious cycle that can spread.
Those who experience cyberbullying are at risk of suffering from several possible damaging effects. These effects include depression, anxiety, inability to sleep, low self-esteem, and loss of interest in activities. Over time, these can manifest themselves as physical ailments. Depending on the severity of cyberbullying, the effects can last well into adulthood for victims.
Types of Cyberbullying
Cyberbullying comes in several different forms. These include harassment, cyberstalking, flaming, trolling, exclusion, outing (or doxing), and masquerading. These types of cyberbullying have similar characteristics and may have overlapping definitions. For example, harassment, as it relates to cyberbullying, is a general category that is defined as a persistent pattern of hurtful behavior towards another through messages or actions taken online. Cyberstalking is a specific form of online harassment that can eventually lead to physical harm. Cyberstalkers may continuously follow a person’s online actions and communicate threats or accusations. In many cases, cyberstalking is linked to physical stalking.
Flaming is when hurtful messages or images are directly targeted to another using harsh language. Those engaging in flaming are looking to instigate or perpetuate an online fight. Flaming can also be considered a form of harassment. Trolling can be similar to flaming when the words used are intentionally harmful. However, unlike flaming, trolling includes cases where the cyberbully does not personally know the target.
Exclusion occurs when a group of teenagers intentionally leave out another from gatherings or conversations whether online or offline. It becomes a form of cyberbullying when online interactions take place with the intent of the victim seeing they are being excluded. In the process, victims may be directly or indirectly targeted with hurtful words.
Also known as doxing, outing is a type of cyberbullying that involves the sharing of personal information without the consent of the victim. In doing so, cyberbullies are looking to cause the victim embarrassment or emotional harm. Personal information shared could include private messages (emails, texts, chatroom responses), personal documents, or images.
Cyberbullies who choose masquerading do so to protect their identity. When masquerading, cyberbullies anonymously create a fake social media account or online profile. Using this false online identity, they send or post hurtful messages or images directed at their victim. Their target is usually someone they know quite well.
Although there is overlap among the types of cyberbullying and slightly varying definitions, the growing popularity of different social media channels and online communities unfortunately increases the tools and methods cyberbullies have at their disposal. Cyberbullies use numerous tactics that fall into these general categories of cyberbullying.
Data Science’s Role in the Current Battle Against Cyberbullying
Even with technological advancements, policing online interactions and identifying and stopping cyberbullying has been difficult. Parents have difficulty monitoring the online interactions of their children; it can be time-consuming to monitor all the online channels in which youth participate. Social media companies, with millions of users, are continually looking for ways to improve their ability to identify cyberbullying on their platforms. Unfortunately, technology works in both directions. As companies develop ways to quickly and accurately identify cyberbullying, technology is also helping cyberbullies get away with more by circumventing these features.
The difficulty also lies in accurately identifying offensive comments or remarks that would be considered cyberbullying. Systems can be developed to flag online interactions that include particularly offensive words, but research has shown that not all communications considered to be cyberbullying contain these types of words. Instead, the words need to be evaluated in the context of the sentences.
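The weakness of pure keyword matching can be shown with a minimal sketch. The blocklist and example messages below are invented for illustration; a real moderation system would be far more sophisticated:

```python
# Naive keyword-based flagging: checks each message against a fixed
# blocklist of offensive words. A message that is hurtful in context
# but contains no blocklisted word slips through undetected.
BLOCKLIST = {"loser", "ugly", "stupid"}  # hypothetical blocklist

def flag_by_keywords(message: str) -> bool:
    """Return True if any blocklisted word appears in the message."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not BLOCKLIST.isdisjoint(words)

print(flag_by_keywords("You're such a loser"))            # caught by the blocklist
print(flag_by_keywords("Nobody would care if you left"))  # missed, though hurtful
```

The second message is exactly the kind the research describes: clearly harmful in context, yet invisible to a word-level filter, which is why context-aware evaluation is needed.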
Despite the difficulties and challenges, there has been no shortage of effort in using data science to combat cyberbullying. Data science researchers, individuals, and companies with online communities are all understanding the severity and prominence of cyberbullying in our society.
Evaluating Groups of Words to Detect Cyberbullying – MIT Media Lab
A group from within the MIT Media Lab, led by Karthik Dinakar, developed an algorithm that detected groups of words and then categorized the online interaction into at least one of thirty themes. This algorithm was tested on A Thin Line, a website run by MTV where teenagers can anonymously receive advice from other teenagers. Researchers found the algorithm categorized content more accurately than other algorithms that matched based on keywords only. MTV implemented this tool on its A Thin Line site, where teenagers seeking advice can be matched with other teens in similar situations by matching their stories. Although this algorithm was not initially used to combat cyberbullying, the idea was that its logic could play a valuable role in detecting offensive comments.
To further improve on his work and specifically target cyberbullying, Dinakar created software built to compare online posts to information found in ConceptNet, a database that helps networks understand the human language. His goal was to combine his original algorithm with this software to create a tool social networks could use to identify cyberbullying and subsequently trigger an alert or account disablement. Variations of his work have already been used to direct adolescents in crisis to much needed counseling and support.
Online Monitoring Tools for Parents Based on Artificial Intelligence
Artificial intelligence is also being used to build tools allowing parents to monitor their children’s online interactions. Identity Guard, an identity protection service company, partnered with the Megan Meier Foundation and other cyberbullying experts to develop a tool that helps parents keep their children’s online interactions safe while respecting their privacy. This tool uses both natural language processing (NLP) and natural language classifiers (NLC) to detect instances of cyberbullying.
Natural Language Processing (NLP) deals with the relationship between computers and the human language. It seeks to find value in the human language by reading and deciphering patterns. NLP is used in common applications such as Google Translate, Microsoft Word (to detect grammatical errors), personal assistants (Siri, Alexa) and IVR (Interactive Voice Response) systems used by companies’ customer service departments to route calls.
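As a rough illustration of the pattern-reading that NLP performs, the sketch below tokenizes a sentence and tags each token with a crude category. The categories and word lists are simplified for the example; real NLP pipelines use far richer linguistic models:

```python
import re

def tokenize(text: str) -> list:
    """Split text into lowercase word tokens, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def tag_tokens(tokens: list) -> list:
    """Attach a crude category to each token: pronoun, negation, or plain word."""
    pronouns = {"i", "you", "he", "she", "we", "they"}
    negations = {"not", "never", "no", "don't", "won't"}
    tags = []
    for t in tokens:
        if t in pronouns:
            tags.append((t, "PRONOUN"))
        elif t in negations:
            tags.append((t, "NEGATION"))
        else:
            tags.append((t, "WORD"))
    return tags

print(tag_tokens(tokenize("You will never fit in.")))
```

Even this toy tagger shows the basic move: turning raw text into structured units a program can reason about, which is the first step in every application listed above.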
The IBM Watson Natural Language Classifier (NLC) analyzes text data and organizes it into categories set by the user. NLC is used in applications including routing online customer inquiries, gauging customer sentiment, filtering spam in email inboxes, and social media monitoring. Companies employing NLC for social media monitoring can learn how their brand and products/services are being perceived by customers and the general public.
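The classification idea behind a tool like NLC can be sketched with a tiny word-count classifier. The training examples, labels, and add-one smoothing below are invented for illustration and bear no relation to Watson's actual implementation or scale:

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled examples; a real classifier trains on far more data.
TRAINING = [
    ("you are worthless and ugly", "offensive"),
    ("nobody likes you just quit", "offensive"),
    ("great game last night", "benign"),
    ("see you at practice tomorrow", "benign"),
]

def train(examples):
    """Count word frequencies per label (a bare-bones naive Bayes model)."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the label whose words best explain the text (add-one smoothing)."""
    best_label, best_score = None, float("-inf")
    for label, words in counts.items():
        total = sum(words.values())
        vocab = len(words)
        score = sum(
            math.log((words[w] + 1) / (total + vocab)) for w in text.split()
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAINING)
print(classify(model, "you are ugly"))
```

The principle is the same whether the categories are "offensive"/"benign", spam/not-spam, or customer-sentiment buckets: the classifier learns which words are characteristic of each user-defined category.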
Using NLP and NLC, complex algorithms are built to detect cyberbullying incidents and alert parents. These algorithms can also be programmed to detect language that could potentially escalate to cyberbullying. The application provides parents with the time and date stamp as well as the content of the online interaction in question. Parents make the final determination of whether the interaction is cyberbullying. This use of artificial intelligence allows teenagers to maintain a level of privacy, as parents are only alerted when questionable interactions occur.
DeepText – Facebook’s Anti-Cyberbullying Tool Based on Deep Learning
Facebook has developed DeepText, a deep learning tool built to understand the text in thousands of posts within seconds. DeepText is able to interpret and understand content almost as accurately as humans and can do so in over 20 languages. With 1.59 billion daily Facebook users, as reported by Facebook’s Investor Relations, DeepText’s speed and accuracy are necessary in order to be effective.
The DeepText tool employs deep learning which allows it to more accurately learn how humans use and understand text. Using several deep neural network architectures, DeepText goes beyond the general classification of text and the identification of entities found in posts. Deep learning also allows the tool to understand the many nuances of over 20 different languages. The challenges of learning multiple languages are better handled by deep learning than more traditional Natural Language Processing.
Detecting Cyberbullying in Images using Artificial Intelligence – Instagram
Instagram is one of the most favored social media platforms among teenagers. Unfortunately, a 2017 study found that, among the most popular social media platforms, Instagram had the highest proportion (47%) of users experiencing cyberbullying. As a result, Instagram has implemented anti-cyberbullying tools built and operated using artificial intelligence. Besides using DeepText, Instagram also made an image filter available that detects offensive content in images posted on the platform. This image filter works in both feeds and stories. More recently, Instagram activated a classifier to scan videos posted on its platform. Unlike its text filter, Instagram has humans making the final decision on whether images and videos are considered cyberbullying. The artificial intelligence employed by the image and video filter routes the visual content in question to the appropriate Instagram employee, who determines if it is offensive.
Protecting Gaming Communities from Cyberbullies
Gaming companies are also investing resources to protect their online communities from cyberbullying. Riot Games has employed artificial intelligence and predictive analytics tools to identify and discipline cyberbullies within its online gaming community. The company relies on its online community to identify and suggest appropriate actions against offenders, while artificial intelligence is used to determine if the recommendations are appropriate for the offense. Eventually, the goal is for artificial intelligence to control all aspects of the cyberbullying situation, from identification and education to reformation and discipline.
With over 75 million daily players, Riot Games found it a priority to deal with cyberbullying head-on. Using a combination of online participation and artificial intelligence, Riot Games has already seen a 40% decline in offensive comments within its community. Although it anticipates the continuation of online community involvement, the goal is to have artificial intelligence take over more of the tasks involved.
Stopping Cyberbullying in its Tracks – ReThink
Most of the data science currently being used to combat cyberbullying deals with the detection of content that has already been posted. However, ReThink aims to stop cyberbullying before the controversial content is posted. ReThink is an app developed by a teenager who was deeply moved by the constant stories of cyberbullied teens taking their own lives. The app can be loaded on a smartphone or computer and, using artificial intelligence, is able to detect offensive words typed into the app-specific keyboard. The keyboard looks similar to the phone’s standard keyboard and integrates easily with other apps, including email clients and social media platforms. A pop-up alert is displayed on the screen when the artificial intelligence detects any offensive words. The alert gives users time to rethink their decision to send or post the offensive words. The ReThink app continues to use updated machine learning models to improve its accuracy.
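The pre-posting idea can be sketched as a simple gate: before a message is sent, it is scored, and a warning is returned instead when the score crosses a threshold. The scoring function, term list, and threshold here are placeholders standing in for ReThink's actual machine learning models:

```python
OFFENSIVE_TERMS = {"loser", "ugly", "worthless", "hate"}  # hypothetical list

def offensiveness_score(message: str) -> float:
    """Placeholder scorer: fraction of words that look offensive.
    A real system would use a trained model instead."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    return sum(w in OFFENSIVE_TERMS for w in words) / len(words)

def review_before_send(message: str, threshold: float = 0.2) -> str:
    """Return the message if it looks fine, otherwise a rethink prompt."""
    if offensiveness_score(message) >= threshold:
        return "Are you sure you want to send this? Take a moment to rethink."
    return message

print(review_before_send("Good luck in the match!"))
print(review_before_send("You are such a loser"))
```

The key design point is that the gate intervenes before the content exists publicly, shifting the intervention from cleanup to prevention.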
How will Data Science’s Role Change in the Future Cyberbullying Battle?
The popularity of online communities and social media channels will continue to remain strong into the foreseeable future, so the need to identify and prevent cyberbullying should remain a high priority for all involved. As bullies find ways to bypass filters set by online communities and social media platforms, researchers continue to work on developing more accurate tools to continue the fight against cyberbullying.
OCDD – A Deep Learning Method
Researchers have found another deep learning method to detect cyberbullying, aiming to improve on the shortcomings of previous methods. This new method, known as OCDD, was developed to detect offensive content on Twitter. The developers of OCDD have identified limitations to the deep learning approaches currently used to identify cyberbullying. These other approaches are continually adding features to improve the ability to detect cyberbullying on social media platforms. The continual addition of features complicates execution and impedes performance of the tool. In addition, these other deep learning tools also rely on user classification data entered by the user, such as age, that can be fictitious.
While these other tools rely on a classifier (NLC), OCDD represents tweets as word vectors. In their published study, the researchers explain that “… the semantics of words are preserved, and the feature extraction and selection phases can be eliminated.” These word vectors are known as word embeddings and are defined as “a collective term for models that learned to map a set of words or phrases in a vocabulary to vectors of numerical values.” More simply, word embeddings are words in the form of vectors. These word embeddings are then run through a neural network to determine if the text can be classified as cyberbullying. A neural network is a computer system built to operate similarly to the human brain. Also known as Artificial Neural Networks (ANN), they are developed to recognize patterns in data.
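The embedding idea can be illustrated in miniature: each word maps to a small vector, a message is represented by averaging its word vectors, and a scoring function classifies that vector. The vectors, weights, and threshold below are all made up for illustration; in OCDD the embeddings are learned from data and the scorer is a trained neural network:

```python
# Toy 2-dimensional "embeddings"; real embeddings have hundreds of
# dimensions and are learned from large text corpora.
EMBEDDINGS = {
    "you":   [0.1, 0.2],
    "are":   [0.0, 0.1],
    "great": [0.9, -0.8],
    "awful": [-0.9, 0.9],
}

def embed(message: str) -> list:
    """Average the vectors of known words (unknown words are skipped)."""
    vecs = [EMBEDDINGS[w] for w in message.lower().split() if w in EMBEDDINGS]
    if not vecs:
        return [0.0, 0.0]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def score(vector: list, weights=(-1.0, 1.0)) -> float:
    """Dot product with fixed weights, standing in for a trained network."""
    return sum(w * x for w, x in zip(weights, vector))

def is_offensive(message: str, threshold: float = 0.3) -> bool:
    return score(embed(message)) > threshold

print(is_offensive("you are awful"))
print(is_offensive("you are great"))
```

Because the words themselves never re-enter the pipeline after being mapped to vectors, there is no separate feature extraction or selection phase, which is the simplification the OCDD researchers describe.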
OCDD has not yet been deployed in the battle against cyberbullying, but the tool continues to make progress as it is being tested with other languages. The development of OCDD is an example of how data scientists are recognizing the need to make continual progress against cyberbullying by developing new and improved ways to identify it.
The Future of Artificial Intelligence in the Battle Against Cyberbullying
Currently, artificial intelligence may be the best solution to combat cyberbullying, but it is far from perfect. These systems must continually learn human language with its nuances and ever-changing slang terms. As these systems continuously learn, there is no doubt that cyberbullying still occurs undetected. Companies with online communities must identify timely ways for this learning to occur. For example, Instagram began surveying thousands of its users to better understand the cyberbullying behaviors they have witnessed while engaging with others on the platform. Because cyberbullying may be defined differently by each user and comes in different forms, the hope is the data will help refine the algorithms to more effectively detect offensive content.
Instagram has also identified the need to look at the broader picture while trying to detect cyberbullying on its platform. The company is working to build artificial intelligence to look at patterns in behavior on the platform as opposed to single posts. By studying user behavior, the company is working to identify patterns that should trigger an alert on their platform.
Similar to the ReThink app, Instagram has been refining a feature that will activate a warning on possible cyberbullying comments before they are actually posted. The feature will not block users from posting them but will give them an opportunity to reconsider their words. Much like the other features Instagram has employed, this warning will be activated by machine learning algorithms that will need to be continuously trained with updated data. This option is scheduled to be rolled out to users this year (2019).
Because of its size and influence, Instagram has recognized and taken up the challenge to invest in the battle against cyberbullying. For now, some tools, such as DeepText, are shared with its parent company, Facebook. However, the general ideas, lessons, data, and practices need to be shared across all companies with online communities. Both companies and academic researchers need to come together to battle this issue, which has already led to the loss of many lives. Data science can play an effective role, but companies will need to continue finding ways to collect the appropriate data to train the machine learning algorithms. This will require observing and facilitating conversations with members of their online communities.