Report Harmful Content is an impartial dispute resolution service designed to empower anyone aged 13 or over who has come across harmful content online to report it, by providing up-to-date information on community standards and direct links to the correct reporting facilities across multiple platforms. Before you submit a report to Report Harmful Content, it is essential that you have first reported the material directly to the social media service using its own online tools.
Report Harmful Content can review reports made about the following eight types of online harm:
1. Online Abuse: A term that covers any form of abuse committed on a social network, website, gaming platform, or app. It is generally verbal but can also include image-based abuse.
2. Bullying or Harassment: This can include hurtful language which targets an individual or group of people, trolling, spreading rumours, and excluding people from online communities. In the case of harassment, the behaviour is repeated and intended to cause distress.
3. Threats: A threat that poses a real-life danger, putting someone at immediate risk of harm, should always be reported to the police as an emergency. Other threats of this kind include “outing” someone’s behaviour in order to blackmail them, or to coerce them into doing something they don’t want to do.
4. Impersonation: This is when someone assumes the identity of another person to harass or defraud them. It can also include behaviours such as creating fake accounts or hijacking accounts, usually with the intent of targeting an individual.
5. Unwanted Sexual Advances (not image-based): This is often gender-based abuse and can take the form of highly sexualised language or persistent, unsolicited messages of a sexual nature.
6. Violent Content: This includes graphic or gory material, such as beheading videos or scenes that glorify animal abuse, most of which will breach platforms’ community standards.
7. Self-Harm or Suicide Content: Most platforms do not allow any content that encourages, provides instructions for, or glorifies self-harm or suicide. Some platforms have processes in place for safeguarding users who view or share this type of content.
8. Pornographic Content: This includes adult (nude or sexual) content which is not illegal but breaches the terms of most online platforms.
Report Harmful Content has two main functions:
Advice: Empowering anyone who has come across harmful content online to report it by providing up-to-date information on community standards and direct links to the correct reporting facilities across multiple platforms.
Reporting: Providing further support to users over the age of 13 who have already submitted a report to the relevant platform and would like the outcome reviewed. Report Harmful Content will check submitted reports and industry responses against platform-specific reporting procedures and community standards in order to provide users with further advice on actions they can take.
Report Harmful Content does not offer reporting services for all types of online crime, because for some categories of harmful content there are other routes to resolution through dedicated specialist services that already exist.
Action Counters Terrorism: Report Harmful Content is unable to take reports about terrorism-related content. If you've seen something online that supports, directs, or glorifies terrorism, report it to Action Counters Terrorism.
Child Sexual Abuse Imagery: Report Harmful Content is unable to take reports of sexual images of anyone under 18. You can report such images directly to the Internet Watch Foundation.
Platforms Outside of Remit: Report Harmful Content is unable to take action when the content concerned appears on independently owned sites. These sites do not have the same community standards as the larger social media platforms; independently owned and moderated sites are able to set their own rules, which often permit this type of material.