Social media screening to comply with new KCSIE guidelines
- Supports the safeguarding of pupils
- Helps avoid behavioural and reputational risk when hiring front-line staff
- Designed for the education sector
- Used by educational trusts, primary schools and secondary schools
Quick screening with comprehensive reporting
- Automated check of an applicant’s social media history
- Report detailing harmful content delivered in minutes
- Risks identified across 10 key categories
Reliable analysis of an applicant’s social media profile
- Avoids the risk of missing inappropriate content through manual searching
- No unconscious bias – 100% automated analysis
- GDPR compliant – fully consented model
- Checks all public and private posts across major social media platforms
Our team is here to help.
Contact us for details of our exclusive discount for education or to arrange a demo.
Why choose us?
- Ready-made solution used across the public and private sectors, including police constabularies, the Liberal Democrats, the BBC, and the sports and entertainment industries
- Approved app partners of the social media platforms
Answers to your questions
The applicant receives an email asking them to consent to a social media check being conducted. The email names the company requesting the report, so the applicant knows who has requested it and why.
The individual must then consent to share their social media account credentials with Social Media Check, granting access to each individual social media account.
A report can only be conducted once consent has been received for each platform. If an individual does not consent, the report cannot be conducted.
The Social Media Check platform delivers the report and the certificate to the person who requested it. The individual whose accounts have been checked can receive a copy of the report, but the requestor must share it manually.
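Purely as an illustration of the flow described above, the sketch below models a consent-gated check in Python. The class, field and function names are hypothetical and do not reflect the actual Social Media Check implementation.

```python
# Hypothetical sketch of a consent-gated screening flow. A report is only
# produced for platforms the applicant has explicitly consented to.
from dataclasses import dataclass, field

PLATFORMS = {"Instagram", "Facebook", "Twitter", "Flickr", "Medium", "Tumblr"}

@dataclass
class ConsentRequest:
    applicant_email: str       # where the consent email is sent
    requesting_company: str    # named in the email so the applicant knows who is asking
    consented_platforms: set = field(default_factory=set)

    def record_consent(self, platform: str) -> None:
        # Consent is recorded per platform; the applicant may decline any of them.
        if platform in PLATFORMS:
            self.consented_platforms.add(platform)

def run_check(request: ConsentRequest) -> dict | None:
    # Without consent for at least one platform, no report can be produced.
    if not request.consented_platforms:
        return None
    # Only the consented channels are checked; everything else is skipped.
    return {platform: "checked" for platform in request.consented_platforms}

# Example: the applicant consents to two of the six platforms.
request = ConsentRequest("applicant@example.com", "Example Academy Trust")
request.record_consent("Twitter")
request.record_consent("Instagram")
print(run_check(request))  # e.g. {'Twitter': 'checked', 'Instagram': 'checked'} (order may vary)
```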
The six main social media platforms checked are Instagram, Facebook, Twitter, Flickr, Medium and Tumblr.
Each social media platform is checked against the 10 key classifiers:
- Extremist groups
- Hate speech
- Potential nudity
- Swearing and profanity
- Toxic language
- Violent images
- Drugs
- Weapons
- Firearms
- Client keywords
The final report shows findings against each of these classifiers.
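As a rough, hypothetical illustration (continuing the Python sketch above), a report structured around these ten classifiers might look like the following; the values are invented and the real report format may differ.

```python
# Hypothetical report structure: findings per consented platform, keyed by
# the ten classifiers listed above. Values are placeholders for illustration.
CLASSIFIERS = [
    "Extremist groups", "Hate speech", "Potential nudity",
    "Swearing and profanity", "Toxic language", "Violent images",
    "Drugs", "Weapons", "Firearms", "Client keywords",
]

def empty_findings() -> dict[str, list[str]]:
    # Each classifier starts with an empty list of flagged items.
    return {classifier: [] for classifier in CLASSIFIERS}

# A report only covers the platforms the applicant consented to.
report = {platform: empty_findings() for platform in ("Twitter", "Instagram")}
report["Twitter"]["Client keywords"].append("example flagged post")
```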
Social Media Check will only check the channels that have been consented to by the individual.
The content that the system then analyses includes posts, tweets, retweets, blogs, titles, text within posts and images within posts.
Content that is not analysed includes private messages, liked posts, comments made by the user on other posts, videos, personal profile information and the accounts the user is following.
The Social Media Check solution (including hosting services and software) complies with the ISO 27001 information security standard to ensure that your data is safe. All data that is submitted, processed, controlled and reported is encrypted as standard, and we do not use your data for anything other than producing the report.
Where storage has been agreed, reports are held in UK-based data centres for 90 days before being archived in an encrypted format. This storage option is available to business customers only.