TikTok will begin using algorithms to automatically delete content that violates its user policies.
The ByteDance-owned social network is also making some changes to how it notifies users. TikTok says the new system is designed to help reduce the number of distressing videos that its human safety team must review. The moderation team will then be free to focus on areas like bullying, hate speech, and harassment.
TikTok is testing automatic deletion of content that violates its policies, such as adult nudity, sexual activity, violent content, illegal activities, and regulated goods. The company has been experimenting with auto-deleting violating content outside the U.S. for months. TikTok says the false positive rate of its algorithms is around 5%, and the volume of content-removal appeal requests has remained consistent since automation was introduced.
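To make the 5% figure concrete: a false positive rate like this is typically estimated by checking how many automated removals are later overturned. The sketch below is illustrative only; the function name and the counts are made-up assumptions, not TikTok data or any published methodology.

```python
# Illustrative sketch: estimating a moderation system's false positive rate
# from appeal outcomes. All numbers here are invented for the example.

def false_positive_rate(auto_removals: int, removals_overturned: int) -> float:
    """Fraction of automated removals later judged to be mistakes."""
    return removals_overturned / auto_removals

# Hypothetical month: 10,000 automated removals, 500 overturned on appeal.
rate = false_positive_rate(auto_removals=10_000, removals_overturned=500)
print(f"{rate:.1%}")  # → 5.0%
```

In practice this undercounts, since not every wrongly removed video is appealed; it is a lower bound on the true error rate.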
By introducing these algorithms, TikTok also aims to make its moderation process more transparent.
The company will also report how many accounts it removes because they belong to users under the age of 13. TikTok has faced stiff fines from the FTC for violating child privacy laws like COPPA. TikTok is also making some changes to how it notifies users who have uploaded content that violates its guidelines.
“People will be notified of the consequences of their violations starting in the Account Updates section of their Inbox. There, they can see a record of their accrued violations,” the TikTok blog post reads. There are several categories of violations, including video, comments, direct message, hashtag, profile, sound, and live.
How TikTok's Auto-Deletion of Content Violations Works
TikTok has also outlined how these consequences will work for repeat content violations. Upon the first violation, users will get an in-app warning unless the violation falls under a zero-tolerance policy. Zero-tolerance categories include (but are not limited to) child sexual abuse and pornographic material.
After the First Violation
Users who continue to violate TikTok’s policies can expect to see the following consequences:
- Suspension of the ability to upload videos, comment, or edit their profile for 24 to 48 hours
- Restriction of the account to a view-only experience for 72 hours to one week
- After several violations, a ban warning
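The escalation above amounts to a strike-based enforcement ladder: zero-tolerance violations skip straight to a ban, while everything else steps through warning, suspension, restriction, and a ban warning. A minimal sketch of that ladder, with category names, thresholds, and durations as illustrative assumptions (TikTok has only published the ranges quoted above, not exact rules):

```python
# Hypothetical sketch of a strike-based enforcement ladder like the one
# TikTok describes. Category names and strike thresholds are assumptions
# for illustration, not TikTok's published policy.

ZERO_TOLERANCE = {"child_sexual_abuse_material", "pornography"}

def consequence(violation_category: str, prior_strikes: int) -> str:
    """Return the enforcement action for a new violation."""
    if violation_category in ZERO_TOLERANCE:
        return "permanent ban"                    # no warning step
    if prior_strikes == 0:
        return "in-app warning"                   # first violation
    if prior_strikes == 1:
        return "feature suspension (24-48h)"      # uploads, comments, profile edits
    if prior_strikes == 2:
        return "view-only restriction (72h-1wk)"
    return "ban warning"                          # after several violations

print(consequence("spam", 0))         # → in-app warning
print(consequence("pornography", 0))  # → permanent ban
```

The design choice worth noting is that the zero-tolerance check runs before any strike counting, so a first-time offender in those categories is banned outright.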
“We developed these systems with input from our US Content Advisory Council, and in testing them in the US and Canada over the last few weeks, over 60% of people who received a first warning for violating our guidelines did not have a second violation,” TikTok claims.
I must say, in my experience this is all a false narrative meant to imply that TikTok employees are actually doing a good job. There are so many issues happening and no resolutions. All this is an excuse to add more censorship of those the company wants to keep shadowing, while allowing more toxic voices to roam free and create more distraction and division.
I especially know this because since May 17, 2021 my account has been permanently banned three times, only to get it back with an "oops," and yet I have had to remove all my videos due to mass reporting and still cannot post a video. WHY?
Because as a person involved with the real anti-bullying problems that go unnoticed, I am one of the main targets. On Monday I had my thirteenth permanent ban because of jerks who are allowed to abuse the system. Nothing is done and I lose more videos, over 100 so far. "Stop bullying"? Not even close; TikTok is a main generator for bullies.
@electricfireballz look me up and see my story.
However, I'm pretty sure you are well aware of these issues.