
TikTok Shares Data on EU Enforcement Actions in H2 2024


TikTok has published its latest Transparency Report, as required under the EU Code of Practice, which outlines all of the enforcement actions it undertook within EU member states over the last six months of last year.

And there are some interesting notes regarding the impact of content labeling, the rise of AI-generated or manipulated media, foreign influence operations, and more.

You can download TikTok’s full H2 2024 Transparency Report here (warning: it’s 329 pages long), but in this post, we’ll take a look at some of the key notes.

First off, TikTok reports that it removed 36,740 political ads in the second half of 2024, in line with its policies against political advertising in the app.

Political ads are not permitted on TikTok, though as that number would suggest, this hasn’t stopped many political groups from seeking to use the reach of the app to spread their messaging.

That highlights both the growing influence of TikTok more broadly, and the ongoing need for vigilance in managing potential misuse by these groups.

TikTok also removed almost 10 million fake accounts in the period, as well as 460 million fake likes that had been allotted by these profiles. These could have been used as a means to manipulate content ranking, and the removal of this activity helps to ensure authentic interactions in the app.

Well, “authentic” in the sense of engagement coming from real, actual people. It can’t do much about you liking your friend’s crappy post because you’ll feel bad if you don’t.

In terms of AI content, TikTok also notes that it removed 51,618 videos in the period for violations of its AI-generated content rules.

“In the second half of 2024, we continued to invest in our work to moderate and provide transparency around AI-generated content, by becoming the first platform to begin implementing C2PA Content Credentials, a technology that helps us identify and automatically label AIGC from other platforms. We also tightened our policies prohibiting harmfully misleading AIGC and joined forces with our peers on a pact to safeguard elections from deceptive AI.”

Meta recently reported that AI-generated content wasn’t a major factor in its election integrity efforts last year, with ratings on AI content related to elections, politics, and social topics representing less than 1% of all fact-checked misinformation. Which, on balance, is probably close to what TikTok saw as well, though at such massive scale, even that 1% still represents a lot of AI-generated content being assessed and rejected by these apps.

This figure from TikTok puts that in some perspective, while Meta also reported that it rejected 590k requests to generate images of U.S. political candidates within its generative AI tools in the month leading up to election day.

So while AI content hasn’t been a major factor as yet, more people are at least trying it, and you only need a few of these hoax images and/or videos to catch on to make an impact.

TikTok also shared insights into its third-party fact-checking efforts:

“TikTok recognizes the vital contribution of our fact-checking partners in the fight against disinformation. In H2 we onboarded two new fact-checking partners and expanded our fact-checking coverage to a number of wider-European and EU candidate countries with existing fact-checking partners. We now work closely with 14 IFCN-accredited fact-checking organizations across the EU, EEA and wider Europe who have the technical training, resources, and industry-wide insights to impartially assess online misinformation.”

Which is interesting in the context of Meta moving away from third-party fact-checking, in favor of crowd-sourced Community Notes to counter misinformation.

TikTok also notes that content shares were reduced by 32%, on average, among EU users when an “unverified claim” notification was displayed to indicate that the information presented in the clip may not be true.

In fairness, Meta has also shared data which suggests that the display of Community Notes on posts can reduce the spread of misleading claims by 60%. That’s not a direct comparison to this stat from TikTok (TikTok’s measuring total shares by count, while the study looked at overall distribution), but it could be around about the same result.

Though the problem with Community Notes is that most are never displayed to users, because they don’t achieve cross-political consensus among raters. As such, TikTok’s stat here actually does indicate that there’s value in third-party fact checks, and/or “unverified claim” notifications, in order to reduce the spread of potentially misleading claims.

For additional context, TikTok also reports that it sent 6k videos uploaded by EU users to third-party fact-checkers during the period.

That points to another challenge with third-party fact-checking: it’s very difficult to scale this process, meaning that only a tiny amount of content can actually be reviewed.

There’s no definitive right answer, but the data here does suggest that there’s at least some value in maintaining an impartial third-party fact-checking presence to monitor some of the most harmful claims.

There’s a heap more in TikTok’s full report (again, over 300 pages), including a range of insights into EU-specific initiatives and enforcement programs.
