
By Adriana Zdanowicz-Leśniak, LL.M., advocate, senior associate, and Agnieszka Górecka, advocate, associate, from CMS

A quiet revolution has recently broken out on the internet. The changes are significant for us as internet users, but they are real game changers for the providers of online platforms, especially the large ones. This is the result of EU Regulation 2022/2065, known as the Digital Services Act or DSA for short. The DSA is another milestone for cyber giants like Meta (Facebook, Instagram) or ByteDance (TikTok) and other large online platforms, introducing a revolution just as significant as the one the GDPR brought about.

To give some idea of what the DSA is about, in this article we will focus on one of its aspects: social-media content recommendations. Let us start with a question: do we ever wonder why social media shows us posts and reels specifically tailored to our interests or to profiles we have previously visited? How do these platforms know us so well? Most of us probably do not care how this happens, or we lack basic information about it. Even if we were to see some brief information about algorithms, profiling and so on, it would not explain much. This is where the DSA comes in, proposing transparent rules for content recommendations and giving us the chance to use a so-called ‘AI switch-off’. We will now briefly discuss what this means in practice.

One of the key changes in the DSA concerns the transparency of recommendations. In essence, platform providers will have to ‘speak openly’ about the main parameters used in their content recommendation systems, about how those parameters can be changed or even turned off, and about the criteria that make some information more relevant and therefore displayed in feeds ahead of other information.

The DSA also obliges very large online platforms (‘VLOPs’) – 19 designated platforms, such as Facebook or Snapchat – to offer their users, for each of their recommendation systems, at least one option that is not based on profiling: simply put, the automated evaluation of our online habits and behaviours. It is up to the VLOPs to decide what that alternative looks like. Typically, these are more old-fashioned methods, such as showing users their friends’ posts in chronological order – from newest to oldest – rather than in a customised order based on, for example, the profiles the users have visited most often.

We can already see the first effects of the DSA among VLOPs, including Meta’s official statement: “We’re now giving our European community the option to view and discover content on Reels, Stories, Search and other parts of Facebook and Instagram that is not ranked by Meta using these [AI recommender] systems. For example, on Facebook and Instagram, users will have the option to view Stories and Reels only from people they follow, ranked in chronological order, newest to oldest. They will also be able to view Search results based only on the words they enter, rather than personalised specifically to them based on their previous activity and personal interests.”

For example, users can already see the announced feature on Instagram, where they can switch to a simple feed showing their friends’ activity in chronological order, without the suggested ‘For You’ posts chosen by algorithms. It is striking how plain content looks when it has not been filtered by AI beforehand; at the same time, the fact that users cannot set a non-personalised feed as their default still leaves Meta room for improvement.

Why is the EU forcing VLOPs to let users turn off the default recommendation system based on profiling? One reason is that the algorithms are designed to evoke emotions in users and engage them in the content they see, all so that they spend as much time as possible on the platform. As a result, users are bombarded with low-value content: in the social-media business model, the most clickable and engaging post always wins. Published content can even be harmful to people (causing anxiety), trap them in a filter bubble or spread disinformation, the best-known example being the Cambridge Analytica scandal. The algorithms that select ads and other content are always a few steps ahead of users: they use the digital traces they have collected to match users with the posts they see. Platforms often present tailored offers to users even when those users have never disclosed their fears or health problems.

Despite the obvious threats and the new ‘safe’ features, Meta hopes to convince users not to turn off AI by doubling down on transparency measures and promising ‘unprecedented insight into how our AI systems classify content’. This insight is to be provided by the release of 22 ‘system cards’, which explain how the AI systems rank content for Feed, Reels and Stories. In fact, this is simply the next step towards compliance, as the DSA requires platforms to explain how their systems work: on what basis content is selected, what the essential criteria are (e.g. personal data) in the selection of recommendations, and why a given parameter is important. In their terms and conditions, platforms should also set out all the options users have to change recommendations and to influence those options.

Recommendation transparency is just the tip of the regulatory iceberg. The DSA – together with its sister regulation, the Digital Markets Act (Regulation 2022/1925), which targets the largest platforms – imposes far more obligations on platforms than simply ensuring users can opt out of personalisation.

The DSA also deserves credit for prohibiting advertising aimed at children based on profiling, as well as advertising based on profiling using sensitive data such as health, political opinions or religion. Facebook and TikTok promise that users aged between 13 and 17 will no longer see personalised ads, and Snapchat limits personalised ads for teenagers under 18. Targeting ads at people who suffer from a particular condition or who favour a political party (at least until there is specific regulation in this area) is also banned under the DSA. Advertising based on tracking will therefore only be possible using non-sensitive data.

It remains to be seen what position online platform providers, especially VLOPs, will take on implementing all the DSA requirements: whether they will ignore the DSA risks associated with their business model or adapt to them. Under the DSA, they must regularly assess the systemic risks associated with their services, for example the risk of spreading illegal content or the potential impact of their activities on electoral processes. A clear motivating factor is the level of fines under the DSA, which can amount to as much as 6% of total annual worldwide turnover.

The changes discussed in this article are only part of a bigger picture. The EU is a leader in regulating the cyberworld, and the world is watching to see whether the DSA will work or whether it is just a utopian project to repair cyber-reality. Undoubtedly, the greatest factor in the DSA’s success will be how companies approach this new EU Regulation and, in the long term, users’ awareness of the legal rights that protect the way they spend their free time surfing the internet.