Expanding the AI Act's Prohibition on Social Scoring to Private Businesses May Negatively Impact Consumers
The European Union (EU) AI Act, whose prohibitions took effect in February 2025, bans outright AI systems used for social scoring, classifying them as posing an unacceptable risk to human dignity and fundamental rights. However, whether and how this ban should apply specifically to the private sector remains a topic of discussion.
As it stands, the AI Act applies broadly to any entity placing AI systems on the EU market or putting them into service, covering both public and private sector actors. Social scoring falls into the "unacceptable risk" category of AI systems, which the Act outlaws entirely.
Proponents of a ban on private social scoring argue that private companies could use such scores to unfairly discriminate against individuals. Recital 17 of the latest version of the AI Act states that AI systems providing public or private social scoring may violate the right to dignity and non-discrimination and the values of equality and justice.
On the other hand, critics suggest that a blanket ban may stifle legitimate private sector AI applications and innovation that might use social scoring techniques responsibly. They argue for a nuanced approach, with private sector use subject to strict transparency, accountability, and human oversight to allow beneficial use cases while mitigating risks.
The concern for innovation and competition is not unfounded. For instance, companies like Match Group, a conglomerate of dating brands including Tinder, impose cross-brand bans on users for behaviour unrelated to dating. Similarly, rating and scoring systems protect consumers against misleading advertising and better inform choices in ride-hailing, accommodation hosting, and food delivery.
Moreover, some argue that a blanket AI Act ban on social scoring practices that are otherwise regulated makes little sense. "Social currency" encourages prosocial behaviour online, particularly in games, where reputation systems incentivize good conduct. Twitch, a streaming platform, bans users who commit offline offences, demonstrating that social scoring can be deployed responsibly.
There are therefore concerns that extending the ban on social scoring in the private sector may negatively impact businesses and ultimately hurt consumers by limiting their choices and potentially degrading the quality of services. As of July 2025, no official update has been made public proposing an explicit or new extension of the ban specifically focused on the private sector beyond its existing broad application.
The debate around such extensions involves balancing human rights protections against innovation and regulatory impacts. As the EU continues to refine its AI regulations, it will be crucial to find a balance that ensures protection of fundamental rights while fostering innovation and competition in the private sector.
- The AI Act, whose prohibitions took effect in February 2025, bars any entity, public or private, from using AI systems for social scoring, classifying them as posing an unacceptable risk to human dignity and fundamental rights.
- Proponents of a private sector ban on social scoring argue that it could prevent unfair discrimination against individuals, as AI systems providing social scoring may violate the right to dignity and non-discrimination.
- Critics, however, contend that a blanket ban could stifle legitimate private sector AI applications and innovation, suggesting a nuanced approach with private sector use subject to strict transparency, accountability, and human oversight.
- As of July 2025, there has been no official update proposing an explicit extension of the ban specifically focused on the private sector, leaving the balance between human rights protections, innovation, and regulatory impacts under debate in the EU.