Mass surveillance through artificial intelligence at the Paris Olympics – a legal scholar on the boon for security and the nightmare for privacy

The 2024 Paris Olympic Games will capture the world's attention as thousands of athletes and support personnel, along with hundreds of thousands of visitors from around the world, converge on France. But the eyes of the world will not be the only ones watching. Artificial intelligence systems will be watching, too.

Governments and private companies will use advanced AI tools and other surveillance technologies to conduct pervasive and persistent surveillance before, during and after the Games. The Olympic world stage and international crowds pose such a significant security risk that in recent years authorities and critics have dubbed the Olympic Games the "world's largest security operations outside of war."

The French government, hand in hand with the private technology sector, has used this legitimate need for heightened security as a rationale for deploying technologically advanced surveillance and data collection tools. Its surveillance plans to address these risks, including the controversial use of experimental AI video surveillance, are so extensive that the country had to change its laws to make the planned surveillance legal.

The plan goes beyond new AI video surveillance systems. According to news reports, the prime minister's office has issued a provisional decree that permits the government to significantly expand traditional, covert surveillance and intelligence-gathering tools for the duration of the Games, including wiretapping telephone conversations; collecting geolocation, communications and computer data; and capturing large amounts of image and audio data.

French President Emmanuel Macron inspects surveillance cameras during preparations for the Olympic Games in Paris.
Christophe Petit Tesson/AFP via Getty Images

I'm a law professor and attorney, and I research, teach and write about privacy, artificial intelligence and surveillance. I also provide legal and policy guidance on these issues to legislators and others. Increased security risks can and do warrant increased surveillance. This year, France has faced heightened concerns about its Olympic security capabilities and credible threats surrounding public sporting events.

However, preventive measures must be proportionate to the risks. Critics around the world argue that France is using the Olympic Games as a surveillance power grab and that the government will use this "exceptional" justification to normalize state surveillance throughout society.

At the same time, there are legitimate concerns about adequate and effective surveillance for security purposes. In the United States, for example, the country is asking how Secret Service security monitoring failed to prevent an assassination attempt on former President Donald Trump on July 13, 2024.

AI-supported mass surveillance

Thanks to recently expanded surveillance laws, French authorities have been able to collaborate with the AI companies Videtics, Orange Business, ChapsVision and Wintics to deploy extensive AI video surveillance. They have used AI surveillance at major concerts, sporting events and in subway and train stations during periods of heavy use, including around a Taylor Swift concert and the Cannes Film Festival. French officials said these AI surveillance experiments went well and that the lights are green for future uses.

The AI software in use is generally designed to flag certain events, such as changes in crowd size and movement, abandoned objects, the presence or use of weapons, a body on the ground, smoke or flames, and certain traffic violations. The goal is for the surveillance systems to immediately detect, in real time, events such as a crowd surging toward a gate or a person leaving a backpack on a busy street corner, and to alert security personnel. Flagging these events seems like a logical and sensible use of technology.

But the real privacy and legal questions arise from how these systems work and are used. How much and what kinds of data must be collected and analyzed to flag these events? What are the systems' training data, error rates, and signs of bias or inaccuracy? What happens to the data after it is collected, and who has access to it? There is little transparency to answer these questions. Despite safeguards designed to prevent the use of biometric data that can identify individuals, it is possible that the training data captures this information and the systems could be adapted to use it.

By giving these private companies access to the thousands of video cameras already installed throughout France, coordinating and harnessing the surveillance capabilities of railway and transit companies, and permitting the use of drones with cameras, France is legally enabling and supporting these companies to test and train AI software on its citizens and visitors.

Legalized mass surveillance

Both the need for and the practice of state surveillance at the Olympic Games are nothing new. Security and privacy concerns at the 2022 Winter Olympics in Beijing were so great that the FBI urged "all athletes" to leave personal cellphones at home and use only a burner phone because of the extreme government surveillance in China.

However, France is a member state of the European Union. The EU's General Data Protection Regulation is one of the strictest data privacy laws in the world, and the EU AI Act is leading efforts to regulate harmful uses of AI technologies. As an EU member, France must comply with EU law.

France has legally paved the way for expanded use of artificial intelligence in the surveillance of public places.

In preparation for the 2024 Olympic Games, France in 2023 adopted Law No. 2023-380, a legislative package providing a legal framework for the 2024 Games. It includes the controversial Article 7, a provision allowing French law enforcement and its technology contractors to experiment with intelligent video surveillance before, during and after the 2024 Games, as well as Article 10, which specifically permits the use of AI software to review video and camera feeds. These laws make France the first EU country to legalize such wide-reaching AI-powered surveillance.

Scholars, civil society groups and civil liberties advocates have pointed out that these articles conflict with the General Data Protection Regulation and EU efforts to regulate AI. They argue that Article 7 in particular violates the General Data Protection Regulation's provisions protecting biometric data.

French officials and representatives of technology companies have said the AI software can achieve its goals of identifying and flagging these specific types of events without identifying individuals or running afoul of the General Data Protection Regulation's restrictions on processing biometric data. But European civil liberties organizations have pointed out that if the purpose and function of the algorithms and AI-driven cameras are to detect specific suspicious events in public spaces, these systems will necessarily "capture and analyze physiological features and behaviors" of people in those spaces. This includes body positions, gait, movements, gestures and appearance. Critics argue that this is biometric data being collected and processed, and that French law therefore violates the General Data Protection Regulation.

AI-powered security – at a cost

For the French government and AI companies, AI surveillance has so far been a mutually beneficial success. The algorithmic watchers are being used more and provide governments and their technology collaborators with far more data than humans alone could.

But these AI-powered surveillance systems are subject to little regulation and little independent testing. Once the data is collected, there is great potential for further data analysis and privacy invasions.

Image credit: theconversation.com