
News + Trends
Because of AI: UK police based controversial football fan ban on false information from Copilot
by Michelle Brändle

In the UK, a safety report by West Midlands Police has come under heavy criticism. It has now come to light that it was based on false information from Microsoft's AI chatbot Copilot.
The United Kingdom is currently providing a perfect example of why you shouldn't accept information from an AI without checking it. The case of a UEFA Europa League football match in Birmingham on 6 November is making waves there: back in October, the police force responsible announced that no away fans would be admitted to the match between English club Aston Villa and Israeli club Maccabi Tel Aviv.
The decision was based on an internal report by West Midlands Police and Birmingham City Council, which classified the fixture as high-risk. It has now emerged that Microsoft's chatbot Copilot had been used for the risk analysis - but had provided incorrect information. Among other things, Copilot cited riots at an earlier match that never actually took place.
Before that, those responsible, above all West Midlands Police Chief Constable Craig Guildford, had faced weeks of criticism for excluding the Israeli fans. Because accusations of anti-Semitism were raised, the case was also widely discussed in British politics from the outset: Guildford and other officials had to answer questions in parliament on several occasions.
Guildford had previously denied that AI was used for the report and instead cited a faulty Google search. In December, his authority even stated that the internal use of AI tools was not permitted. Now Guildford has admitted Copilot's involvement and apologised for the past misinformation. High-ranking politicians are calling for his resignation.
The official statement on the exclusion of away fans refers to secret reports and to violent riots by Maccabi fans at a match in Amsterdam in 2024. The Dutch police, however, contradict this account in a letter.
The secret reports apparently contain references to an earlier match between Maccabi Tel Aviv and West Ham United - a match that never took place. How a fictitious fixture could end up in an official risk analysis, and why such information was not checked, has since become the focus of the investigation into the case.
Beneath the chatbot's input field, a notice reads: «Copilot is an AI and can make mistakes». The incident drastically illustrates the risks of modern language models. AI chatbots generate their answers on the basis of probabilities, not because they understand the context. Current studies - such as this - show that large language models can be systematically misled.
Copilot could have combined patterns from earlier matches or news reports and constructed a fictitious scenario from them. This phenomenon of inventing plausible-sounding but false statements is known as AI hallucination.
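To show in very simplified terms how something like this can happen, here is a small Python sketch. The candidate teams and probabilities are invented for illustration and have nothing to do with Copilot's actual model; the point is merely that the next word is chosen according to likelihood, not truth.

```python
# Purely illustrative sketch, not Copilot's actual implementation.
# A language model picks the next word by sampling from a probability
# distribution over candidates; plausibility, not truth, drives the choice.
import random

random.seed(1)

# Invented probabilities for words that could follow the prompt
# "Maccabi Tel Aviv previously played against ...":
candidates = {
    "Ajax": 0.45,             # the Amsterdam match in 2024 actually took place
    "West Ham United": 0.35,  # plausible-sounding, but this match never happened
    "Aston Villa": 0.20,
}

def sample_next_word(probabilities):
    """Draw one candidate word according to its weight."""
    words = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(words, weights=weights, k=1)[0]

# Repeated runs will eventually produce the fictitious pairing,
# simply because it looks statistically likely enough.
for _ in range(5):
    print("Maccabi Tel Aviv previously played against", sample_next_word(candidates))
```

In this toy setup, the made-up pairing with West Ham United gets generated sooner or later purely because it is assigned a high probability - which is exactly how a hallucinated match can end up in an answer.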
For use cases involving sensitive data or far-reaching consequences, it is therefore essential to verify the generated text for accuracy.
Feels just as comfortable in front of a gaming PC as she does in a hammock in the garden. Likes the Roman Empire, container ships and science fiction books. Focuses mostly on unearthing news stories about IT and smart products.