
Anthropic’s risking its positive PR: could Claude sneak in spyware?

Debora Pape
22.4.2026
Translation: Patrik Stainbrook

Anthropic, an AI company known for its strict ethics, is facing heavy criticism after a privacy expert discovered that the Claude desktop app silently installs files that could serve as spyware.

Anthropic, the U.S. firm behind AI model Claude, goes to great lengths to be seen as a responsible company with strong principles. This year alone, Anthropic made headlines on multiple occasions with AI-related choices and statements that reinforced the company’s reputation as «the good guys.» But now data protection consultant Alexander Hanff has made a discovery that casts Anthropic in a different light.

In a blog post, Hanff describes how he stumbled upon a file on his MacBook that had been installed along with the Claude desktop app. It could’ve given Anthropic access to sensitive data and privileges on his system – he’d neither consented to its installation nor was he aware of it. Hanff describes it as spyware, even referring to a possibly illegal backdoor.

Software bridge enables Claude to act without consent

The file is a Native Messaging Manifest, a type of software bridge. It allows the Claude for Chrome browser extension to launch an executable called chrome-native-host outside the browser. That process runs with the same privileges as the logged-in user.
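
A native messaging manifest is a small JSON file whose structure is defined by Chromium. The Python sketch below shows its typical fields; the concrete values are illustrative placeholders, not the contents of the file Hanff found.

    import json

    # Illustrative structure of a Chromium Native Messaging Host manifest.
    # The field names ("name", "path", "type", "allowed_origins") are fixed
    # by Chromium's native messaging API; every value below is a hypothetical
    # placeholder, not Anthropic's actual configuration.
    manifest = {
        "name": "com.example.claude_host",    # hypothetical host identifier
        "description": "Bridge between browser extension and desktop app",
        "path": "/Applications/Claude.app/Contents/MacOS/chrome-native-host",  # assumed path
        "type": "stdio",                      # messages travel over stdin/stdout
        "allowed_origins": [
            "chrome-extension://<extension-id>/"  # only this extension may start the host
        ],
    }

    print(json.dumps(manifest, indent=2))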

By default, this bridge is inactive. However, it’s preconfigured so it can be activated at any time, as soon as certain conditions are met. Once the program is executed, Claude gains extensive agent capabilities: the AI can access websites where the user is logged in, extract data from websites and store it locally, record animated GIFs of user interactions in the browser, and execute workflows across multiple websites – all without users having explicitly consented to the underlying mechanism.

Communication between the browser and chrome-native-host takes place over standard input and output streams, so it never triggers the usual macOS permission dialogues. The file, installed without the user's permission, was digitally signed by Anthropic and verified through Apple's notarisation process – meaning it passed Apple's automated security check.
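
Those «standard streams» are the stdin/stdout channel Chromium uses for native messaging: each message is a four-byte length prefix followed by UTF-8 JSON. The sketch below illustrates that framing as a generic example of the protocol – it is not Anthropic's actual host code.

    import json
    import struct
    import sys

    # Minimal sketch of Chromium's native messaging wire format: every message
    # is a 4-byte length prefix (native byte order, little-endian on current
    # Macs) followed by UTF-8 JSON, exchanged over the host's stdin/stdout.
    # Because this is ordinary inter-process I/O, no macOS dialogue appears.

    def read_message(stream=sys.stdin.buffer):
        raw_length = stream.read(4)
        if len(raw_length) < 4:
            return None                      # browser closed the pipe
        (length,) = struct.unpack("=I", raw_length)
        return json.loads(stream.read(length).decode("utf-8"))

    def send_message(payload, stream=sys.stdout.buffer):
        data = json.dumps(payload).encode("utf-8")
        stream.write(struct.pack("=I", len(data)))
        stream.write(data)
        stream.flush()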

Only Chromium-based browsers affected

The Claude app saves this file in the application directories of seven Chromium-based browsers: Google Chrome, Microsoft Edge, Vivaldi, Brave, Arc, Chromium and Opera. And that’s not all: these browsers don’t even need to be installed on your system. The app will simply create the necessary directories on its own. If the user installs one of these browsers in the future, the connection for Claude will already be in place.
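
Anyone who wants to check their own Mac can look inside each browser's per-user profile folder for a «NativeMessagingHosts» directory. The Python sketch below simply lists every manifest it finds there; the profile folder names follow the browsers' usual locations and may vary by version – Arc and Opera in particular are assumptions.

    from pathlib import Path

    # Rough self-check: list native messaging manifests in the per-user
    # profile directories of Chromium-based browsers on macOS. Folder names
    # are the browsers' customary locations and may differ on your system.
    PROFILE_DIRS = [
        "Google/Chrome",
        "Microsoft Edge",
        "Vivaldi",
        "BraveSoftware/Brave-Browser",
        "Arc/User Data",
        "Chromium",
        "com.operasoftware.Opera",
    ]

    support = Path.home() / "Library" / "Application Support"
    for profile in PROFILE_DIRS:
        host_dir = support / profile / "NativeMessagingHosts"
        if not host_dir.is_dir():
            continue
        for manifest in sorted(host_dir.glob("*.json")):
            print(manifest)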

Browsers that aren’t based on Chromium, such as Firefox and Safari, aren’t affected. Hanff conducted his research exclusively on Mac systems. He doesn’t specify whether Windows computers are also affected.

According to Hanff, the bridge violates data protection laws, particularly the ePrivacy Directive, and could even be a matter for criminal prosecution. Other data protection experts doubt this, but confirm Hanff’s findings. Anthropic hasn’t yet commented on the matter.

Why has this happened?

Backdoors and spyware are completely at odds with Anthropic’s image. And really, the company has no reason to risk its reputation. The undisclosed installation of this bridge file is most likely the result of negligence or a lack of communication.

Anthropic may have been preparing for planned agent features down the line and, in doing so, failed to ask users for their consent. An opt-in was probably intended to follow at some point. It’s possible Anthropic’s development and privacy teams didn’t coordinate sufficiently – or at all. The problem with this program isn’t the features it offers, but the lack of transparency. The correct approach would have been to ask users for consent first and only then install the software.

It’s also possible that the mechanism was initially intended only for internal test versions (builds) and was accidentally left in place for the official release. In addition, U.S. development teams consistently underestimate the EU’s strict data protection standards, under which preemptive installations without consent are a major no-go.

What kind of a company is Anthropic?

Anthropic was founded in 2021 by former employees of OpenAI, the company behind ChatGPT. They’d left OpenAI because the company was becoming too commercial (article in German) and too lax in its approach to risk. Anthropic was intended to pursue an alternative vision, emphasising AI as a useful, harmless tool that makes the world a better place. Headlines from recent weeks reinforce the impression that Anthropic is a responsible company with principles – one that, unlike some of its competitors, doesn’t make decisions based solely on financial considerations.

  • News + Trends

    No more ad-free AI chats: ChatGPT now displays adverts in the USA

    by Debora Pape

In January, for example, Anthropic CEO Dario Amodei warned about the dangers of AI on his blog. In February, Anthropic became embroiled in a high-profile dispute with the U.S. government after it declined an exclusive partnership with the military. The reason: its AI could be used for mass surveillance and automated military strikes (article in German). According to Anthropic, AI isn’t ready for this. The contract was ultimately awarded to competitor OpenAI (article in German).

Just a few days ago, Anthropic caused a stir (article in German) by revealing it didn’t intend to make its newly developed AI, Claude Mythos, available to the public. Apparently, Mythos is extremely reliable at detecting dangerous vulnerabilities in IT systems (article in German). This is good news for security experts, but in the wrong hands, Mythos could become a dangerous tool. Recent headlines also reveal that Anthropic can’t guarantee with 100 per cent certainty who will have access to Mythos.

Header image: Shutterstock/Photo For Everything


