Unauthorized users have gained access to Anthropic Mythos, an advanced AI model that Anthropic describes as highly powerful and potentially risky if misused. The group claims to have had access to the tool since it was first announced. The incident comes as technology firms work to secure such systems before releasing them more widely.
Anthropic has not made Mythos available to the public. Instead, it distributed the model to select software providers through Project Glasswing. This initiative aims to help these firms test and strengthen their defenses against cyberattacks. According to a Bloomberg report, the group accessed Mythos through a third party, using tactics that included leveraging the account of a contractor working for Anthropic.
Anthropic confirmed it is aware of unauthorized access to the Claude Mythos Preview. The company is investigating the incident and stated it has found no evidence that the group accessed systems beyond the third-party vendor. In a statement to Bloomberg, an Anthropic spokesperson said, “We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments.”
When announcing Mythos, Anthropic highlighted the model's ability, when directed by a user, to identify and exploit vulnerabilities in every major operating system and web browser. The unauthorized users reportedly belong to a private Discord channel focused on uncovering information about unreleased AI models. Bloomberg reported that the group has used Mythos regularly since gaining access, though not for cybersecurity-related prompts.
To reach Mythos, the group inferred where the model was hosted by analyzing the naming format Anthropic used for its other models. According to Bloomberg, a group member said their interest lies in exploring new models rather than causing harm. The group also has access to other unreleased Anthropic AI models.
This unauthorized access raises concerns about the ability of AI companies to prevent their most advanced technology from spreading beyond trusted partners. It also prompts questions about whether other unauthorized users may have gained access to Mythos and how the tool might be used.
Unauthorized Users Gain Access to Anthropic Mythos AI Model
A small group of unauthorized users accessed Anthropic's Mythos AI model through a third-party contractor. Anthropic is investigating the breach, which highlights challenges in securing advanced AI tools before wider release.
21-Apr-2026 10:30 PM
