Goldman Sachs Blocks Anthropic AI Use in Hong Kong Amid US-China AI Tensions

Updated on: 29-Apr-2026 09:00 AM



Image: Goldman Sachs bars Hong Kong bankers from using Anthropic's Claude AI over contractual and geopolitical concerns; access to OpenAI and Gemini remains unaffected.

The artificial intelligence rivalry between the United States and China is escalating, with both governments taking countermeasures. Goldman Sachs, a major US investment bank, recently stopped its bankers in Hong Kong from using Anthropic's AI models. The decision comes as US AI models, including ChatGPT and Claude, remain blocked in mainland China by the Great Firewall.

Key Highlights

  • Goldman Sachs blocks Anthropic AI model Claude for Hong Kong bankers.
  • US AI models remain banned in mainland China under the Great Firewall.
  • Anthropic accuses Chinese firms of using fake accounts to harvest AI data.
  • China prevents Manus AI from being sold to Meta amid AI sector rivalry.

Goldman Sachs Restricts Anthropic AI Access

Goldman Sachs employees in Hong Kong lost access to Anthropic's Claude AI model a few weeks ago, the Financial Times reported, citing sources familiar with the situation. The restriction covers both direct use of Claude and internal AI tools built on it. The move was not prompted by Chinese government action or pressure; rather, Goldman Sachs reviewed its contract with Anthropic, consulted with the company, and decided to block access. The restriction does not extend to other AI providers, such as OpenAI.

An Anthropic spokesperson stated that its products were never officially supported in Hong Kong. Despite this, Hong Kong has long operated outside the censorship and restrictions found in mainland China. The city serves as a key financial hub in Greater China, where global banks handle cross-border deals, trading, mergers, and share sales.

AI Model Security and International Concerns

American AI companies have accused Chinese firms of using their models to train local AI systems at lower costs. This process, known as AI distillation, involves copying and training on outputs from foreign AI models. Anthropic recently alleged that Chinese companies DeepSeek, Moonshot AI, and MiniMax generated over 16 million conversations with its Claude chatbot. They reportedly used more than 24,000 fake accounts to collect data and train their own models.

Other US companies, including OpenAI and Google, have raised similar concerns. OpenAI told US lawmakers in February that it caught DeepSeek trying to copy its advanced AI models. The company warned that such actions could bypass years of expensive AI research. OpenAI also noted that Chinese firms are developing new methods to hide these activities.

China's AI Protection Measures

China is taking steps to protect its own AI companies. Recently, the country barred Manus AI, a startup known for its AI agent technology, from being sold to Meta, Mark Zuckerberg’s company. Chinese authorities do not want high-performing AI systems created by local talent to end up in American hands. This move comes as the US and China continue to compete for dominance in the AI sector.
