Goldman Sachs has restricted access to Anthropic’s AI tools for its bankers in Hong Kong, according to sources familiar with the situation. The restriction affects employees who previously used Anthropic’s Claude model through the bank’s internal artificial intelligence platform.
The change reportedly took effect in recent weeks: staff in Hong Kong can no longer access Claude, which they had previously used for internal tasks. Other major AI tools, such as ChatGPT and Google’s Gemini, remain available on the bank’s internal platform.
The decision marks a shift in how global banks are managing artificial intelligence tools across different regions. Hong Kong has traditionally had more flexibility in using US-based AI systems compared to mainland China, where many such tools are restricted. But companies themselves often set internal limits based on contracts and compliance rules.
Sources say the ban followed a review of the bank’s agreement with Anthropic. After consulting with Anthropic, Goldman Sachs concluded that employees in Hong Kong should not use the firm’s products under the current contract terms. The decision was described as a strict interpretation of usage rules rather than a wider regulatory order.
Anthropic’s Claude models are developed in the United States and are widely used in business applications. However, the company has said its tools are not officially supported in Hong Kong, even though they were previously accessible through the bank’s internal systems.
Goldman Sachs has not publicly commented on the change, and Anthropic did not respond to media requests regarding the restriction. Neither the Hong Kong Monetary Authority nor local government bodies have issued statements on the matter.
The move highlights growing caution among major financial institutions when adopting AI systems. Banks are increasingly reviewing legal, security, and compliance risks linked to generative AI tools. This is especially important in regions that sit between Western tech ecosystems and Asian regulatory environments.
Despite the restriction on Anthropic, Goldman Sachs continues to use other AI models internally. ChatGPT and Gemini remain active tools for employees, suggesting the bank is not reducing AI use overall but rather adjusting its vendor relationships.
AI systems are increasingly being used in banking for tasks such as data analysis, document processing, and internal automation. Goldman Sachs has previously stated that it is exploring AI agents to improve efficiency and reduce manual work across departments.
In earlier comments, the bank’s Chief Information Officer said Goldman Sachs was working with Anthropic to develop AI-based systems for internal use. These tools were expected to automate several business functions and improve productivity across teams.
The latest restriction shows that even as banks expand AI adoption, they are also becoming more selective about providers. Contract terms, regional rules, and internal compliance standards are now playing a major role in deciding which tools are allowed.
The restriction also reflects broader uncertainty in the global AI industry. Companies are still defining how AI systems should be deployed across different countries and legal environments, a challenge that is especially complex for multinational banks operating in multiple regulatory zones.
Hong Kong remains an important financial hub, with strong links to global markets. However, its position between Western tech policies and Asian regulatory systems often creates unique compliance challenges for international firms.
Industry experts say such restrictions are likely to increase as AI becomes more deeply integrated into financial services. Banks will continue balancing innovation with risk management and legal obligations.
For now, Goldman Sachs appears to be maintaining a mixed AI environment. Some tools remain active, while others are restricted based on contract interpretation and internal policy decisions.
The Goldman Sachs ban on Anthropic’s tools may signal a more cautious phase in AI adoption among major financial institutions, where access is no longer just a question of technical capability but also of legal clarity and operational control.