Anthropic
Anthropic PBC is an American artificial intelligence (AI) startup company founded in 2021. Anthropic has developed a family of large language models (LLMs) named Claude as a competitor to OpenAI's ChatGPT and Google's Gemini.[5] According to the company, it researches and develops AI to "study their safety properties at the technological frontier" and use this research to deploy safe models for the public.[6][7]
| Anthropic PBC | |
| --- | --- |
| Company type | Private |
| Industry | Artificial intelligence |
| Founded | 2021 |
| Founders | Daniela Amodei, Dario Amodei, and five other former OpenAI employees |
| Headquarters | San Francisco, California, U.S. |
| Products | Claude |
| Number of employees | c. 800 (2024)[4] |
| Website | anthropic.com |
Anthropic was founded by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei.[8] In September 2023, Amazon announced an investment of up to $4 billion, followed the next month by a $2 billion commitment from Google.[9][10][11]
History
Founding and early development (2021–2022)
Anthropic was founded in 2021 by seven former employees of OpenAI, including siblings Daniela Amodei and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research.[12][13]
In April 2022, Anthropic announced it had received $580 million in funding,[14] including a $500 million investment from FTX under the leadership of Sam Bankman-Fried.[15][3]
In the summer of 2022, Anthropic finished training the first version of Claude but did not release it, mentioning the need for further internal safety testing and the desire to avoid initiating a potentially hazardous race to develop increasingly powerful AI systems.[16]
Legal and strategic partnerships (2023)
In February 2023, Anthropic was sued by Texas-based Anthrop LLC for the use of its registered trademark "Anthropic A.I."[17] On September 25, 2023, Amazon announced a partnership with Anthropic, with Amazon becoming a minority stakeholder by initially investing $1.25 billion, and planning a total investment of $4 billion.[9] As part of the deal, Anthropic would use Amazon Web Services (AWS) as its primary cloud provider and make its AI models available to AWS customers.[9][18] The next month, Google invested $500 million in Anthropic, and committed to an additional $1.5 billion over time.[11]
Major investments and acquisitions (2024)
In March 2024, Amazon invested a further US$2.75 billion in Anthropic, completing the full $4 billion commitment it had made under the prior year's agreement.[10]
In November 2024, Amazon announced a new investment of $4 billion in Anthropic (bringing its total investment to $8 billion), including an agreement to increase the use of Amazon's AI chips for training and running Anthropic's large language models.[19]
In 2024, Anthropic attracted several notable employees from OpenAI, including Jan Leike, John Schulman, and Durk Kingma.[20]
Business structure

According to Anthropic, the company's goal is to research the safety and reliability of artificial intelligence systems.[7] The Amodei siblings were among those who left OpenAI over differences about the company's direction.[13]
Anthropic incorporated itself as a Delaware public-benefit corporation (PBC), which enables directors to balance the financial interests of stockholders with its public benefit purpose.[21]
Anthropic's "Long-Term Benefit Trust" is a purpose trust for "the responsible development and maintenance of advanced AI for the long-term benefit of humanity". It holds Class T shares in the PBC which allow it to elect directors onto Anthropic's board.[22][23] As of April 2025, the members of the Trust are Neil Buddy Shah, Kanika Bahl and Zach Robinson.[24]
Investors include Amazon ($8 billion),[19] Google ($2 billion),[11] and Menlo Ventures ($750 million).[25]
Key employees
- Dario Amodei: Co-founder and chief executive officer[8]
- Daniela Amodei: Co-founder and President[8]
- Mike Krieger: Chief Product Officer[26][27]
- Jan Leike: ex-OpenAI alignment researcher[28]
Projects
Claude
Claude incorporates "Constitutional AI" to set safety guidelines for the model's output.[29] The name "Claude" was chosen either as a reference to mathematician Claude Shannon or as a male name to contrast with the female names of other AI assistants such as Alexa, Siri, and Cortana.[3]
Anthropic released the first two versions of its model, Claude and Claude Instant, in March 2023, with the latter being a more lightweight model.[30][31][32] The next iteration, Claude 2, was launched in July 2023.[33] Unlike Claude, which was available only to select users, Claude 2 was made available for public use.[34]
Claude 3 was released in March 2024, with three language models: Opus, Sonnet, and Haiku.[35][36] The Opus model is the largest. According to Anthropic, it outperformed OpenAI's GPT-4 and GPT-3.5, and Google's Gemini Ultra, in benchmark tests at the time. Sonnet and Haiku are Anthropic's medium- and small-sized models, respectively. All three models can accept image input.[35] Amazon has added Claude 3 to its cloud AI service Bedrock.[37]
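As an illustration of how developers access these models, the following is a minimal sketch using Anthropic's Python SDK; the model identifier string is an assumption, and Anthropic's documentation lists the identifiers current at any given time:

```python
# Minimal sketch: querying a Claude 3 model via Anthropic's Python SDK.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# the model identifier below is an assumption -- check Anthropic's docs
# for the identifiers current at the time of use.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",  # assumed ID; Sonnet and Haiku have analogous IDs
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize Constitutional AI in one sentence."}],
)
print(response.content[0].text)
```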
In May 2024, Anthropic announced the Claude Team plan, its first enterprise offering for Claude, and the Claude iOS app.[38]
In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside 3.5 Sonnet was Artifacts, a new capability in which Claude can generate code in a dedicated window of the interface and render select output, such as websites or SVG images, in real time.[39]
In October 2024, Anthropic released an improved version of Claude 3.5, along with a beta feature called "Computer use", which enables Claude to take screenshots, click, and type text.[40]
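A sketch of how such a beta tool is enabled through Anthropic's Python SDK follows; the tool type, beta flag, and model identifier are assumptions modeled on the October 2024 beta naming, not confirmed values:

```python
# Sketch of enabling the "computer use" beta tool via Anthropic's Python SDK.
# The tool type, beta flag, and model ID are assumptions modeled on the
# October 2024 beta; verify against Anthropic's current documentation.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model ID
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",     # assumed tool type for the beta
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    betas=["computer-use-2024-10-22"],   # assumed beta flag
    messages=[{"role": "user", "content": "Open a browser and take a screenshot."}],
)
# Claude replies with tool-use blocks (screenshot, click, type actions) that the
# caller's own automation layer must execute and report back.
print(response.content)
```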
In November 2024, Palantir announced a partnership with Anthropic and Amazon Web Services to provide U.S. intelligence and defense agencies access to Claude 3 and 3.5. According to Palantir, this was the first time that Claude would be used in "classified environments".[41]
In December 2024, Claude 3.5 Haiku was made available to all users on web and mobile platforms.[42]
In February 2025, Claude 3.7 Sonnet was introduced to all paid users. It is a "hybrid reasoning" model (one that responds directly to simple queries, while taking more time for complex problems).[43][44]
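In API terms, this reasoning behavior is requested per call. The following is a hedged sketch assuming Anthropic's Python SDK and the extended-thinking parameter shape announced alongside Claude 3.7; the model identifier and parameter names are assumptions:

```python
# Sketch: requesting extended ("hybrid") reasoning from Claude 3.7 Sonnet.
# The model ID and the `thinking` parameter shape are assumptions based on
# Anthropic's announced API; check current documentation before use.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model ID
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},  # cap on reasoning tokens
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)
# With thinking enabled, the response interleaves "thinking" blocks with the
# final "text" answer; simple queries may produce little or no thinking.
for block in response.content:
    print(block.type)
```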
Constitutional AI
According to Anthropic, Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest.[12][45] Within this framework, humans provide a set of rules describing the desired behavior of the AI system, known as the "constitution".[45] The AI system critiques its own outputs against the constitution and revises them, and the model is then adjusted to better fit the constitution.[45] This self-reinforcing process aims to avoid harm, respect preferences, and provide true information.[45]
Some of the principles of Claude 2's constitution are derived from documents such as the 1948 Universal Declaration of Human Rights and Apple's terms of service.[33] For example, one rule from the UN Declaration applied in Claude 2's CAI states "Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood."[33]
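The critique-and-revision loop at the heart of CAI can be sketched in a few lines of toy Python. This is illustrative only: the rule checks and the `revise` function here stand in for calls to the language model itself, which in Anthropic's described pipeline both critiques and rewrites its own outputs:

```python
# Toy illustration of the Constitutional AI critique-and-revision loop.
# Real CAI uses the language model itself to critique and rewrite its
# outputs against the constitution; here simple string checks stand in.
CONSTITUTION = [
    ("avoid insults", lambda text: "idiot" not in text.lower()),
    ("be informative", lambda text: len(text.split()) >= 5),
]

def critique(text):
    """Return the list of constitutional principles the text violates."""
    return [name for name, ok in CONSTITUTION if not ok(text)]

def revise(text, violations):
    """Stand-in for a model call that rewrites text to satisfy the principles."""
    if "avoid insults" in violations:
        text = text.replace("idiot", "person")
    if "be informative" in violations:
        text += " Here is some additional helpful detail."
    return text

draft = "Ask someone else, idiot."
while (violations := critique(draft)):
    draft = revise(draft, violations)
print(draft)  # a revision consistent with every principle
```

In the described framework, the revised outputs then serve as training data, so the model learns to produce constitution-consistent responses directly rather than relying on post-hoc filtering.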
Interpretability research
Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture.[12][46][47]
Part of Anthropic's research aims to automatically identify "features" in generative pretrained transformers like Claude. In a neural network, a feature is a pattern of neural activations that corresponds to a concept. In 2024, using a compute-intensive technique called "dictionary learning", Anthropic identified millions of features in Claude, including, for example, one associated with the Golden Gate Bridge. Enhancing the ability to identify and edit features is expected to have significant safety implications.[48][49][50]
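A minimal numpy sketch of the dictionary-learning idea, assuming the sparse-autoencoder formulation described in Anthropic's published work: activations are encoded into a much larger set of sparsely active coefficients, and each learned decoder direction is a candidate "feature". All sizes, names, and data here are illustrative:

```python
# Illustrative sparse-autoencoder decomposition, the core of the
# "dictionary learning" approach to finding features in activations.
# Sizes, initialization, and data are toy stand-ins; real runs train
# the encoder/decoder on billions of model activations.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features = 64, 512          # dictionary is overcomplete: 512 > 64

W_enc = rng.normal(scale=0.1, size=(d_model, n_features))
b_enc = np.zeros(n_features)
W_dec = rng.normal(scale=0.1, size=(n_features, d_model))

def encode(activation):
    """Sparse feature coefficients: ReLU keeps only positively firing features."""
    return np.maximum(activation @ W_enc + b_enc, 0.0)

def decode(features):
    """Reconstruct the activation as a sum of feature directions (rows of W_dec)."""
    return features @ W_dec

activation = rng.normal(size=d_model)   # stand-in for one residual-stream vector
f = encode(activation)
reconstruction = decode(f)

# Training would minimize reconstruction error plus an L1 sparsity penalty:
loss = np.sum((activation - reconstruction) ** 2) + 0.01 * np.sum(np.abs(f))
print(f"active features: {np.count_nonzero(f)} / {n_features}, loss: {loss:.2f}")
```

After training, an interpretable feature such as the Golden Gate Bridge one corresponds to a single coefficient, which can be inspected or clamped to study and steer the model's behavior.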
In March 2025, research by Anthropic suggested that multilingual LLMs partially process information in a conceptual space before converting it to the appropriate language. It also found evidence that LLMs can sometimes plan ahead. For example, when writing poetry, Claude identifies potential rhyming words before generating a line that ends with one of these words.[51][52]
U.S. military and intelligence
Anthropic partnered with Palantir and Amazon Web Services in November 2024 to provide the Claude model to U.S. intelligence and defense agencies.[53] Anthropic's CEO Dario Amodei said about working with the U.S. military:
> The position that we should never use AI in defense and intelligence settings doesn’t make sense to me. The position that we should go gangbusters and use it to make anything we want — up to and including doomsday weapons — that’s obviously just as crazy. We’re trying to seek the middle ground, to do things responsibly.[54]
Lawsuits
On October 18, 2023, Anthropic was sued by Concord, Universal, ABKCO, and other music publishers for, per the complaint, "systematic and widespread infringement of their copyrighted song lyrics."[55][56][57] They alleged that the company used copyrighted material without permission in the form of song lyrics.[58] The plaintiffs sought up to $150,000 for each work infringed.[58] To support their allegations, the plaintiffs cited several examples of Anthropic's Claude model outputting copied lyrics from songs such as Katy Perry's "Roar" and Gloria Gaynor's "I Will Survive".[58] They further alleged that, even given prompts that did not directly name a song, the model responded with modified lyrics based on the original work.[58]
On January 16, 2024, Anthropic responded that the music publishers had not been irreparably harmed and that the cited examples were merely bugs.[59]
In August 2024, a class-action lawsuit was filed against Anthropic in California for alleged copyright infringement. The suit, brought on behalf of authors including Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, claims that Anthropic fed pirated copies of the authors' works to its LLMs.[60]