Anthropic PBC is a U.S.-based artificial intelligence (AI) public-benefit startup founded in 2021. The company researches and develops AI systems to "study their safety properties at the technological frontier" and uses this research to deploy safe, reliable models for the public.[5][6][7] Anthropic has developed a family of large language models (LLMs) named Claude as a competitor to OpenAI's ChatGPT and Google's Gemini.[8]
| Company type | Private |
|---|---|
| Industry | Artificial intelligence |
| Founded | 2021 |
| Founders | Daniela Amodei, Dario Amodei |
| Headquarters | San Francisco, California, U.S. |
| Products | Claude |
| Number of employees | c. 500 (2024)[4] |
| Website | anthropic.com |
Anthropic was founded by former members of OpenAI, siblings Daniela Amodei and Dario Amodei.[9] In September 2023, Amazon announced an investment of up to $4 billion, followed the next month by a $2 billion commitment from Google.[10][11][12]
Anthropic was founded in 2021 by seven former employees of OpenAI, including siblings Daniela Amodei and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research.[13][14]
In April 2022, Anthropic announced it had received $580 million in funding,[15] including $500 million from FTX, then led by Sam Bankman-Fried.[16][3]
In the summer of 2022, Anthropic finished training the first version of Claude but did not release it, mentioning the need for further internal safety testing and the desire to avoid initiating a potentially hazardous race to develop increasingly powerful AI systems.[17]
In February 2023, Anthropic was sued by Texas-based Anthrop LLC for the use of its registered trademark "Anthropic A.I."[18] On September 25, 2023, Amazon announced a partnership with Anthropic, with Amazon becoming a minority stakeholder by initially investing $1.25 billion, and planning a total investment of $4 billion.[10] As part of the deal, Anthropic would use Amazon Web Services (AWS) as its primary cloud provider and make its AI models available to AWS customers.[10][19] The next month, Google invested $500 million in Anthropic, and committed to an additional $1.5 billion over time.[12]
In March 2024, Amazon invested a further US$2.75 billion in Anthropic, completing the $4 billion investment planned under the prior year's agreement.[11]
In 2024, Anthropic attracted several notable employees from OpenAI, including Jan Leike, John Schulman, and Durk Kingma.[20]
According to Anthropic, the company's goal is to research the safety and reliability of artificial intelligence systems.[7] The Amodei siblings were among those who left OpenAI due to directional differences, specifically regarding OpenAI's ventures with Microsoft in 2019.[14] Anthropic incorporated itself as a Delaware public-benefit corporation (PBC), which requires the company to maintain a balance between private and public interests.[33]
Anthropic established a corporate "Long-Term Benefit Trust", an entity that requires the company's directors to align the company's priorities with the public benefit rather than profit in "extreme" instances of "catastrophic risk".[34][35] As of September 19, 2023, members of the Trust included Jason Matheny (CEO and President of the RAND Corporation), Kanika Bahl (CEO and President of Evidence Action),[36] Neil Buddy Shah (CEO of the Clinton Health Access Initiative),[37] Paul Christiano (Founder of the Alignment Research Center),[38] and Zach Robinson (CEO of Effective Ventures US).[39][40]
Claude incorporates "Constitutional AI" to set safety guidelines for the model's output.[41] The name, "Claude", was chosen either as a reference to mathematician Claude Shannon, or as a male name to contrast the female names of other A.I. assistants such as Alexa, Siri, and Cortana.[3]
Anthropic initially released two versions of its model, Claude and Claude Instant, in March 2023, with the latter being a more lightweight model.[42][43][44] The next iteration, Claude 2, was launched in July 2023.[45] Unlike Claude, which was available only to select users, Claude 2 was made available for public use.[27]
Claude 3 was released on March 4, 2024, unveiling three language models: Opus, Sonnet, and Haiku.[46][47] The Opus model is the largest and most capable—according to Anthropic, it outperforms the leading models from OpenAI (GPT-4, GPT-3.5) and Google (Gemini Ultra).[46] Sonnet and Haiku are Anthropic's medium- and small-sized models, respectively.[46] All three models can accept image input.[46] Amazon has incorporated Claude 3 into Bedrock, an Amazon Web Services-based platform for cloud AI services.[48]
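As a rough illustration of how AWS customers can call a Claude model through Bedrock, the sketch below uses the boto3 `bedrock-runtime` client. The region, model ID, and prompt are placeholder assumptions for demonstration, not details taken from this article; actual model availability depends on the AWS account and region.

```python
# Hedged sketch: invoking a Claude 3 model through Amazon Bedrock with boto3.
# Region, model ID, and prompt below are illustrative assumptions.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize what a public-benefit corporation is."}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps(request_body),
)

# The response body is a stream containing the model's JSON reply.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```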
On May 1, 2024, Anthropic announced the Claude Team plan, its first enterprise offering for Claude, and the Claude iOS app.[49]
On June 20, 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in areas such as coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside 3.5 Sonnet was the new Artifacts capability, which lets Claude generate code in a dedicated window in the interface and preview certain outputs, such as websites or SVGs, in real time.[50]
According to Anthropic, Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest.[13][51] Within this framework, humans provide a set of rules describing the desired behavior of the AI system, known as the "constitution".[51] The AI system evaluates its own outputs against these rules, critiques and revises them, and the revised outputs are then used to further train the model so it better fits the constitution.[51] The self-reinforcing process aims to avoid harm, respect preferences, and provide true information.[51]
Some of the principles of Claude 2's constitution are derived from documents such as the 1948 Universal Declaration of Human Rights and Apple's terms of service.[45] For example, one rule from the UN Declaration applied in Claude 2's CAI states "Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood."[45]
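The supervised critique-and-revise stage described above can be pictured as a small loop. The sketch below is illustrative only: it assumes a generic `model` callable, and the helper functions, prompts, and second constitutional rule are hypothetical stand-ins rather than Anthropic's actual implementation. Only the first rule is quoted from Claude 2's published constitution.

```python
# Minimal, illustrative sketch of the supervised Constitutional AI (CAI) stage.
# `model` is assumed to be any callable that maps a prompt string to a reply string.

CONSTITUTION = [
    # Quoted from Claude 2's constitution (derived from the UN Declaration):
    "Please choose the response that most supports and encourages freedom, "
    "equality and a sense of brotherhood.",
    # Hypothetical placeholder rule for illustration:
    "Please choose the response that is most helpful and least likely to cause harm.",
]


def critique(model, response: str, principle: str) -> str:
    """Ask the model to critique its own response against one principle."""
    return model(
        f"Critique the following response according to this principle: "
        f"'{principle}'\n\nResponse: {response}"
    )


def revise(model, response: str, critique_text: str) -> str:
    """Ask the model to rewrite the response so it addresses the critique."""
    return model(
        f"Rewrite the response to address this critique:\n{critique_text}\n\n"
        f"Original response: {response}"
    )


def constitutional_revision(model, prompt: str) -> str:
    """Generate an answer, then critique and revise it once per principle.

    In the full CAI framework, the resulting (prompt, revised response) pairs
    would be used to fine-tune the model, followed by a reinforcement-learning
    stage driven by AI feedback.
    """
    response = model(prompt)
    for principle in CONSTITUTION:
        critique_text = critique(model, response, principle)
        response = revise(model, response, critique_text)
    return response
```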
Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture.[13][52][53]
Part of Anthropic's research aims to automatically identify "features" in generative pretrained transformers like Claude. In a neural network, a feature is a pattern of neural activations that corresponds to a concept. Using a compute-intensive technique called "dictionary learning", Anthropic was able to identify millions of features in Claude, including, for example, one associated with the Golden Gate Bridge. Enhancing the ability to identify and edit features is expected to have significant safety implications.[54][55][56]
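The dictionary-learning idea can be sketched in a few lines: treat each activation vector as a sparse combination of learned "feature" directions, then look at which inputs load most strongly on a given direction. The example below uses scikit-learn on randomly generated stand-in data; Anthropic's actual work applies sparse autoencoder-style dictionary learning to real model activations at far larger scale, so this is only a toy analogy.

```python
# Toy sketch of identifying "features" in activations via sparse dictionary learning.
# The random matrix below is a hypothetical stand-in for real model activations.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
activations = rng.normal(size=(2_000, 64))  # hypothetical: 2,000 tokens x 64-dim activations

# Learn an overcomplete dictionary: each learned atom is a candidate "feature"
# direction, and each activation is approximated as a sparse combination of atoms.
learner = MiniBatchDictionaryLearning(
    n_components=256,   # more atoms than activation dimensions (overcomplete)
    alpha=1.0,          # sparsity penalty
    batch_size=128,
    random_state=0,
)
codes = learner.fit_transform(activations)  # shape (2000, 256), mostly zeros

# Inputs whose sparse code loads strongly on atom k are candidate examples of
# feature k (analogous to the "Golden Gate Bridge" feature Anthropic describes).
k = 0
top_rows = np.argsort(-np.abs(codes[:, k]))[:10]
print("activation vectors most aligned with feature", k, ":", top_rows)
```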
On October 18, 2023, Anthropic was sued by Concord, Universal, ABKCO, and other music publishers for, per the complaint, "systematic and widespread infringement of their copyrighted song lyrics."[57][58][59] They alleged that the company used copyrighted material without permission in the form of song lyrics.[60] The plaintiffs asked for up to $150,000 for each work infringed upon by Anthropic, citing infringement of copyright laws.[60] In the lawsuit, the plaintiffs support their allegations of copyright violations by citing several examples of Anthropic's Claude model outputting copied lyrics from songs such as Katy Perry's "Roar" and Gloria Gaynor's "I Will Survive".[60] Additionally, the plaintiffs alleged that even given some prompts that did not directly state a song name, the model responded with modified lyrics based on original work.[60]
On January 16, 2024, Anthropic claimed that the music publishers were not unreasonably harmed and that the examples noted by plaintiffs were merely bugs.[61]
In August 2024, a class-action lawsuit was filed against Anthropic in California for alleged copyright infringement. The suit claims Anthropic fed its LLMs with pirated copies of the authors' work, including works by plaintiffs Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson.[62]