Claude is a family of large language models developed by Anthropic.[1] The first model was released in March 2023. Claude 3, released in March 2024, can also analyze images.[2]

Claude
Developer(s): Anthropic
Initial release: March 2023
License: Proprietary
Website: claude.ai

Training

Claude models are generative pre-trained transformers. They are pre-trained on large amounts of text to predict the next word, and then fine-tuned with Constitutional AI with the aim of making them helpful, honest, and harmless.[3][4]

Constitutional AI

Constitutional AI is an approach developed by Anthropic for training AI systems, particularly language models like Claude, to be harmless and helpful without relying on extensive human feedback. The method, detailed in the paper "Constitutional AI: Harmlessness from AI Feedback", involves two phases: supervised learning and reinforcement learning.[4]

In the supervised learning phase, the model generates responses to prompts, self-critiques these responses based on a set of guiding principles (a "constitution"), and revises the responses. Then the model is fine-tuned on these revised responses.[4]
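The supervised phase described above can be sketched as a generate–critique–revise loop. The model call, the constitution text, and the prompt wording below are illustrative placeholders, not Anthropic's actual implementation:

```python
# Minimal sketch of the Constitutional AI supervised phase.
# `model` stands in for a real language-model call; the constitution
# and prompt templates are hypothetical examples.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that are toxic, dangerous, or deceptive.",
]

def model(prompt: str) -> str:
    # Placeholder for a real text-generation call.
    return f"<completion for: {prompt[:40]}...>"

def self_revise(prompt: str) -> dict:
    """Generate a response, self-critique it against each
    constitutional principle, and revise it accordingly."""
    response = model(prompt)
    for principle in CONSTITUTION:
        critique = model(
            f"Critique this response using the principle: {principle}\n"
            f"Response: {response}"
        )
        response = model(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    # The (prompt, revised response) pair becomes supervised
    # fine-tuning data.
    return {"prompt": prompt, "response": response}

example = self_revise("How do I pick a lock?")
```

The key point is that the critiques and revisions come from the model itself, so the fine-tuning dataset is produced without human-written corrections.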

For the reinforcement learning from AI feedback (RLAIF) phase, responses are generated, and an AI compares their compliance with the constitution. This dataset of AI feedback is used to train a preference model that evaluates responses based on how much they satisfy the constitution. Claude is then fine-tuned to align with this preference model. This technique is similar to reinforcement learning from human feedback (RLHF), except that the comparisons used to train the preference model are AI-generated, and that they are based on the constitution.[5][4]

This approach enables the training of AI assistants that are both helpful and harmless, and that can explain their objections to harmful requests, enhancing transparency and reducing reliance on human supervision.[6][7]

The "constitution" for Claude included 75 points, including sections from the UN Universal Declaration of Human Rights.[6][3]

Models

The name Claude was reportedly inspired by Claude Shannon, the founder of information theory and a pioneer of artificial intelligence research.[8]

Claude

Claude was the initial version of Anthropic's language model, released in March 2023.[9] It demonstrated proficiency in various tasks but had limitations in coding, math, and reasoning.[10] Anthropic partnered with companies such as Notion (productivity software) and Quora (to help develop the Poe chatbot).[10]

Claude Instant

Claude was released in two versions, Claude and Claude Instant, with Claude Instant being a faster, less expensive, and lighter-weight model. Claude Instant has an input context window of 100,000 tokens (corresponding to around 75,000 words).[11]

Claude 2

Claude 2, the next major iteration, was released in July 2023 and made available to the general public, whereas Claude 1 had been available only to selected users approved by Anthropic.[12]

Claude 2 expanded the context window from 9,000 tokens to 100,000 tokens.[9] New features included the ability to upload PDFs and other documents, enabling Claude to read, summarize, and assist with tasks based on their contents.

Claude 2.1

Claude 2.1 doubled the number of tokens the chatbot could handle, increasing the context window to 200,000 tokens, which equals around 500 pages of written material.[1]
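The rough arithmetic behind the figures quoted for Claude Instant and Claude 2.1 can be reproduced with two common rules of thumb (about 0.75 English words per token and about 300 words per page; these are approximations, not Anthropic's exact conversion factors):

```python
# Back-of-the-envelope token-to-text conversion, using the common
# approximations of ~0.75 words per token and ~300 words per page.

WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

def tokens_to_words(tokens: int) -> int:
    return int(tokens * WORDS_PER_TOKEN)

def tokens_to_pages(tokens: int) -> int:
    return tokens_to_words(tokens) // WORDS_PER_PAGE

print(tokens_to_words(100_000))  # 75000 words (Claude Instant's window)
print(tokens_to_pages(200_000))  # 500 pages (Claude 2.1's window)
```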

Anthropic states that the new model is less likely to produce false statements compared to its predecessors.[13]

Claude 3

Claude 3 was released on March 4, 2024; Anthropic's press release claimed it set new industry benchmarks across a wide range of cognitive tasks. The Claude 3 family includes three models in ascending order of capability: Haiku, Sonnet, and Opus. The most capable model, Opus, has a context window of 200,000 tokens, which Anthropic said would be expanded to 1 million tokens for specific use cases.[14][2]

Claude 3 drew attention for demonstrating an apparent ability to recognize that it was being artificially tested during needle-in-a-haystack evaluations.[15]

Claude 3.5

On June 20, 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved benchmark performance over the larger Claude 3 Opus, notably in coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside 3.5 Sonnet was the new Artifacts capability, in which Claude could create code in a dedicated window in the interface and preview the rendered output, such as SVG graphics or websites, in real time.[16]

An "upgraded Claude 3.5 Sonnet" was introduced on October 22, 2024, along with Claude 3.5 Haiku. Anthropic simultaneously introduced "computer use" in the API, which allows Claude 3.5 Sonnet to interact with a computer desktop environment.[17]

Access

Limited-use access to Claude 3.5 Sonnet is free of charge but requires both an e-mail address and a cellphone number. A paid plan offers higher usage limits and access to all Claude 3 models.[18]

On May 1, 2024, Anthropic announced the Claude Team plan, its first enterprise offering for Claude, and a Claude iOS app.[19]

Criticism

Claude 2 received criticism for stringent ethical alignment that may reduce usability and performance. Users were refused assistance with benign requests, for example the programming question "How can I kill all Python processes in my Ubuntu server?" This led to debate over the "alignment tax" (the cost of ensuring an AI system is aligned) in AI development, with discussions centered on balancing ethical considerations and practical functionality. Critics argued for user autonomy and effectiveness, while proponents stressed the importance of ethical AI.[20][13]
