Messages (beta=true)

Methods

Create a Message -> { id, content, model, 5 more... }
POST /v1/messages?beta=true

Send a structured list of input messages with text and/or image content, and the model will generate the next message in the conversation.

The Messages API can be used for either single queries or stateless multi-turn conversations.

Learn more about the Messages API in our user guide
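As a sketch, a Create a Message call can be assembled with any HTTP client. The helper below builds the headers and JSON body; the model name, version date, and function name are illustrative placeholders, not values from this reference:

```python
import json

API_URL = "https://api.anthropic.com/v1/messages?beta=true"

def build_request(api_key: str, model: str, user_text: str, max_tokens: int = 1024):
    """Assemble headers and a JSON body for a Create a Message call."""
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",  # placeholder version date
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": max_tokens,
        # A structured list of input messages; here, a single user turn.
        "messages": [{"role": "user", "content": user_text}],
    }
    return headers, json.dumps(body)

# Sending it is then one call with any client, e.g. (requires `requests`):
# headers, payload = build_request("sk-...", "claude-example-model", "Hello!")
# resp = requests.post(API_URL, headers=headers, data=payload)
```

The actual send is left commented out because it needs a live key; the point is the shape of the headers and body.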

header Parameters
anthropic-beta: Array<string>
Optional

Optional header to specify the beta version(s) you want to use.

To use multiple betas, use a comma-separated list like beta1,beta2, or specify the header multiple times, once for each beta.

anthropic-version: string
Optional

The version of the Anthropic API you want to use.

Read more about versioning and our version history here.

x-api-key: string
Optional

Your unique API key for authentication.

This key is required in the header of all API requests to authenticate your account and access Anthropic's services. Get your API key through the Console. Each key is scoped to a Workspace.

Response fields
id: string

Unique object identifier.

The format and length of IDs may change over time.

content: Array<{ citations, text, type } | { id, input, name, 1 more... } | { signature, thinking, type } | 1 more...>

Content generated by the model.

This is an array of content blocks, each of which has a type that determines its shape.

Example:

[{"type": "text", "text": "Hi, I'm Claude."}]

If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

For example, if the input messages were:

[
  {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The best answer is ("}
]

Then the response content might be:

[{"type": "text", "text": "B)"}]
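The prefill pattern above can be sketched in code: when the request ends with an assistant turn, the full answer is that prefill concatenated with the text of the response's content blocks. The helper name is illustrative; the messages and response mirror the example:

```python
def assemble_prefilled_answer(input_messages, response_content):
    """Join an assistant prefill with the model's continuation.

    If the request's last message was an assistant turn, the response
    content continues directly from it, so the complete answer is
    prefill + continuation.
    """
    prefill = ""
    if input_messages and input_messages[-1]["role"] == "assistant":
        prefill = input_messages[-1]["content"]
    # Concatenate only the text-type content blocks of the response.
    continuation = "".join(
        block["text"] for block in response_content if block["type"] == "text"
    )
    return prefill + continuation

messages = [
    {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
    {"role": "assistant", "content": "The best answer is ("},
]
response_content = [{"type": "text", "text": "B)"}]
print(assemble_prefilled_answer(messages, response_content))  # The best answer is (B)
```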
model: string
(maxLength: 256, minLength: 1)

The model that handled the request.

role: "assistant"
(default: "assistant")

Conversational role of the generated message.

This will always be "assistant".

stop_reason: "end_turn" | "max_tokens" | "stop_sequence" | 1 more...
Nullable

The reason that we stopped.

This may be one of the following values:

  • "end_turn": the model reached a natural stopping point
  • "max_tokens": we exceeded the requested max_tokens or the model's maximum
  • "stop_sequence": one of your provided custom stop_sequences was generated
  • "tool_use": the model invoked one or more tools

In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
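A response handler can branch on these values. A minimal sketch, with illustrative wording in the returned descriptions:

```python
def describe_stop(stop_reason, stop_sequence=None):
    """Map a stop_reason (possibly None, during streaming) to a description."""
    if stop_reason == "end_turn":
        return "model reached a natural stopping point"
    if stop_reason == "max_tokens":
        return "hit the requested max_tokens or the model's maximum"
    if stop_reason == "stop_sequence":
        # stop_sequence holds the custom sequence that was generated.
        return f"generated custom stop sequence {stop_sequence!r}"
    if stop_reason == "tool_use":
        return "model invoked one or more tools"
    if stop_reason is None:
        return "still streaming (message_start event)"
    return f"unrecognized stop_reason {stop_reason!r}"
```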

stop_sequence: string
Nullable

Which custom stop sequence was generated, if any.

This value will be a non-null string if one of your custom stop sequences was generated.

type: "message"
(default: "message")

Object type.

For Messages, this is always "message".

usage: { cache_creation_input_tokens, cache_read_input_tokens, input_tokens, 1 more... }

Billing and rate-limit usage.

Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.

Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

For example, output_tokens will be non-zero, even for an empty string response from Claude.

The total number of input tokens in a request is the sum of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
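That sum can be sketched as a small helper; treating absent or null fields as zero is a defensive assumption on my part, not something this reference specifies:

```python
def total_input_tokens(usage: dict) -> int:
    """Sum all input-side token counts from a usage object.

    Absent or null fields count as zero (a defensive assumption).
    """
    fields = ("input_tokens", "cache_creation_input_tokens", "cache_read_input_tokens")
    return sum(usage.get(f) or 0 for f in fields)

usage = {
    "input_tokens": 10,
    "cache_creation_input_tokens": 200,
    "cache_read_input_tokens": 0,
    "output_tokens": 25,  # output tokens are billed separately, not summed here
}
print(total_input_tokens(usage))  # 210
```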
