Optional fields: BedrockChatFields

credentials: AWS credentials. If no credentials are provided, the default credentials from @aws-sdk/credential-provider-node will be used.
fetchFn: A custom fetch function for low-level access to the AWS API. Defaults to fetch().
Optional init: RequestInit
guardrailIdentifier: Identifier for the guardrail configuration.
guardrailVersion: Version for the guardrail configuration.
model: Model to use, e.g. "amazon.titan-tg1-large". This is equivalent to the modelId property in the list-foundation-models API.
region: The AWS region, e.g. us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if not provided here.
streaming: Whether or not to stream responses.
Optional endpointHost: Override the default endpoint hostname.
Optional guardrailConfig: Required when a Guardrail is in use.
Optional maxTokens: Max tokens to generate.
Optional modelKwargs: Additional kwargs to pass to the model.
Optional stopSequences: Stop sequences.
Optional temperature: Temperature to use.
Optional trace: Trace settings for the Bedrock Guardrails.
Optional options: unknown
AWS Bedrock chat model integration.
Setup: Install @langchain/community and set the following environment variables:

Key args
Init args
Runtime args
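The install step in Setup above can typically be run as follows (adding @langchain/core as a peer dependency is an assumption based on standard LangChain JS setup, not stated in this page):

```shell
npm install @langchain/community @langchain/core
```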
Examples
Instantiate
Invoking
Streaming Chunks
AIMessageChunk {
  "content": "",
  "additional_kwargs": {
    "id": "msg_bdrk_01RhFuGR9uJ2bj5GbdAma4y6"
  },
  "response_metadata": {
    "type": "message",
    "role": "assistant",
    "model": "claude-3-5-sonnet-20240620",
    "stop_reason": null,
    "stop_sequence": null
  }
}
AIMessageChunk {
  "content": "J"
}
AIMessageChunk {
  "content": "'adore la"
}
AIMessageChunk {
  "content": " programmation."
}
AIMessageChunk {
  "content": "",
  "additional_kwargs": {
    "stop_reason": "end_turn",
    "stop_sequence": null
  }
}
AIMessageChunk {
  "content": "",
  "response_metadata": {
    "amazon-bedrock-invocationMetrics": {
      "inputTokenCount": 25,
      "outputTokenCount": 11,
      "invocationLatency": 659,
      "firstByteLatency": 506
    }
  },
  "usage_metadata": {
    "input_tokens": 25,
    "output_tokens": 11,
    "total_tokens": 36
  }
}
Aggregate Streamed Chunks
Bind tools
Structured Output
Response Metadata