Update Blog “aws-cdk-bedrock-basics” #4221

Open · wants to merge 1 commit into base: gatsby
15 changes: 6 additions & 9 deletions content/blog/aws-cdk-bedrock-basics.md
@@ -15,8 +15,7 @@ comments: true
published: true
language: en
---

AI is taking over the world. At Bright Inventions, we've already helped several clients with [generative AI](/our-areas/ai-software-development/).\
In this blog post, we'll see how to use aws-cdk to create a simple API that responds to prompts.

## Request Bedrock model access
@@ -108,7 +107,6 @@ curl -s -X POST --location "https://${YOUR_LAMBDA_ID}.lambda-url.eu-central-1.on
}
]
}
```
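The response body shown above is truncated in the diff; a Titan text response carries the generated text in `results[0].outputText`. A minimal sketch of pulling it out (the sample JSON below is illustrative, not a real model response):

```python
import json

# Illustrative Titan-style response body; the field names follow the shape
# shown above, but the values here are made up for the example.
raw = """
{
  "inputTextTokenCount": 5,
  "results": [
    {
      "tokenCount": 7,
      "outputText": "Hello from the model.",
      "completionReason": "FINISH"
    }
  ]
}
"""

body = json.loads(raw)
# The generated text lives in the first element of "results".
print(body["results"][0]["outputText"])
```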

## Titan Text Express configuration
@@ -117,10 +115,10 @@ We can control and tweak some of the aspects of how the model responds to our prompts. Here is what
we can
configure:

* temperature: Float value to control randomness in the response (0 to 1, default 0). Lower values decrease randomness.
* topP: Float value to control the diversity of options (0 to 1, default 1). Lower values ignore less probable options.
* maxTokenCount: Integer specifying the maximum number of tokens in the generated response (0 to 8,000, default 512).
* stopSequences: Array of strings indicating where the model should stop generating text. Use the pipe character (|) to
separate different sequences (up to 20 characters).
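Putting the list above together, the InvokeModel request body for Titan Text Express nests these options under a `textGenerationConfig` key next to the `inputText` prompt. A small sketch of building and validating such a body (the helper name and the range checks are ours; the field names and defaults follow the list above):

```python
import json

def build_titan_request(prompt, temperature=0.0, top_p=1.0,
                        max_token_count=512, stop_sequences=None):
    """Build a JSON request body for the Titan Text Express model."""
    # Reject values outside the documented ranges before sending anything.
    if not 0 <= temperature <= 1:
        raise ValueError("temperature must be between 0 and 1")
    if not 0 <= top_p <= 1:
        raise ValueError("topP must be between 0 and 1")
    if not 0 <= max_token_count <= 8000:
        raise ValueError("maxTokenCount must be between 0 and 8000")
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "temperature": temperature,
            "topP": top_p,
            "maxTokenCount": max_token_count,
            "stopSequences": stop_sequences or [],
        },
    })
```

The resulting string is what the lambda passes as the `body` of its InvokeModel call.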

Let's modify our lambda to allow controlling the parameters.
@@ -181,5 +179,4 @@ curl -X POST --location "https://${YOUR_LAMBDA_ID}.lambda-url.eu-central-1.on.aw
## Summary

As you can see, it is straightforward to get started with AWS Bedrock. The full example from this blog post is available in the
[GitHub repo](https://github.com/bright/bright-aws-cdk-bedrock).