NaDia for AWS Community Builders

Amazon Bedrock Blueprint: Architecting AI Projects with Amazon Bedrock

Initial Words

If you're actively involved in the AI field and utilise AWS Cloud services, chances are you've explored Amazon Bedrock to enhance your applications with AI capabilities. Even if you haven't directly worked with it, you've likely heard about the advanced Foundational Models that Amazon Bedrock offers. In this blog post, I'll provide a comprehensive introduction to Amazon Bedrock components and delve into common workflows for integrating Amazon Bedrock into Generative AI projects.

Amazon Bedrock Components

Exploring various articles on Amazon Bedrock will give you enough information about its nature. As you may be aware, Amazon Bedrock offers a fully managed, serverless experience, granting access to an extensive array of Foundational Models. Its unified API is especially noteworthy, as it streamlines the integration of these diverse models into your system.
However, the question remains: how does Amazon Bedrock achieve this? What components does it comprise that set it apart from other AI platforms or services? This is the exploration we aim to undertake in this section.

Foundational Models

Yes, Amazon Bedrock offers a wide range of Foundational Models from providers such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon. But what's the advantage of using these models within Bedrock? You're not restricted to one specific model: Bedrock's API makes it easy to integrate any of them. If you decide to switch from Mistral to Cohere models, all you need to do is change the model ID in your "InvokeModel" API call. Additionally, if your system needs to integrate with multiple models, Bedrock's API layer allows you to invoke as many models as you need in parallel, with each invocation completely isolated from the others.
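To make the model-swap idea concrete, here is a minimal sketch using boto3. The model IDs and request payload are assumptions: each provider expects its own body format, so check the documentation for the models enabled in your account.

```python
# Minimal sketch: swapping Foundational Models by changing only the model ID
# passed to InvokeModel. Note that the request/response body format still
# differs per provider, so adjust the payload when you switch.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical model ID -- switch to e.g. "cohere.command-r-v1:0" to change providers.
MODEL_ID = "mistral.mistral-7b-instruct-v0:2"

response = bedrock_runtime.invoke_model(
    modelId=MODEL_ID,
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"prompt": "Summarise Amazon Bedrock in one sentence.", "max_tokens": 200}),
)

print(json.loads(response["body"].read()))
```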

Knowledge Base

Using Foundational Models alone has limitations. There are effective approaches to overcome them, which I'll discuss in the "AI Workflows With Bedrock" section, but it's still important to understand what these limitations are:

  • Outdated information
  • Lack of knowledge of your own data set
  • Lack of transparency about how they arrived at specific answers
  • Hallucinations

I'm sure you're aware that Foundational Models are trained on vast amounts of data, but there's no guarantee they're always up to date with the latest information. These models also provide answers without citing the sources they used as context. The answers they give are very general, and if you want them to be based on your company's specific data, you'll need to retrain them on that data. However, if your data is constantly changing, continuously retraining the models is computationally intensive and expensive. Additionally, by the time retraining finishes, your company may have already generated new data, making the model's knowledge outdated again.

To address issues such as providing source links or offering more specific domain-related answers, Amazon Bedrock offers the "Knowledge Base" component. This feature provides additional data access during runtime.
Using the Knowledge Base, you can create a RAG (Retrieval Augmented Generation) application that uses the "RetrieveAndGenerate" API to fetch information from your Knowledge Base (KB) and generate responses. Alternatively, you can build a basic RAG application with the "Retrieve" API, which retrieves information from the Knowledge Base and presents it to the user along with the source link.
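As a minimal sketch of the fully managed path, the snippet below calls RetrieveAndGenerate via boto3. The Knowledge Base ID and model ARN are placeholders you would replace with your own resources.

```python
# Minimal sketch of the RetrieveAndGenerate API: the service retrieves relevant
# chunks from the Knowledge Base and generates a grounded answer in one call.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # hypothetical KB ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

print(response["output"]["text"])
# Each citation carries the location of the source chunk that grounded the answer.
for citation in response.get("citations", []):
    for ref in citation.get("retrievedReferences", []):
        print(ref["location"])
```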

Beyond answering user queries, a KB can augment prompts for Foundational Models by adding context to the prompt. This adds RAG capability to Agents for Amazon Bedrock.

Agents For Bedrock

The Amazon Bedrock Knowledge Base (KB) handles data ingestion, while Agents manage the Retrieval Augmented Generation (RAG) workflow. Agents for Amazon Bedrock automate prompt engineering and orchestrate the tasks users request.

Agents can perform various tasks, including:

  • Take actions to fulfill the user's request.

    Agents have predefined Action Groups, which are tasks they can perform autonomously. Each Action Group comprises a Lambda function and an API schema. A crucial aspect of the schema definition is the description of each endpoint: these descriptions act as prompts for your Agent, helping it understand when to use which API endpoint (a sketch of such a schema follows this list).

  • Break down complex user queries for the Foundational Model.

    Agents assist Foundation Models in comprehending user requests. In the upcoming workflow explanations, I will delve deeper into how Agents employ ReAct strategies to analyse user requests and determine the actions that Foundation Models should take to fulfill those queries.

  • Collect additional information

    When you create an Agent for Amazon Bedrock, you can configure it to collect additional information from the user through natural language conversation.
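To illustrate how endpoint descriptions guide an Agent, here is a hypothetical Action Group schema expressed as a Python dict that would be serialised to JSON. The booking API, its path, and its fields are all invented for illustration.

```python
import json

# Hypothetical Action Group OpenAPI schema for a travel-booking agent.
# The endpoint "description" doubles as guidance the Agent uses to decide
# when this operation applies to the user's request.
action_group_schema = {
    "openapi": "3.0.0",
    "info": {"title": "Booking API", "version": "1.0.0"},
    "paths": {
        "/bookings": {
            "post": {
                "operationId": "createBooking",
                "description": (
                    "Create a hotel booking when the user asks to reserve a room. "
                    "Requires guest name, check-in date and check-out date."
                ),
                "responses": {"200": {"description": "Booking confirmed."}},
            }
        }
    },
}

print(json.dumps(action_group_schema, indent=2))
```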

Common AI Workflows With Amazon Bedrock

Having explored the components of Amazon Bedrock, let's now delve into the most common patterns for integrating it into our Generative AI applications. Knowing these common blueprints helps us identify when a specific service is a good addition to our architecture and which blueprint is a good candidate for our use case.

Let's begin with the standard workflow, which simply uses the Amazon Bedrock API to invoke different models via the "InvokeModel" API.
This invocation can be initiated either by an event within your AWS account or through your application's API.

In an event-driven workflow, the model invocation can occur via S3 notifications when a file is uploaded to a specific S3 bucket. This is useful when, for example, newly uploaded documents need to be summarised using Amazon Bedrock Foundational Models.

[Diagram: event-driven invocation workflow]
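As a rough sketch of this pattern, the Lambda handler below reads the uploaded text file from S3 and asks a Bedrock model to summarise it. The model ID and the Anthropic-style request body are assumptions; adjust both to whichever model you use.

```python
# Minimal sketch of a Lambda handler behind an S3 notification: read the
# uploaded text file and ask a Bedrock model to summarise it.
import json
import boto3

s3 = boto3.client("s3")
bedrock_runtime = boto3.client("bedrock-runtime")

def handler(event, context):
    # S3 puts the bucket name and object key into the notification event.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    document = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 300,
            "messages": [{
                "role": "user",
                "content": f"Summarise the following document:\n\n{document}",
            }],
        }),
    )
    summary = json.loads(response["body"].read())["content"][0]["text"]
    return {"objectKey": key, "summary": summary}
```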

You could also set up your application API with an AWS Lambda function that invokes a Foundational Model. For instance, it could generate text based on a user-provided topic or describe an image uploaded by the user.

[Diagram: text generation via the application API]

This approach may appear simplistic, but that's the essence of utilising Amazon Bedrock as an API abstraction layer in your Generative AI application. Despite its simplicity, this method can yield effective responses, and answer quality can be further enhanced with common techniques like prompt engineering.

The next pattern I'd like to discuss involves creating RAG applications using the Knowledge Base, which blends prompt engineering techniques with information retrieved from external sources.

To set up a RAG workflow, begin by creating a Knowledge Base in Amazon Bedrock. This involves specifying the S3 bucket containing your external resources, setting the document chunk size, selecting the embedding model that generates vectors for the dataset, and choosing a vector database to store the indexes, such as Amazon OpenSearch. Choosing the chunk size is crucial: smaller chunks produce finer-grained embeddings, which improves retrieval accuracy and prevents large source documents from overloading the model's context window.
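If you create the data source programmatically rather than through the console, the chunking settings are passed as part of the data source configuration. The sketch below, using the boto3 bedrock-agent client, is an assumption-laden example: the Knowledge Base ID, bucket ARN, and chunk sizes are placeholders.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Hypothetical IDs and ARNs -- replace with your own resources.
bedrock_agent.create_data_source(
    knowledgeBaseId="KB1234567890",
    name="company-docs",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::my-company-docs"},
    },
    vectorIngestionConfiguration={
        "chunkingConfiguration": {
            "chunkingStrategy": "FIXED_SIZE",
            # Smaller chunks give finer-grained embeddings; tune for your documents.
            "fixedSizeChunkingConfiguration": {"maxTokens": 300, "overlapPercentage": 20},
        }
    },
)
```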

Similar to most AI-powered workflows, this one also starts with the user's input prompt. RAG uses the same embedding model to create a vector embedding representation of the input prompt. This embedding is then used to query the Knowledge Base for similar vector embeddings, and the most relevant text is returned as the query result. The query result is then added to the prompt, and the augmented prompt is passed to the FM. The model uses the additional context in the prompt to generate the response to the user's query.

[Diagram: RAG application with a Knowledge Base]

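The same flow can also be wired up by hand with the Retrieve API: fetch the closest chunks, augment the prompt, and call the model. In the sketch below, the Knowledge Base ID, the model ID, and the request body format are assumptions.

```python
# Sketch of the RAG flow using the Retrieve API directly: retrieve, augment, generate.
import json
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")
bedrock_runtime = boto3.client("bedrock-runtime")

def answer(question: str) -> str:
    # 1. Retrieve: the Knowledge Base embeds the query and returns the closest chunks.
    retrieved = agent_runtime.retrieve(
        knowledgeBaseId="KB1234567890",  # hypothetical KB ID
        retrievalQuery={"text": question},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
    )
    context = "\n\n".join(r["content"]["text"] for r in retrieved["retrievalResults"])

    # 2. Augment: add the retrieved text to the prompt as extra context.
    prompt = (
        f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate: pass the augmented prompt to the Foundational Model.
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 400,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```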

Ever since I was young, I've saved the best for last. Let's talk about the Amazon Bedrock workflow with Agents. Here, you can go beyond the limitations discussed earlier by combining all the Amazon Bedrock components, from the Knowledge Base to your company's own APIs, to empower the model to generate robust answers.

[Diagram: Agent for Amazon Bedrock workflow]

In an earlier section, I mentioned that Agents extend FMs by breaking complex user requests down into multiple steps. This happens during the pre-processing phase: when an Agent receives a user request, it first analyses it using the ReAct technique (Reason, Action, Observation).

During this phase, the Agent:

  • Reasons on the user query to understand the task at hand, determining whether it needs to call an API or access a Knowledge Base for information.
  • Takes action to fulfill the request by executing the necessary steps.
  • Returns the observation or results after completing the actions. It then incorporates this information into the input prompt, providing the model with additional context (Augmented Prompt).
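From the application's point of view, all of this happens behind a single call. Here is a minimal sketch of invoking an Agent with boto3; the agent ID and alias ID are placeholders, and the completion comes back as an event stream.

```python
# Minimal sketch of calling an Agent for Amazon Bedrock and reading its streamed answer.
import uuid
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

response = agent_runtime.invoke_agent(
    agentId="AGENT123456",        # hypothetical agent ID
    agentAliasId="ALIAS123456",   # hypothetical alias ID
    sessionId=str(uuid.uuid4()),  # keeps multi-turn context together
    inputText="Find me a hotel in Lisbon for next weekend and book it.",
)

# The agent streams back chunks as it reasons, calls Action Groups and queries the KB.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in response["completion"]
    if "chunk" in event
)
print(answer)
```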

Final Words

Understanding the structure of Amazon Bedrock and the typical architectures for integrating it into Gen AI applications can help us make more informed decisions about which components to use to achieve our goals. However, it can be challenging to determine the best workflow, especially for those new to the Gen AI field.

For simpler tasks that involve historical data, a standard approach with strong Prompt Engineering techniques is often effective. In contrast, for more complex tasks or when responses need to be specific to your dataset, leveraging Fine Tuning within Amazon Bedrock can be beneficial.

When the model requires external data resources to fulfill user requests, using a Knowledge Base or a combination of a Knowledge Base and an Agent can be helpful. A Knowledge Base workflow is suitable for relatively static data such as company documents or FAQs, while an Agent with a Knowledge Base is better for dynamic information like databases or APIs.

There is no one-size-fits-all solution, but the flexibility of Amazon Bedrock allows for various approaches to achieve the same result. The key is to choose the right approach for the task to achieve optimized results at minimal cost.

I hope you found this article useful. In the next part, I will demonstrate the most advanced workflow, where we will use an Agent with APIs and a Knowledge Base to create a Tourist and Travel Assistant using Amazon Bedrock, providing all the code snippets and a code repository for your reference.
