
Renaldi for AWS Community Builders


A Deep Dive into Prompt Engineering for Amazon Bedrock (...Mostly for AI21 Foundational Models)


Prompt engineering is a pivotal skill for effectively using generative AI models like those offered by Amazon Bedrock. Crafting precise prompts can significantly improve outcomes, guiding the model towards the responses you want. In this blog post, we'll walk through a step-by-step guide to prompt engineering with Amazon Bedrock, focusing on its AI21 foundation models and illustrating the process with practical use cases.

Step 1. Understanding the Task:
Let's say you want to summarize medical articles. The first step is choosing a model suited to that use case. Amazon Bedrock provides a variety of foundation models (FMs) from Amazon and leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, and Stability AI. These include Amazon Titan, Jurassic-2, Claude 2, Command, Llama 2, and Stable Diffusion XL, covering different modalities like text, embeddings, and images. It's worth experimenting with several FMs in the Amazon Bedrock playground to quickly determine which one fits your use case, as different models respond differently to the prompts shown below.

To see all available foundation models, you can run the following with the AWS CLI:
aws bedrock list-foundation-models
For this use case, we'll use AI21's Jurassic-2 model, with the model ID ai21.j2-ultra.
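
If you'd rather explore the catalogue from code, the same listing is available through the SDK. Here's a minimal sketch using boto3, assuming your AWS credentials and region are already configured (the provider-name filter string is an assumption; call the API without the filter to see the exact provider names in your account):

```python
import boto3

# Control-plane client used for model discovery (not for invoking models)
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Filter the catalogue down to AI21 Labs models; drop the filter to see everything
response = bedrock.list_foundation_models(byProvider="AI21 Labs")

for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["modelName"])
```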

Step 2. Crafting Your Initial Prompt:
Start with a straightforward prompt that tells the model exactly what to do with the text, being as specific as you can.
Example Prompt:
Summarize the following medical article: {article_text}
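
To give a feel for what sending this prompt looks like outside the console, here is a rough sketch that calls the Bedrock runtime with the ai21.j2-ultra model ID from Step 1 (if your region lists a versioned ID such as ai21.j2-ultra-v1, use that instead). The request-body fields follow the Jurassic-2 format, and article_text is a placeholder for your own document:

```python
import json
import boto3

# Runtime client used for model invocation
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

article_text = "..."  # placeholder: paste the medical article here
prompt = f"Summarize the following medical article: {article_text}"

# Jurassic-2 request body: the prompt plus basic sampling parameters
body = json.dumps({
    "prompt": prompt,
    "maxTokens": 300,
    "temperature": 0.3,
    "topP": 0.9,
})

response = bedrock_runtime.invoke_model(modelId="ai21.j2-ultra", body=body)
result = json.loads(response["body"].read())

# Jurassic-2 returns generated text under completions[0].data.text
print(result["completions"][0]["data"]["text"])
```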

Step 3. Testing and Refining Your Prompt:
Utilize the Text Playground in Amazon Bedrock to test your prompt.
Refined Prompt:
Provide a concise summary of the following medical article, focusing on the main findings and conclusions: {article_text}

Step 4. Incorporating Explicit Instructions:
You can also separate the article from the instruction with explicit labels, as shown below.
Example Prompt for Summarization:

```
Article: {article_text}
Instruction: Summarize the key findings in one paragraph.
```

This prompt makes it clear that you want a paragraph-length summary focusing on key findings.
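
If you build these prompts in code, a small helper keeps the Article/Instruction structure consistent across calls. A sketch (the function name and default instruction are just illustrative):

```python
def build_prompt(article_text: str,
                 instruction: str = "Summarize the key findings in one paragraph.") -> str:
    """Assemble a labelled Article/Instruction prompt like the example above."""
    return f"Article: {article_text}\nInstruction: {instruction}"
```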

Step 5. Using Sample Outputs for Better Results:
Providing a sample output (a previous summary, for example) gives the model a concrete reference for the length and style you expect, as in the example below.
Example for Summarization:

```
Article: {article_text}
Sample Summary: {previous_summary}
Instruction: Provide an updated summary based on the article above.
```
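In code, this is just one more labelled section in the prompt. A sketch extending the helper from Step 4 (again, the names are illustrative):

```python
def build_prompt_with_sample(article_text: str, sample_summary: str, instruction: str) -> str:
    """Include a previous summary as a one-shot example before the instruction."""
    return (
        f"Article: {article_text}\n"
        f"Sample Summary: {sample_summary}\n"
        f"Instruction: {instruction}"
    )
```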

Step 6. Iteratively Refining Prompts:
Test different prompt structures and instructions to guide the model towards desired outputs. Collect feedback on model outputs and iteratively refine your prompts accordingly.
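
One practical way to do this is to run several prompt variants against the same input and compare the outputs side by side. The sketch below wraps the invocation from Step 2 in a small complete() helper (the helper name, parameter defaults, and variant list are all illustrative) that the later examples in this post will reuse:

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def complete(prompt, model_id="ai21.j2-ultra", max_tokens=300, temperature=0.3):
    """Send a prompt to a Jurassic-2 model on Bedrock and return the completion text."""
    body = json.dumps({
        "prompt": prompt,
        "maxTokens": max_tokens,
        "temperature": temperature,
        "topP": 0.9,
    })
    response = bedrock_runtime.invoke_model(modelId=model_id, body=body)
    return json.loads(response["body"].read())["completions"][0]["data"]["text"]

article_text = "..."  # placeholder: the article used for all variants

prompt_variants = [
    "Summarize the following medical article: {article}",
    "Provide a concise summary of the following medical article, focusing on the main findings and conclusions: {article}",
    "Article: {article}\nInstruction: Summarize the key findings in one paragraph.",
]

for template in prompt_variants:
    print("--- Variant ---")
    print(complete(template.format(article=article_text)))
```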

Step 7. Exploring Advanced Prompting Techniques:
We can also extract specific information from a piece of text to generate insights from it. In this case, for example, we can extract the main topic discussed in the text.

```
Text: {text}
Question: Identify the main topic discussed in the text above.
```

Output Example: "The main topic discussed is {topic}"
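
Reusing the complete() helper sketched in Step 6, the extraction prompt can be sent with a low temperature and a small token budget, since we only expect a short answer (the parameter values are illustrative):

```python
text = "..."  # placeholder: the text to analyse
topic = complete(
    f"Text: {text}\nQuestion: Identify the main topic discussed in the text above.",
    max_tokens=50,      # a topic only needs a short answer
    temperature=0.0,    # keep the extraction as deterministic as possible
)
print(topic)
```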

Step 8. Utilizing Provided Examples and Resources:
Explore the GitHub repository Amazon Bedrock Prompting Examples & Tools for more examples and tools specifically designed for Amazon Bedrock: https://github.com/aws-samples/amazon-bedrock-prompting.

Step 9. Experimenting with Different Models:
Different FMs in Amazon Bedrock might respond better to different prompting strategies. Experiment to find what works best for your task.
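
Within the AI21 family this is easy to script, because the Jurassic-2 models share the same request format. A sketch comparing two of them with the complete() helper from Step 6 (models from other providers use different request and response bodies, so they need their own invocation code; the model IDs below are unversioned, so check list-foundation-models for the exact IDs in your region):

```python
for model_id in ["ai21.j2-mid", "ai21.j2-ultra"]:
    print(f"--- {model_id} ---")
    print(complete("Summarize the following medical article: " + article_text,
                   model_id=model_id))
```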

Step 10. Consulting Community and Official Resources:
Engage with the community on forums or GitHub, and consult official documentation for further guidance on prompt engineering with Amazon Bedrock.

Now, let’s delve deeper into various other practical use cases:

Step 11. Summarizing Text (e.g., summarizing user reviews for brevity):
You may want a succinct summary that encapsulates the sentiments of users regarding Product X.
Example Prompt:
Summarize the following user reviews about Product X: {user_reviews}
Refinement: To ensure the model captures user sentiments, we specify what aspect of the reviews to focus on.
Refined Prompt:
Provide a brief summary of the user sentiments about Product X based on the following reviews: {user_reviews}

Step 12. Inferring from Text (e.g., sentiment classification, topic extraction):
Sentiment Classification:
You want to know whether the review of Product Y is positive, negative, or neutral.
Example Prompt:
Classify the sentiment of the following review of Product Y: {review_text}
Refinement: Specifying the sentiment categories helps guide the model to the desired output format.
Refined Prompt:
Determine the sentiment (positive, negative, or neutral) expressed in the following review of Product Y: {review_text}

Topic Extraction:
You want to extract the main topics discussed in an article.
Example Prompt:
Identify the main topics discussed in the following article: {article_text}
Refinement: Asking the model to list the topics can help obtain a more structured output.
Refined Prompt:
List the key topics discussed in the following article, separating each topic with a comma: {article_text}
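
Both inference tasks map directly onto the complete() helper from Step 6; keeping the temperature at zero and the token budget small nudges the model towards a short, label-like answer (the values and placeholder variables are illustrative):

```python
review_text = "..."   # placeholder: a review of Product Y
article_text = "..."  # placeholder: the article to analyse

sentiment = complete(
    f"Determine the sentiment (positive, negative, or neutral) expressed in the following review of Product Y: {review_text}",
    max_tokens=10,
    temperature=0.0,
)

topics = complete(
    f"List the key topics discussed in the following article, separating each topic with a comma: {article_text}",
    max_tokens=60,
    temperature=0.0,
)

print(sentiment, topics, sep="\n")
```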

Step 13. Text Transforming (e.g., translation, spelling & grammar correction):

Translation:
You want to translate text from English to French.
Example Prompt:
Translate the following text from English to French: {english_text}
Refinement: Adding a requirement for fluency and preservation of original meaning ensures a higher-quality translation.
Refined Prompt:
Provide a fluent translation of the following English text to French, preserving the original meaning: {english_text}

Spelling & Grammar Correction:
You want to correct spelling and grammar mistakes in a text.
Example Prompt:
Correct the spelling and grammar in the following text: {text_with_errors}
Refinement: Explicitly asking for a corrected version can reinforce the task at hand.
Refined Prompt:
Provide a corrected version of the following text, addressing spelling and grammatical errors: {text_with_errors}
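
Transformation tasks follow the same pattern; here is a sketch of both calls using the complete() helper from Step 6 (the placeholder variables stand in for your own text):

```python
english_text = "..."       # placeholder: text to translate
text_with_errors = "..."   # placeholder: text to proofread

translation = complete(
    f"Provide a fluent translation of the following English text to French, preserving the original meaning: {english_text}"
)

corrected = complete(
    f"Provide a corrected version of the following text, addressing spelling and grammatical errors: {text_with_errors}"
)

print(translation, corrected, sep="\n")
```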

Step 14. Expanding the Text (e.g., automatically writing emails):
You want to draft a follow-up email regarding Project Z based on a previous discussion.
Example Prompt:
Draft a polite follow-up email to inquire about the status of Project Z based on the previous discussion: {previous_discussion_summary}

Refinement: Specifying a courteous tone and referencing previous discussions may help in crafting a more professional and contextually appropriate email.

Refined Prompt:
Compose a courteous follow-up email to inquire about the status of Project Z, referencing the points discussed earlier: {previous_discussion_summary}
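
Because expansion tasks produce longer outputs than classification or extraction, it helps to raise the token budget and allow a bit more sampling freedom. A sketch with the complete() helper from Step 6 (the parameter values are illustrative):

```python
previous_discussion_summary = "..."  # placeholder: notes from the earlier discussion

email = complete(
    f"Compose a courteous follow-up email to inquire about the status of Project Z, "
    f"referencing the points discussed earlier: {previous_discussion_summary}",
    max_tokens=500,      # leave room for a full email
    temperature=0.7,     # allow some variation in tone and wording
)
print(email)
```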

This guide, interspersed with practical examples, aims to provide a human-centric and hands-on approach to prompt engineering with Amazon Bedrock. Each step is structured to give a clear pathway from understanding the task at hand to effectively crafting and refining prompts for desired outcomes. By following this guide, developers can significantly enhance their proficiency in prompt engineering, making the most out of the powerful foundational models offered by Amazon Bedrock.
