Matt Lewis for AWS Heroes

Top tips to pass the AWS AI Practitioner Exam

I sat and passed the AWS Certified AI Practitioner exam last week. It’s currently still in beta, but that comes with the bonus of receiving an Early Adopter badge for anyone who is successful before Feb 15th, 2025.

It’s classed as a Foundational level exam, which puts it alongside the AWS Cloud Practitioner exam. But don’t let that fool you: a good general knowledge of the AWS cloud helps, but you definitely need to freshen up on AI/ML terminology and concepts to be confident of a pass going into the exam.

Study Guides

Alongside the official exam guide, I used the following online resources:

There are additional resources available on AWS Skill Builder for anyone with a subscription.

Top Tips

There is no shortcut to passing the exam other than covering all of the material called out in the exam guide. However, the following are my top five topics to make sure you know well, to help maximise your chances.

Machine Learning Lifecycle

It is important to understand the various phases of the machine learning lifecycle and the order in which they take place, e.g. feature engineering takes place before model training. More detail is provided in the Machine Learning Lens of the AWS Well-Architected Framework:

ML Lifecycle Phases

It is also important to understand what AWS services are available to help in each phase such as AWS Glue and Amazon SageMaker Data Wrangler.

Model Selection and Customisation

It is critical to understand the trade-offs in time, effort and complexity when selecting an appropriate model and ensuring it meets your requirements. The following is a high-level summary:

(Image: model customisation options, from hosted AI services through prompt engineering, RAG and fine-tuning to building your own model, in increasing order of time, effort and complexity)

The simplest option is to use a hosted AI/ML service such as Amazon Comprehend or Amazon Rekognition. You will need to know about all of the AWS hosted AI/ML services and what their capabilities are at a high level, e.g. text-to-speech (Amazon Polly) and text translation (Amazon Translate).
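To make that concrete, here is a minimal boto3 sketch (my own illustration, not from the exam guide) that calls Amazon Comprehend for sentiment detection; it assumes your AWS credentials and region are already configured.

```python
import boto3

# Minimal sketch: calling a hosted AI service (Amazon Comprehend) for sentiment detection.
comprehend = boto3.client("comprehend")

response = comprehend.detect_sentiment(
    Text="The new checkout flow is fantastic and much faster.",
    LanguageCode="en",
)
print(response["Sentiment"])       # e.g. POSITIVE
print(response["SentimentScore"])  # confidence score per sentiment class
```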

Following this, you can use pre-trained foundation models available in services such as Amazon Bedrock and Amazon SageMaker JumpStart, or you can bring your own model into Amazon SageMaker.

The recommended way to first customise a model is prompt engineering. You need to understand the different types of prompting (zero-shot, few-shot, chain-of-thought) and the use of prompt templates. Remember that with prompt engineering, there is no change to the underlying model weights.
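As a simple illustration, a prompt template is just text with placeholders that you fill in before sending the prompt to the model; the template below is made up for this example.

```python
# A made-up prompt template: only the text sent to the model changes,
# never the model weights.
PROMPT_TEMPLATE = """You are a support assistant for an online bookshop.
Answer the customer's question in no more than two sentences.

Question: {question}
"""

prompt = PROMPT_TEMPLATE.format(question="Can I return an ebook?")
# 'prompt' would then be sent to a model, for example via Amazon Bedrock.
```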

Retrieval-Augmented Generation (RAG) is another approach to improving responses, this time by referencing an external knowledge base outside of the LLM's training data. There are a variety of options in this space, most notably Amazon Bedrock Knowledge Bases, but it is also possible to roll your own RAG solution using vector databases such as Amazon OpenSearch Service or the pgvector extension for PostgreSQL.
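As a hedged sketch of the managed option, Bedrock Knowledge Bases exposes a retrieve-and-generate call that fetches relevant documents and passes them to the model in a single step. The knowledge base ID and model ARN below are placeholders.

```python
import boto3

# Sketch of RAG via Amazon Bedrock Knowledge Bases; IDs and ARNs are placeholders.
client = boto3.client("bedrock-agent-runtime")

response = client.retrieve_and_generate(
    input={"text": "What is our parental leave policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB-ID-PLACEHOLDER",
            "modelArn": "arn:aws:bedrock:eu-west-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)
print(response["output"]["text"])  # answer grounded in the retrieved context
```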

Next up is fine-tuning, with a couple of different options depending on where your model is hosted. Remember that with fine-tuning you are changing the weights of the model. Amazon Bedrock supports fine-tuning and continued pre-training, and there are important distinctions between the two:

  • Fine-tuning relies on you providing your own labelled data set to the model. Be aware that if you only provide instructions for a single task, the model may lose its more general purpose capability and experience catastrophic forgetting.
  • Continued pre-training uses unlabelled data, and is useful for exposing the model to domain-specific text without targeting a single task.
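Both options are started as model customisation jobs in Bedrock. Here is a hedged boto3 sketch; every name, ARN and S3 URI is a placeholder, and the hyperparameter keys vary by base model.

```python
import boto3

# Sketch of starting a Bedrock fine-tuning job; all identifiers are placeholders.
bedrock = boto3.client("bedrock")

bedrock.create_model_customization_job(
    jobName="support-bot-finetune",
    customModelName="support-bot-v1",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomisationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",  # or "CONTINUED_PRE_TRAINING"
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
```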

After you have created your custom model, you need to purchase provisioned throughput to be able to use it.

Amazon SageMaker supports both domain-adaptation fine-tuning and instruction-based fine-tuning.

  • Domain adaptation fine-tuning allows you to leverage pre-trained foundation models and adapt them to specific tasks using limited domain-specific data.
  • Instruction-based fine-tuning uses labeled examples to improve the performance of a pre-trained foundation model on a specific task.

The choice comes down to whether you want to train a model around domain data or to follow instructions and perform a specific task.

Finally, the most time-consuming and costly approach is to create your own custom model with Amazon SageMaker.

Parameters

It is useful to understand what parameters are available to you. These fall into two distinct categories.

Hyperparameters are used to control the training process. The most common are:

  • Epoch: The number of iterations through the entire training dataset
  • Batch Size: The number of samples processed before updating model parameters
  • Learning Rate: The size of the step taken when updating model parameters after each batch
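To see where each of these plugs in, here is a toy training loop in plain NumPy (nothing AWS-specific, and the data is synthetic) fitting a linear model with mini-batch gradient descent.

```python
import numpy as np

# Toy example only: linear regression on synthetic data to show where
# epochs, batch size and learning rate appear in a training loop.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

epochs = 5            # full passes through the training dataset
batch_size = 20       # samples processed before each parameter update
learning_rate = 0.1   # step size for each parameter update

w = np.zeros(3)
for epoch in range(epochs):
    for start in range(0, len(X), batch_size):
        xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # gradient of the MSE loss
        w -= learning_rate * grad                  # update the parameters
print(w)  # should end up close to [1.0, -2.0, 0.5]
```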

Inference parameters are settings you can adjust at inference time that influence the response from the model. The most common are listed below, with a short invocation sketch after the list:

  • Temperature: A value between 0 and 1 that regulates the creativity of the model's responses. Use a lower temperature if you want more deterministic responses, and a higher temperature if you want more creative or varied responses to the same prompt.
  • Top K: The number of most-likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.
  • Top P: The percentage of most-likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.
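Here is a hedged sketch of passing these parameters with the Bedrock Converse API via boto3; the model ID is a placeholder, and Top K is model-specific so it goes through the additional request fields rather than the standard inference configuration.

```python
import boto3

# Sketch of setting inference parameters with the Bedrock Converse API.
runtime = boto3.client("bedrock-runtime")

response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Suggest a name for a bakery."}]}],
    inferenceConfig={"temperature": 0.2, "topP": 0.9, "maxTokens": 100},
    additionalModelRequestFields={"top_k": 50},  # Top K is model-specific
)
print(response["output"]["message"]["content"][0]["text"])
```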

Amazon SageMaker Capabilities

Amazon SageMaker is a service that provides a whole host of features and capabilities you need to be aware of. The exam will test your awareness of these, such as:

  • SageMaker Clarify
  • SageMaker JumpStart
  • SageMaker Studio
  • SageMaker Data Wrangler
  • SageMaker Feature Store
  • SageMaker Model Cards
  • SageMaker Model Dashboard
  • SageMaker Model Monitor
  • SageMaker Ground Truth

Metrics

Finally, expect to see a number of questions around model performance metrics, and which one is the most appropriate.

For classification tasks such as spam detection you have:

  • Accuracy: The ratio of correctly predicted instances to the total instances
  • Precision: How many of the predicted positive cases are actually positive
  • Recall: How many of the actual positive cases were predicted correctly
  • F1 Score: The F1 score combines precision and recall into a single metric

You should be aware of the confusion matrix, and when you might want to optimise for recall (life-saving tasks such as cancer diagnosis, where you want to minimise false negatives) versus when you might want to optimise for precision (minimising false positives, such as in spam email detection).
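A quick worked example, using made-up counts from a spam classifier's confusion matrix, shows how the four metrics relate.

```python
# Made-up confusion matrix counts for a spam classifier.
tp, fp, fn, tn = 80, 10, 20, 890   # true positives, false positives, false negatives, true negatives

accuracy  = (tp + tn) / (tp + fp + fn + tn)                # 0.97
precision = tp / (tp + fp)                                 # ~0.89: of predicted spam, how much really was spam
recall    = tp / (tp + fn)                                 # 0.80: of actual spam, how much was caught
f1        = 2 * precision * recall / (precision + recall)  # ~0.84: balance of precision and recall
```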

Metrics for text generation tasks include:

  • ROUGE (Recall-Oriented Understudy for Gisting Evaluation): used to evaluate text generation or summarisation tasks
  • BLEU (Bilingual Evaluation Understudy Score): used for translation tasks
  • Perplexity: measures how well a model can predict a sequence of tokens or words in a given dataset.
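Of these, perplexity is the easiest to show with a toy calculation: it is the exponential of the average negative log-probability the model assigns to each token, so lower is better. The probabilities below are invented.

```python
import math

# Invented per-token probabilities assigned by a model to a short sequence.
token_probs = [0.4, 0.25, 0.6, 0.1, 0.35]

perplexity = math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))
print(perplexity)  # ~3.4; a perplexity near 1 means the model was almost certain each time
```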

You should also know how to interpret model performance. For example, if your model performs better on training data than on new data, it is overfitting and exhibiting high variance. If your model performs poorly on both training data and new data, it is underfitting and exhibiting high bias. If there are disparities in your model's performance across different groups, then it is showing bias.

Taking the Exam

Congratulations if you have booked your exam. Having taken a number of AWS exams over the years, these are my top tips for the exam itself:

  1. Don't panic. It's likely that one or two questions will come up that confuse you. Simply flag them for review and move on.
  2. Read the question carefully and pick out the key information it is asking for.
  3. If you don't know the answer for a question, you can often eliminate a couple of the possible options, which will help narrow it down.

Top comments (2)

Anitha Acharya

Thank you! Very helpful guide. I cleared the AWS AI Practitioner exam this morning. I additionally took mock exams from Skillcertpro to test my readiness, and to my surprise they were a great help; I got nearly 80% from these tests on my main exam. Always make sure to go through the explanations to understand the concepts. They also provided exam notes, which are a great add-on for last-minute reference.

For others who are going to take the test: the exam has a lot of lengthy and twisted questions. Manage your time well. Good luck.

Emma Martin

Make sure to focus on core AI concepts and AWS services like SageMaker. Practice with real-world scenarios, and use Passexam4sure dumps to familiarize yourself with the exam format.