In recent years, Google's advancements in generative AI have redefined how we interact with machine learning models, particularly through Vertex AI and the powerful Gemini models. Traditionally, machine learning models required vast amounts of labelled data to perform well on specific tasks. With the introduction of few-shot learning, however, these generative AI models can now achieve impressive performance with just a handful of examples.
Few-shot learning offers a new approach, allowing models to generalize and adapt quickly, even with limited data. This is beneficial in scenarios where acquiring large datasets is impractical or costly.
In this article, we'll dive into the practical steps of implementing few-shot learning on Vertex AI using Gemini. We'll explore how to craft effective prompts, configure the environment, and generate meaningful outputs with just a few examples.
What is Few-Shot Learning?
Few-shot learning is a machine learning technique in which a model learns a new task from only a small number of examples. This is in contrast to traditional machine learning models, which typically require large datasets to achieve good performance.
Few-shot learning uses pre-trained models, often through transfer learning or meta-learning techniques. The model is first trained on a broad dataset to learn general patterns and representations. When introduced to a new task, it applies this prior knowledge along with the few provided examples to make predictions.
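Conceptually, a few-shot prompt is just a short list of input/output pairs followed by a new input that the model is asked to complete in the same pattern. A toy translation example, purely for illustration:
Input: "cheese" Output: "fromage"
Input: "good morning" Output: "bonjour"
Input: "thank you" Output:
Given only those two solved pairs, the model is expected to continue the pattern and answer "merci".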
Overview of Vertex AI
Google's Vertex AI is a comprehensive, fully managed machine learning (ML) platform designed to streamline the development, deployment, and scaling of ML models and AI applications. By unifying various Google Cloud services under one umbrella, Vertex AI simplifies the entire ML workflow, enabling both beginners and experts to build robust AI solutions efficiently. Key features of Vertex AI include:
Unified Interface
Integration with Open-Source Frameworks (TensorFlow, PyTorch)
AutoML Capabilities
Pre-Trained APIs
End-to-End Integration with Google Cloud Services
MLOps Support (Monitoring, Versioning, Model Management)
Support for Custom and Pre-Trained Models
Experiment Tracking and Model Training
Hyperparameter Tuning
Feature Store for Data Management
Model Deployment and Serving
Scalability and High Performance
Setting up Vertex AI
Setting up Vertex AI involves a few configuration steps. For a more detailed walkthrough, head over to the Vertex AI docs, which cover the various setup and installation options. At a high level, the setup looks like this:
Head over to Google Cloud Console.
Create a new project.
Enable the Vertex AI API by navigating to “APIs & Services” > “Library” and searching for “Vertex AI”.
Click on “Enable” to activate the API.
To use Vertex AI locally, the Vertex AI SDK for Python can be installed via pip using the command
pip install google-cloud-aiplatform
To learn more about the Vertex AI SDK or the other available interfaces, refer to the documentation.
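One additional step when running locally: the SDK needs Google Cloud credentials. A common way to provide them (assuming the gcloud CLI is installed) is Application Default Credentials:
gcloud auth application-default login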
Implementing Few-Shot with Gemini on Vertex AI
While various models, including Google's PaLM and third-party options such as Anthropic's Claude, support this approach, this article focuses on using Gemini, Google's advanced multimodal model available on Vertex AI. Gemini excels at handling diverse inputs, including text, images, and video, making it a versatile choice for implementing few-shot learning.
You can interact with Gemini through the Vertex AI SDK for Python or the user-friendly Vertex AI Studio interface.
Using the Vertex AI SDK
To use Vertex AI via its SDK, the library first has to be installed using the command:
pip install google-cloud-aiplatform
After installing, import the necessary packages:
import vertexai
from vertexai.generative_models import GenerativeModel
This is the point where Vertex AI and the model are initialized:
vertexai.init(project="project-id", location="location")
model = GenerativeModel("gemini-2.0-pro-exp-02-05")
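Optionally, for a classification-style task like the sentiment example below, it can help to make the responses more deterministic. As a small sketch (the exact values here are just illustrative assumptions), a GenerationConfig can be passed along when generating content:
from vertexai.generative_models import GenerationConfig
# Low temperature and a small output budget suit short, label-style answers
generation_config = GenerationConfig(temperature=0.2, max_output_tokens=10)
# Used later as: model.generate_content(prompt, generation_config=generation_config)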
Let’s set up some examples for the few-shot task. This example classifies the sentiment of movie reviews.
examples = """
Input: "This movie was amazing!"
Output: Positive
Input: "I hated this film."
Output: Negative
Input: "The acting was okay, but the plot was boring."
Output: Negative
"""
Now, let’s combine the examples with a user’s input and ask the model for the sentiment.
# The Sentence We Want to Analyze
user_input = "I absolutely loved the story and the characters!"
# Combine Examples and the User Input into a Single Prompt
prompt = f"{examples}\nInput: {user_input}\nOutput:"
# Get the Response from Gemini
response = model.generate_content(prompt)
# Print the Result
print(f"Sentiment: {response.text}")
Note: To find the project ID and location, head over to the project’s home page in the Google Cloud console.
The output:
Sentiment: Output: Positive
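Notice that Gemini echoed the “Output:” label from the examples. If only the sentiment word is wanted, a small bit of post-processing (plain Python, not part of the SDK) cleans that up:
# Strip the echoed "Output:" label from the model's reply, if present
sentiment = response.text.strip()
if sentiment.startswith("Output:"):
    sentiment = sentiment[len("Output:"):].strip()
print(f"Sentiment: {sentiment}")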
Now, let’s move on to using Vertex AI Studio.
Using Vertex AI Studio
Vertex AI Studio is an integrated development environment within Google Cloud's Vertex AI platform, designed for rapid prototyping, testing, and deployment of machine learning models, including generative AI models like Google's Gemini. It offers a user-friendly interface that lets developers and data scientists interact with models, craft and refine prompts, and evaluate outputs without extensive coding.
Here’s a peek at the Studio:
To use Vertex AI Studio for the few-shot task, let’s write the prompt in the “Prompt” section and run it.
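For instance, the same sentiment prompt used with the SDK can be pasted straight into the prompt field:
Input: "This movie was amazing!"
Output: Positive
Input: "I hated this film."
Output: Negative
Input: "The acting was okay, but the plot was boring."
Output: Negative
Input: "I absolutely loved the story and the characters!"
Output: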
Images can also be passed for the few-shot task, as the sketch below illustrates.
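As a rough sketch of how that looks with the SDK (the Cloud Storage URI below is a placeholder; point it at an image you can actually read), text and image parts can be mixed in a single request:
from vertexai.generative_models import GenerativeModel, Part

model = GenerativeModel("gemini-2.0-pro-exp-02-05")
# Hypothetical Cloud Storage URI; replace with an image in your own bucket
image = Part.from_uri("gs://your-bucket/poster.jpg", mime_type="image/jpeg")
# Text and image parts can be combined in one generate_content request
response = model.generate_content([
    "Classify the mood of this movie poster as Positive or Negative.",
    image,
])
print(response.text)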
Conclusion
In this article, we've explored the implementation of few-shot learning using Gemini on Vertex AI, highlighting its versatility in handling diverse data inputs and the flexibility offered through both the Vertex AI SDK and Vertex AI Studio. By effectively using these tools, developers and data scientists can build robust AI models capable of learning from minimal examples, thereby accelerating the development of intelligent applications.