
James Murdza

Python code for every LLM API: OpenAI, Anthropic, Cohere, Mistral and Gemini ⚡️

Did you know you don't need to import any special libraries to generate text via an LLM?

All you need is an API key for the LLM platform, a Python environment, and the requests library. From there, you can get started in just a few lines of code.

Quick start!

If you are working from the command line, you can run any of the scripts below with a simple call:

OPENAI_API_KEY=yourkeygoeshere python3 script.py

OR, you can add OPENAI_API_KEY=yourkeygoeshere to a .env file in the same directory, and add this code to the top of your script:

# Requires the python-dotenv package (pip install python-dotenv)
from dotenv import load_dotenv
load_dotenv()

OR, if you're using Google Colab, click the 🔑 icon in the left sidebar and follow the resulting instructions.
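If you store your key there as a Colab secret, you can load it into the environment so the scripts below work unchanged (a minimal sketch, assuming you saved the secret under the name OPENAI_API_KEY):

from google.colab import userdata
import os

# Copy the Colab secret into an environment variable
os.environ["OPENAI_API_KEY"] = userdata.get("OPENAI_API_KEY")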

API Reference

Start by copying the example code below for the API you need, then scroll to the end of this article for more tips.

OpenAI

🔑 Get API key here.

📃 API docs.

import requests
import os

# Call the Chat Completions endpoint; the key is read from the OPENAI_API_KEY environment variable
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"]
    },
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"}
        ]
    }
)

print(response.content)
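The response body is JSON, so printing response.content dumps the raw bytes. To print only the generated text, parse it first (a minimal sketch, assuming the request succeeded and the standard Chat Completions response shape):

data = response.json()
# The assistant's reply lives in the first choice's message
print(data["choices"][0]["message"]["content"])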

Anthropic

🔑 Get API key here.

📃 API docs.

import requests
import os

# Call the Text Completions endpoint; the key is read from the ANTHROPIC_API_KEY environment variable
response = requests.post(
    "https://api.anthropic.com/v1/complete",
    headers={
        "accept": "application/json",
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
        "x-api-key": os.environ["ANTHROPIC_API_KEY"]
    },
    json={
        "model": "claude-2.1",
        "prompt": "\n\nHuman: Hello, world!\n\nAssistant:",
        "max_tokens_to_sample": 256
    }
)

print(response.content)
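As above, you can parse the JSON response to get just the completion text (a sketch, assuming a successful response from the Text Completions endpoint):

data = response.json()
# The generated text is returned in the "completion" field
print(data["completion"])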

Cohere

🔑 Get API key here.

📃 API docs.

import requests
import os

# Call the Chat endpoint; the key is read from the COHERE_API_KEY environment variable
response = requests.post(
    "https://api.cohere.ai/v1/chat",
    headers={
        "accept": "application/json",
        "content-type": "application/json",
        "Authorization": f"Bearer {os.environ['COHERE_API_KEY']}"
    },
    json={
        "chat_history": [
            {"role": "USER", "message": "Who discovered gravity?"},
            {"role": "CHATBOT", "message": "The man who is widely credited with discovering gravity is Sir Isaac Newton"}
        ],
        "message": "What year was he born?",
        "connectors": [{"id": "web-search"}]
    }
)

print(response.content)
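To print only the chatbot's reply rather than the raw bytes (a sketch, assuming a successful Chat endpoint response):

data = response.json()
# Cohere's Chat endpoint returns the reply in the "text" field
print(data["text"])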

Mistral

🔑 Get API key here.

📃 API docs.

import requests
import os

# Call the Chat Completions endpoint; the key is read from the MISTRAL_API_KEY environment variable
response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"
    },
    json={
        "model": "mistral-tiny",
        "messages": [{"role": "user", "content": "Who is the most renowned French writer?"}]
    }
)

print(response.content)
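Mistral's response follows the same chat-completions shape as OpenAI's, so extracting the text looks identical (a sketch, assuming the request succeeded):

data = response.json()
# The reply is in the first choice's message
print(data["choices"][0]["message"]["content"])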

Google (Gemini)

🔑 Get API key here.

📃 API docs.

import requests
import os

# Call the generateContent endpoint; the key is passed as a query parameter from GOOGLE_API_KEY
response = requests.post(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=" + os.environ["GOOGLE_API_KEY"],
    headers={
        "Content-Type": "application/json"
    },
    json={
        "contents": [
            {
                "parts": [
                    {
                        "text": "Write a story about a magic backpack."
                    }
                ]
            }
        ]
    }
)

print(response.content)
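To pull out just the generated story text (a sketch, assuming a successful generateContent response with at least one candidate):

data = response.json()
# Each candidate holds a content object with a list of parts
print(data["candidates"][0]["content"]["parts"][0]["text"])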

Top comments (1)

Sergey Inozemtsev

To avoid overthinking it, I created an adapter, so now my code works the same way for OpenAI, Anthropic, and Google:

from llm_api_adapter.messages.chat_message import AIMessage, Prompt, UserMessage
from llm_api_adapter.universal_adapter import UniversalLLMAPIAdapter

messages = [
    Prompt(
        "You are a friendly assistant who explains complex concepts "
        "in simple terms."
    ),
    UserMessage("Hi! Can you explain how artificial intelligence works?"),
    AIMessage(
        "Sure! Artificial intelligence (AI) is a system that can perform "
        "tasks requiring human-like intelligence, such as recognizing images "
        "or understanding language. It learns by analyzing large amounts of "
        "data, finding patterns, and making predictions."
    ),
    UserMessage("How does AI learn?"),
]

gpt = UniversalLLMAPIAdapter(
    organization="openai",
    model="gpt-3.5-turbo",
    api_key=openai_api_key
)
gpt_response = gpt.generate_chat_answer(messages=messages)
print(gpt_response.content)

claude = UniversalLLMAPIAdapter(
    organization="anthropic",
    model="claude-3-haiku-20240307",
    api_key=anthropic_api_key
)
claude_response = claude.generate_chat_answer(messages=messages)
print(claude_response.content)

google = UniversalLLMAPIAdapter(
    organization="google",
    model="gemini-1.5-flash",
    api_key=google_api_key
)
google_response = google.generate_chat_answer(messages=messages)
print(google_response.content)