TurnV X

AI Code Assistant — Continue Custom Configuration for AI Development Using OpenAI GPT Models or Claude 3.5 Models

Introduction

This tutorial will guide you through installing and customizing the Continue plugin in Visual Studio Code (VSCode) and using the Claude 3.5 model for AI-assisted development. By following it, you'll be able to use an AI assistant efficiently to boost your development productivity. (This method provides access to all the major large models without a VPN.)

Key Considerations

Key Point: Whether you're using OpenAI's GPT models, Claude models, or any other model, you only need to modify Continue's config.json configuration file!

API Key: Create one on the CURSOR API large-model platform. Example: sk-1Qpxob9KYXq6b6oCypgyxjFwuiA817KfPAHo8XET7HjWQqU

Base URL: https://api.cursorai.art/v1/

Mainstream Model Names: claude-3-5-sonnet-20241022, claude-3-5-sonnet-20240620, gpt-4o, gpt-4o-mini


Required Tools and Prerequisites

  • Install the latest version of Visual Studio Code
  • A network connection to download the plugin (No VPN needed for accessing large models)
  • An API key for the Claude 3.5 model
  • Basic programming knowledge, preferably familiar with JavaScript or Python

Detailed Step-by-Step Guide

Install the Continue Plugin

Open VSCode, go to the Extensions Marketplace (shortcut Ctrl+Shift+X), search for "Continue," and click Install.


Configure the Claude 3.5 Model

In VSCode, press Ctrl+Shift+P to open the Command Palette, run "Continue: Open configuration file", and add a model entry under "models". For example:

    {
      "apiKey": "your-api-key",
      "apiBase": "https://api.cursorai.art/v1",
      "model": "cursor-3-5-sonnet-20241022",
      "title": "Claude-3-5-sonnet-20241022",
      "systemMessage": "You are an expert software developer. You give helpful and concise responses.",
      "provider": "openai"
    }


Note: Separate multiple model entries in the "models" array with commas, but do not add a comma after the last one (see the sketch below).
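
For orientation, here is a minimal sketch of how the surrounding config.json looks with two model entries in the "models" array. The API key is a placeholder, the model names come from the list above, and the other keys the plugin generates by default are omitted:

    {
      "models": [
        {
          "title": "Claude-3-5-sonnet-20241022",
          "provider": "openai",
          "model": "claude-3-5-sonnet-20241022",
          "apiKey": "your-api-key",
          "apiBase": "https://api.cursorai.art/v1",
          "systemMessage": "You are an expert software developer. You give helpful and concise responses."
        },
        {
          "title": "gpt-4o-mini",
          "provider": "openai",
          "model": "gpt-4o-mini",
          "apiKey": "your-api-key",
          "apiBase": "https://api.cursorai.art/v1"
        }
      ]
    }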

Customize Plugin Settings

Adjust the Continue plugin's user settings according to your development needs, such as enabling Google search for documentation.
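
If you prefer to manage these context sources in config.json rather than in the settings UI, Continue exposes them as context providers. The snippet below is only a hedged sketch: the provider names ("code", "docs", "google") and any extra parameters they need (the Google provider generally requires a separate search API key) depend on your Continue version, so verify them against the plugin's documentation:

  "contextProviders": [
    { "name": "code" },
    { "name": "docs" },
    { "name": "google" }
  ]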


Use Continue for AI Development

In the code editor, select the model you just configured. Type @ to have Continue read any file, and the plugin will use the Claude 3.5 model to provide code optimization suggestions.
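
For example, with the Claude model selected, a prompt such as "@crawler.py Please optimize this script and add error handling" (crawler.py here is just a stand-in for whichever file you want reviewed) attaches that file's contents as context before the model responds.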


Configure Autocomplete Model (Optional)

In the model configuration file, modify the "tabAutocompleteModel" section as follows:

  "tabAutocompleteModel": {
    "apiKey": "your-api-key",
    "apiBase": "https://api.cursorai.art/v1",
    "model": "gpt-4o-mini",
    "title": "gpt-4o-mini",
    "provider": "openai"
  }
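
Note that "tabAutocompleteModel" is a single top-level object in config.json, sitting alongside the "models" array rather than inside it. Any model your API key can reach will work here, but a small, fast model such as gpt-4o-mini keeps completions responsive.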

Example and Demonstration

Code Optimization Example

import requests
from bs4 import BeautifulSoup
import csv
from time import sleep

def crawl_website(url, output_file):
    # Set request headers to simulate browser access
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
    }

    try:
        # Send GET request to fetch page content
        response = requests.get(url, headers=headers)
        response.raise_for_status()  # Check if the request is successful

        # Parse the HTML using BeautifulSoup
        soup = BeautifulSoup(response.text, 'html.parser')

        # Extract data (example: extracting all titles)
        titles = soup.find_all('h2')  # Adjust the selector based on the actual webpage structure

        # Save data to a CSV file
        with open(output_file, 'w', newline='', encoding='utf-8') as f:
            writer = csv.writer(f)
            writer.writerow(['Title'])  # Write the header

            for title in titles:
                writer.writerow([title.text.strip()])

        print(f"Data saved to {output_file}")

    except requests.RequestException as e:
        print(f"Error during crawling: {e}")

    # Add delay to avoid frequent requests
    sleep(2)

# Usage example
if __name__ == "__main__":
    target_url = "https://example.com"  # Replace with the URL you want to scrape
    output_file = "crawled_data.csv"
    crawl_website(target_url, output_file)

Using Continue's generated optimization (prompt: "Modify this for paginated scraping of the Douban Top 250"):

import requests
from bs4 import BeautifulSoup
import csv
from time import sleep
import random

def crawl_douban_top250(output_file, items_per_page=25):
    base_url = "https://movie.douban.com/top250"
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
    }

    total_pages = 10
    movies_data = []

    try:
        for page in range(total_pages):
            start_index = page * items_per_page
            url = f"{base_url}?start={start_index}"

            print(f"Scraping page {page + 1}...")
            response = requests.get(url, headers=headers)
            response.raise_for_status()

            soup = BeautifulSoup(response.text, 'html.parser')
            movie_items = soup.find_all('div', class_='item')

            if not movie_items:
                print(f"Warning: No movie data found on page {page + 1}")
                continue

            for item in movie_items:
                try:
                    rank = item.find('em').text
                    title = item.find('span', class_='title').text
                    info = item.find('div', class_='bd').find('p').text.strip()

                    info_lines = [line.strip() for line in info.split('\n') if line.strip()]
                    year = info_lines[1].split('/')[0].strip()
                    director = info_lines[0].split('导演: ')[1].split('主演:')[0].strip()

                    rating = item.find('span', class_='rating_num').text
                    votes = item.find('div', class_='star').find_all('span')[-1].text
                    votes = ''.join(filter(str.isdigit, votes))

                    movies_data.append([rank, title, year, director, rating, votes])
                    print(f"Scraped: {title}")
                except Exception as e:
                    print(f"Error processing movie data: {e}")
                    continue

            # Random delay between pages to avoid hammering the server
            delay = random.uniform(3, 7)
            print(f"Waiting for {delay:.2f} seconds before the next page...")
            sleep(delay)

        with open(output_file, 'w', newline='', encoding='utf-8-sig') as f:
            writer = csv.writer(f)
            writer.writerow(['Rank', 'Movie Title', 'Year', 'Director', 'Rating', 'Number of Votes'])
            writer.writerows(movies_data)

            print(f"Scraping completed! A total of {len(movies_data)} movies scraped.")
            print(f"Data saved to {output_file}")

    except requests.RequestException as e:
        print(f"Network request error: {e}")
    except Exception as e:
        print(f"Program execution error: {e}")

if __name__ == "__main__":
    output_file = "douban_top250.csv"
    crawl_douban_top250(output_file)

Tips and Precautions

Tip: Ensure that your API key is kept secure and not exposed in public code repositories.

Note: When using the Continue plugin for code optimization, always review the suggestions and make sure they align with your project requirements.


FAQs

Q1: How can I get the API key for Claude 3.5?

A1: You can visit CURSOR API, sign up, and obtain the API key by creating a token on the token page.

Q2: What if the Continue plugin is not working?

A2: Check if your API key is configured correctly and ensure that your network connection is stable. Also, check VSCode's output panel for error logs.

Q3: How can I customize prompt templates?

A3: In the Continue plugin's settings page, locate the "Workspace prompts path" option and point it at the location of your custom prompt templates.

Q4: Why doesn't Claude 3.5 work for tab autocompletion (one-click code writing)?

A4: This is an upstream limitation: Anthropic does not currently offer a dedicated autocompletion model. Switch the autocomplete model to gpt-4o (or gpt-4o-mini) to enable this feature.

Summary

By following this tutorial, you have learned how to install and configure the Continue plugin in VSCode and how to use your own API key to access OpenAI GPT models or Claude 3.5 models. With these tools properly configured, you can significantly enhance your development productivity.

Next, you can explore more advanced features of the Continue plugin or integrate other AI models to meet more complex development needs.

