Jason Park
Building a Code Problem Solving Assistant

One of the hardest parts of solving coding problems, especially for beginners, is figuring out the right approach or algorithm. Once you understand how to break down the problem and select an algorithm, translating it into code becomes much more manageable.

CodeScript is designed to help with this exact challenge. It provides structured problem analysis, suggests potential solution approaches, and assists in refining code implementations.

Workflow

The app works by pulling a random coding problem from a database of LeetCode problems and displaying it to the user. Once the user submits their approach, the UI renders a response from the LLM, which analyzes the approach and provides feedback. This helps users validate their thought process before writing code, making it easier to refine their solution.

Application Structure

This is a full-stack app built with MySQL, FastAPI, and React. Once all the components are complete, everything is wired together with Docker Compose.

LLM

The app uses Ollama to run a quantized DeepSeek-Coder:6.7B model locally. The LLM is the most resource-intensive part of the system, as generating high-quality coding problem analysis requires a large model. Running such a model on local machines—especially those with limited RAM—can be challenging, making performance optimizations a key focus.

Once the model is set up, we construct a structured prompt containing the problem details, user submission, and additional context. This prompt is sent to the Ollama server, which returns a response that we send back to the client.

def run_deepseek(problem: dict, submission: str):
    # Build the structured prompt from the problem details and the user's approach
    prompt = create_prompt(problem, submission)
    # Send the prompt to the local Ollama server and wait for the model's reply
    response = ask_model(prompt)
    # Normalize the raw model output into the structured feedback returned to the client
    return parse_response(response)
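
The helper functions referenced in run_deepseek aren't shown in the post. Below is a minimal sketch of what create_prompt and ask_model could look like, assuming the backend calls Ollama's /api/generate endpoint on its default port; the prompt wording, the problem dict keys (title, description), and the constants are illustrative assumptions, not the actual implementation.

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint
MODEL_NAME = "deepseek-coder:6.7b"

def create_prompt(problem: dict, submission: str) -> str:
    # Assumed keys on the problem dict; the real schema may differ
    return (
        f"Problem: {problem['title']}\n"
        f"Description: {problem['description']}\n\n"
        f"The user's proposed approach:\n{submission}\n\n"
        "Evaluate whether this approach solves the problem, point out any flaws, "
        "and suggest improvements."
    )

def ask_model(prompt: str) -> str:
    # stream=False makes Ollama return the whole completion in one JSON payload
    payload = {"model": MODEL_NAME, "prompt": prompt, "stream": False}
    resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]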

Backend

The backend provides two main API routes:

  1. GET /problems/random

Fetches a random problem from the database.

  2. POST /generate_feedback

Sends the user's submission to the LLM for feedback.

The generate_feedback route integrates the DeepSeek-Coder:6.7B model through the run_deepseek function. When a user submits their approach, the backend processes the request, passes the problem data and user input to the LLM, and returns a structured response.
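
The request and response schemas aren't shown in the post; a minimal sketch, inferred from how main.py uses them below (the problem_data and user_submission field names come from that code, while the shape of the feedback is a guess), might look like this:

from pydantic import BaseModel

class LLMRequest(BaseModel):
    problem_data: dict    # the problem fetched from /problems/random
    user_submission: str  # the user's written approach

class LLMResponse(BaseModel):
    feedback: str         # hypothetical field; the real structured response may include more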

To ensure performance and prevent excessive requests, we implement a rate limiter (processing_limiter) and use a lock (request_lock) to manage concurrent LLM calls. The API also includes CORS middleware to allow secure frontend access.
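
Neither processing_limiter, request_lock, nor add_cors_middleware is shown in the post. One plausible sketch, assuming the limiter grabs a global lock and rejects concurrent requests with a 429 (which would explain why the route releases the lock in a finally block), is:

# limiter.py (illustrative sketch, not the post's actual code)
import threading

from fastapi import HTTPException
from fastapi.middleware.cors import CORSMiddleware

# A single lock so only one LLM generation runs at a time
request_lock = threading.Lock()

def processing_limiter():
    # Reject immediately if another generation is already in progress;
    # the route handler releases the lock in its finally block when done.
    if not request_lock.acquire(blocking=False):
        raise HTTPException(status_code=429, detail="Model is busy, try again shortly")

def add_cors_middleware(app):
    # Allow the frontend (mapped to port 3000 in the compose file below) to call the API
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["http://localhost:3000"],
        allow_methods=["*"],
        allow_headers=["*"],
    )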

# routers.py
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

# get_session and get_random_problem are project-local helpers (DB session factory and query)

router = APIRouter()

@router.get('/problems/random')
async def get_random_problem_route(db: Session = Depends(get_session)):
    return await get_random_problem(db)
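
The get_random_problem query itself isn't included in the post either; a minimal sketch, assuming a SQLAlchemy Problem model and MySQL's RAND() for random ordering (both assumptions), could be:

# crud.py (illustrative sketch)
from sqlalchemy import func
from sqlalchemy.orm import Session

from .models import Problem  # hypothetical model mapping the LeetCode problems table

async def get_random_problem(db: Session):
    # ORDER BY RAND() LIMIT 1 picks one random row in MySQL
    return db.query(Problem).order_by(func.rand()).first()
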
# main.py
from fastapi import Depends, FastAPI

app = FastAPI()
add_cors_middleware(app)

@app.post('/generate_feedback', response_model=LLMResponse, dependencies=[Depends(processing_limiter)])
def generate_feedback_route(request: LLMRequest) -> LLMResponse:
    try:
        # Run the model on the submitted approach and validate the structured output
        response = run_deepseek(request.problem_data, request.user_submission)
        return LLMResponse.model_validate(response)
    finally:
        # The lock was acquired by processing_limiter; always release it, even on failure
        request_lock.release()

app.include_router(problems_router)

Frontend

The frontend is built using Vite with React, providing a fast and lightweight interface for users to interact with the app. When the app loads, it fetches a random problem from the backend using useEffect, ensuring a new problem is displayed on each visit.

When the user submits their approach, the submit button is temporarily disabled to prevent multiple requests from being sent at once. This helps avoid overwhelming the backend and ensures smoother performance. The UI is designed to be simple and responsive, focusing on delivering the problem and feedback efficiently without unnecessary complexity.

Figma Wireframe

The purple button toggles between viewing the current problem and the model's response, the yellow shuffle button goes to the next random problem, and the blue play button generates a response.

Docker Compose

To simplify deployment and keep the app self-contained, we dockerize both the backend and frontend and use Docker Compose to run them together.

  • The backend is packaged with FastAPI, its dependencies, and the LLM model. A Dockerfile ensures all necessary packages and configurations are set up.

  • The frontend is also containerized with Vite and React, making it easy to serve the UI without needing additional setup.

Using Docker Compose, we define both services in a docker-compose.yml file, allowing them to start together with a single command.

services:
  backend:
    container_name: codescript-backend
    build: .
    volumes:
      - .:/app
    environment:
      - DOCKER_DATABASE_URL=${DOCKER_DATABASE_URL:-}
    networks:
      - codescript_network
    ports:
      - "8000:8000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]

  frontend:
    container_name: codescript-frontend
    build:
      context: ../codescript-frontend
      args:
        - VITE_RANDOM_PROBLEM_URL=${VITE_RANDOM_PROBLEM_URL}
        - VITE_GENERATE_FEEDBACK_URL=${VITE_GENERATE_FEEDBACK_URL}
    env_file: 
      - .env
    networks:
      - codescript_network
    ports:
      - "3000:80"
    depends_on:
      backend:
        condition: service_healthy

networks:
  codescript_network:
    external: true

Repo Links

Resources
