For many developers, repetitive tasks like boilerplate generation, refactoring, or debugging can overshadow the creative problem-solving aspects of development.
So, how do we break free from this cycle and work smarter? AI tools can automate mundane tasks and offer real-time code suggestions, enabling us to focus on the bigger picture of building high-quality software.
In this post, we'll explore six practical ways AI tools can improve development workflows, plus a bonus testing workflow. To demonstrate these concepts, we'll build a Task Management API using Go as our programming language.
Prerequisites
This tutorial uses Sourcegraph's AI coding assistant, which integrates language models with its Search API. This tool provides:
- Precise coding assistance tailored to your needs.
- Contextual insights to help you understand, write, and fix code effectively.
To start, create an account and install the Cody VSCode extension. For other IDEs, refer to the integration guides in the documentation.
1. Writing boilerplate code
One of the most repetitive and time-consuming tasks in software development is writing boilerplate code. Boilerplate code includes repetitive foundational elements, such as models or routes, essential for setting up a project’s architecture.
Instead of manually setting up models, controllers, or basic routes, we can simply describe what we need in plain language, and AI will handle the rest. This allows us to focus on the more interesting parts of development, like building features and solving complex problems.
The first step in building the Task Management API is creating a Task model. To do this, we need attributes like `title`, `description`, and `status`, and API routes to interact with them. Instead of starting from scratch, we can utilize AI to generate the boilerplate code for this model and its routes in the following manner:
- Define the Task model: Describe the structure of the Task model inside the chat interface. Specify the desired attributes, such as `id`, `title`, `description`, `dueDate`, and `status`.
The AI tool then responds with the Task model containing all the specified fields and JSON annotations:
```go
package models

import (
	"time"

	"github.com/google/uuid"
)

type TaskStatus string

const (
	StatusPending    TaskStatus = "Pending"
	StatusInProgress TaskStatus = "In Progress"
	StatusCompleted  TaskStatus = "Completed"
)

type Task struct {
	ID          uuid.UUID  `json:"id"`
	Title       string     `json:"title" validate:"required,max=100"`
	Description string     `json:"description" validate:"max=255"`
	Status      TaskStatus `json:"status" validate:"required,oneof=Pending 'In Progress' Completed"`
	DueDate     time.Time  `json:"due_date" validate:"required"`
	CreatedAt   time.Time  `json:"created_at"`
	UpdatedAt   time.Time  `json:"updated_at"`
}
```
It also suggests installing the Go UUID package, `github.com/google/uuid`.
- Generate API routes: Once you have the model ready, the next step is setting up API routes. These routes allow you to interact with the Task model, such as creating, reading, updating, and deleting tasks.
Enter this prompt in the chat:

```
Create CRUD API routes for the Task model. Include routes for creating, reading, updating, and deleting tasks, and implement basic handlers.
```
The AI tool then suggests the implementation of CRUD routes and handlers such as `func (h *TaskHandler) CreateTask(w http.ResponseWriter, r *http.Request) {}` to create new tasks with the Task Management API.
The Sourcegraph prompts feature allows you to save prompts in the Prompts Library so you can easily reuse them for different projects instead of starting from scratch each time. You can access the Prompts Library through the Cody extension in your IDE, which will take you to the Prompts Library page.
Here, the prompt for generating the Task model structure (as shown in the image below) is saved for later use. This prompt allows Sourcegraph to generate the desired code structure based on specifications, which we can then tweak and integrate into the project as needed.
With the boilerplate code for the Task models and API routers set up, we can then proceed to further improve the project by optimizing the API routes.
2. Using AI to generate code suggestions and improve code readability
Code suggestions are a powerful feature that can help us write more efficient and readable code. Instead of getting bogged down in syntax and formatting, we can focus on writing cleaner, more efficient logic. AI tools can offer suggestions for optimizing memory usage, improving performance, or enhancing data integrity—key factors for producing maintainable, high-quality code.
Now that we have set up the boilerplate code for the Task model and API routes, let's get into how AI can assist us in generating code suggestions and improving code readability in our code.
Suppose we want to improve the `NewTaskHandler` function for better readability and performance in the API routes. We tell the AI tool to:
```
Optimize the NewTaskHandler function (use @ to add the current file as context) for better readability and performance by adding a validation check to ensure that the task title is not empty
```
We can then use Sourcegraph's Apply feature to quickly integrate the suggested code changes into the `NewTaskHandler` function and Accept the changes when done.

The initial `NewTaskHandler` function simply created an empty map for tasks, as shown here:
```go
func NewTaskHandler() *TaskHandler {
	return &TaskHandler{
		tasks: make(map[uuid.UUID]models.Task),
	}
}
```
The optimized version below pre-allocates memory capacity for 100 tasks and adds a dedicated validation method. Pre-allocation improves performance, while using `strings.TrimSpace` to properly detect empty titles protects data integrity by preventing blank task titles from being stored:
```go
func NewTaskHandler() *TaskHandler {
	return &TaskHandler{
		tasks: make(map[uuid.UUID]models.Task, 100), // Pre-allocate capacity for better performance
	}
}

func (h *TaskHandler) validateTask(task models.Task) error {
	if strings.TrimSpace(task.Title) == "" {
		return errors.New("task title cannot be empty")
	}
	return nil
}
```
With these improvements, the `NewTaskHandler` function is now better optimized for performance and data integrity, laying a solid foundation for the next step: debugging and troubleshooting the code to ensure the Task Management API runs smoothly.
3. Debug and troubleshoot code efficiently
Debugging is a critical yet time-consuming part of development. Complex issues, such as race conditions or misconfigured dependencies, can take hours to resolve without proper tools. AI tools simplify this process by quickly identifying bugs and syntax errors while offering actionable insights. This lets you focus on resolving issues efficiently instead of spending hours combing through code.
With the Task Management API taking shape, let's set up a central entry point for our application. We'll create a `main.go` file, which initializes the API and serves it on a local server. Then, we'll run the app and troubleshoot any issues that arise.

The file below creates a simple HTTP server and uses our `TaskHandler` for the `/tasks` endpoint.
```go
package main

import (
	"log"
	"net/http"
)

func main() {
	handler := NewTaskHandler()
	http.HandleFunc("/tasks", handler.TaskRoutes)

	log.Println("Starting server on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
When we attempt to run the app using:

```shell
go run cmd/main.go
```

we encounter an error:

```
cmd\main.go:9:16: undefined: NewTaskHandler
```
You can prompt Sourcegraph through the chat to debug the error, pasting the error message into the prompt for additional context, like so:

```
Pinpoint and resolve the error in main.go

cmd\main.go:9:16: undefined: NewTaskHandler
```
Sourcegraph points out that the error can be resolved by importing the handlers package, `"task-management-api/handlers"`, where `NewTaskHandler` is defined. This allows `main.go` to access and use the `TaskHandler` functions.
It also added the route registrations in `main.go`, following a RESTful structure. Each route maps a specific HTTP method and path to a handler function for the Task Management API.
```go
router.HandleFunc("/tasks", taskHandler.CreateTask).Methods("POST")        // Create new tasks
router.HandleFunc("/tasks", taskHandler.GetAllTasks).Methods("GET")        // List all tasks
router.HandleFunc("/tasks/{id}", taskHandler.GetTask).Methods("GET")       // Get single task
router.HandleFunc("/tasks/{id}", taskHandler.UpdateTask).Methods("PUT")    // Update task
router.HandleFunc("/tasks/{id}", taskHandler.DeleteTask).Methods("DELETE") // Delete task
```
This clear mapping shows exactly how the HTTP methods and URLs correspond to the `TaskHandler` functions, making the API structure easy to understand and work with.
When we Apply and Accept the changes and run the app again, it works as it should:
With the server up and running, let’s go through some strategies for streamlining your code search in the next section.
4. Simplify code search and discovery
As software development projects grow, finding specific functions or modules in the codebase can become more challenging. Efficiently navigating large codebases is crucial for maintaining productivity and ensuring smooth collaboration among developers.
If we need to locate the `UpdateTask` function in the Task Management API, instead of manually searching through multiple files, we can use an AI assistant to quickly pinpoint its location and provide helpful context.
In this case, we leverage Sourcegraph's context-awareness feature to reference the codebase with `@`. This lets us quickly identify the function's location and understand its purpose within the code. Including the project context in the prompts helps the AI tool efficiently navigate the codebase.
```
Find the UpdateTask function in the @task-management-api codebase and explain what it does.
```
Sourcegraph then responds with the location of the `UpdateTask` function in the codebase, at `handlers/task_handler.go`, and adds inline comments to the code.

It also gives an overview of the `UpdateTask` function, explaining the logic and making it easier to understand the purpose of the function within the Task Management API.

Locating the `UpdateTask` function and understanding its functionality allows us to make targeted modifications quickly and efficiently. This streamlined approach ensures that navigating and understanding the codebase remains manageable, even as the project expands.
But what if developers are new to the project? How can they get up to speed quickly? In the next section, we'll explore how AI can help developers understand unfamiliar code faster.
5. Onboarding to code
When a new developer joins a project, getting up to speed on the codebase can feel like a huge task. Whether you're coming in fresh or revisiting a part of the code you haven’t worked with before, understanding unfamiliar code quickly is key to contributing effectively and maintaining a smooth workflow.
Instead of spending hours manually reading through files, AI-powered tools can simplify this process significantly.
For example, let’s say you’re a new developer joining the Task Management API project and need to get familiar with the codebase. Instead of digging through the entire code manually, you can simply ask AI to help you understand the project and its components.
Here’s a prompt:
```
As a new developer to this project, explain the @task-management-api codebase and what it's about.
```
Once you provide this prompt, the AI tool will give you a breakdown of the project's core components, features, and available API endpoints.
This explanation shows the structure of the Task Management API, breaking down the main directories and files:

- `models/task.go`: Defines the structure of the Task, including fields like ID, title, and status.
- `handlers/task_handler.go`: Contains the HTTP handlers for CRUD operations on tasks.
- `cmd/main.go`: The entry point of the application, defining the API routes and setting up the server.
It also lists the key features of the API, such as task creation and management, RESTful API endpoints, status tracking, and JSON-based communication.
With AI tools, you can get onboarded to any project much more efficiently and start contributing right away.
6. Optimize code for performance and scalability
Ensuring your code is both performant and scalable is vital—especially as datasets grow or user traffic increases. In projects like the Task Management API, inefficiencies in data models or algorithms can result in slower response times and higher resource consumption.
The `GetAllTasks` function processes tasks in memory, which is sufficient for small datasets but inefficient as data grows. AI tools can analyze such bottlenecks and suggest optimizations to improve performance and scalability.
Here’s the original implementation:
```go
func (h *TaskHandler) GetAllTasks(w http.ResponseWriter, r *http.Request) {
	tasks := make([]models.Task, 0, len(h.tasks))
	for _, task := range h.tasks {
		tasks = append(tasks, task)
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(tasks)
}
```
While this implementation handles small datasets effectively, it does not scale well with growing data. As the dataset grows, processing everything in memory can lead to higher memory usage and slower response times.
To improve this function, we can use pagination to divide the dataset into manageable chunks. Let’s ask the AI tool for suggestions:
```
Analyze the GetAllTasks function in handlers/task_handler.go and suggest performance improvements for handling large datasets.
```
The response from the prompt is:
```go
func (h *TaskHandler) GetAllTasks(w http.ResponseWriter, r *http.Request) {
	// Parse pagination parameters
	page, _ := strconv.Atoi(r.URL.Query().Get("page"))
	limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))

	// Set defaults
	if page < 1 {
		page = 1
	}
	if limit < 1 || limit > 100 {
		limit = 10
	}

	// Calculate pagination
	start := (page - 1) * limit
	tasks := make([]models.Task, 0, limit)

	// Get paginated results
	i := 0
	for _, task := range h.tasks {
		if i >= start && len(tasks) < limit {
			tasks = append(tasks, task)
		}
		i++
		if len(tasks) == limit {
			break
		}
	}

	// Return paginated response
	response := map[string]interface{}{
		"tasks": tasks,
		"page":  page,
		"limit": limit,
		"total": len(h.tasks),
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(response)
}
```
How the optimized code improves performance:
- Pagination: Instead of processing the entire dataset, the code retrieves tasks in smaller chunks. This reduces memory usage and improves response times. The pagination logic calculates the `start` index for each page and limits the number of tasks retrieved:

  ```go
  start := (page - 1) * limit
  tasks := make([]models.Task, 0, limit)
  ```

  Tasks are only added to the response if they fall within the specified range:

  ```go
  if i >= start && len(tasks) < limit {
      tasks = append(tasks, task)
  }
  ```
- Customizable limits: Query parameters allow clients to specify the page size for more flexibility when making a request to the API. If no valid parameters are provided, default values are applied:

  ```go
  if page < 1 {
      page = 1
  }
  if limit < 1 || limit > 100 {
      limit = 10
  }
  ```
- Metadata: The response includes additional information, such as the total number of tasks, helping clients understand the dataset. The metadata is packaged into a map and encoded in the response:

  ```go
  response := map[string]interface{}{
      "tasks": tasks,
      "page":  page,
      "limit": limit,
      "total": len(h.tasks),
  }
  ```
These enhancements ensure the Task Management API remains efficient and scalable, even as the dataset grows. The improved `GetAllTasks` function not only optimizes performance but also delivers a better experience for developers consuming the API.
7. Bonus: Implement AI-enhanced testing workflows
We’ve looked at how AI can assist with tasks like writing code, debugging, and navigating your codebase. Here’s an extra benefit AI brings to the table—helping with testing workflows.
Manual test case creation is tedious and error-prone, but AI tools simplify this by automating test generation and expanding coverage.
For a deeper dive into automating your testing process with AI, read this guide on how to write unit tests.
Let’s explore how AI can help automate test case generation and improve coverage.
The `CreateTask` function in the Task Management API creates new tasks: it validates the request body, assigns a unique ID, and stores the task in memory.
```go
func (h *TaskHandler) CreateTask(w http.ResponseWriter, r *http.Request) {
	var task models.Task
	if err := json.NewDecoder(r.Body).Decode(&task); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	task.ID = uuid.New()
	h.tasks[task.ID] = task

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(task)
}
```
To generate unit tests for this function, select the code snippet, then right-click, hover on Cody, and click on Generate Unit Tests.
The AI tool then scans the code, generates different test case scenarios, and opens a testing file with unit tests for the `CreateTask` function, using the Go testing package.
These test cases cover a range of scenarios:
```go
func TestCreateTask(t *testing.T) {
	handler := NewTaskHandler()

	t.Run("successful task creation", func(t *testing.T) {
		task := models.Task{
			Title:       "Test Task",
			Description: "Test Description",
		}
		taskJSON, _ := json.Marshal(task)

		req := httptest.NewRequest(http.MethodPost, "/tasks", bytes.NewBuffer(taskJSON))
		rr := httptest.NewRecorder()

		handler.CreateTask(rr, req)

		assert.Equal(t, http.StatusCreated, rr.Code)

		var responseTask models.Task
		err := json.NewDecoder(rr.Body).Decode(&responseTask)
		assert.NoError(t, err)
		assert.NotEqual(t, uuid.Nil, responseTask.ID)
		assert.Equal(t, task.Title, responseTask.Title)
		assert.Equal(t, task.Description, responseTask.Description)
	})

	t.Run("invalid json body", func(t *testing.T) {
		req := httptest.NewRequest(http.MethodPost, "/tasks", bytes.NewBufferString("invalid json"))
		rr := httptest.NewRecorder()

		handler.CreateTask(rr, req)

		assert.Equal(t, http.StatusBadRequest, rr.Code)
	})

	t.Run("empty request body", func(t *testing.T) {
		req := httptest.NewRequest(http.MethodPost, "/tasks", nil)
		rr := httptest.NewRecorder()

		handler.CreateTask(rr, req)

		assert.Equal(t, http.StatusBadRequest, rr.Code)
	})

	t.Run("verify task storage", func(t *testing.T) {
		task := models.Task{
			Title:       "Storage Test",
			Description: "Test Storage",
		}
		taskJSON, _ := json.Marshal(task)

		req := httptest.NewRequest(http.MethodPost, "/tasks", bytes.NewBuffer(taskJSON))
		rr := httptest.NewRecorder()

		handler.CreateTask(rr, req)

		var responseTask models.Task
		json.NewDecoder(rr.Body).Decode(&responseTask)

		storedTask, exists := handler.tasks[responseTask.ID]
		assert.True(t, exists)
		assert.Equal(t, responseTask, storedTask)
	})
}
```
- Successful task creation: Verifies that the function processes valid input, assigns a unique ID, and responds with the correct status.
- Invalid JSON body: Ensures the function returns a `400 Bad Request` status when invalid input is provided.
- Empty request body: Confirms that the function handles missing input gracefully.
- Task storage validation: Checks that the created task is correctly stored in memory with all attributes intact.
Each of these tests ensures different parts of the function work as expected, making it easier to maintain high code quality. AI simplifies the process by generating these tests automatically, allowing you to review and customize them as needed.
You can run the generated tests using the `go test` command-line tool, or with the relevant testing framework when working in a different programming language.
Integrating AI into your testing workflow not only saves time but also increases confidence in your project’s reliability. With automated test generation, you can quickly identify edge cases, expand coverage, and ensure your code behaves as intended—without the manual overhead.
Next step
With the right tools, developers can transform their approach to software development and boost productivity. Beyond the workflows we've explored, AI can help streamline additional tasks, including:
- Creating and optimizing database schemas
- Setting up Continuous Integration and Continuous Delivery pipelines
- Documenting the codebase and overall project.
Whether you're developing an MVP, managing a large-scale project, or experimenting with new ideas, with the help of AI-powered tools like Sourcegraph, you can take your development workflows to the next level.
Sign up for a free account and install the Cody extension in your preferred IDE (VS Code, IntelliJ, or Neovim) to get started.