
Rakesh Potnuru

Posted on • Originally published at itsrakesh.com

Integrating the DeepSeek API in a Next.js and Express.js App

AI is like an infinity stone: learn to use its power and you can do wonders. In this guide, you'll build a personal accountant with the DeepSeek API and a sample project.

The Project

I've created a sample project called "Finance Tracker" for this tutorial. It lets you record your financial transactions. The front end is built with Next.js, and the back end uses tRPC (Express adapter) with a Postgres database and Drizzle ORM. You don't need to know tRPC to follow this tutorial. (If you want to learn tRPC, check out the Build a Full-Stack App with tRPC and Next.js App Router series.)

GitHub repo: https://github.com/itsrakeshhq/finance-tracker

Let's add a chatbot to the product that acts as our personal accountant.

Backend

Getting DeepSeek API key

As you might already know, DeepSeek went viral when it launched because it offers performance comparable to OpenAI's models at only a fraction of the cost. That popularity has also caused massive downtime since launch - I've been trying to use their API for a long time with no luck. (If you can get it working, feel free to continue with it directly.)

So instead of using the API from the DeepSeek platform directly, we can use it through OpenRouter. OpenRouter gives access to AI models from various providers, so if one provider goes down, it switches to another.

API Integration

Put the API key you got from above in backend/.env:

DEEPSEEK_API_KEY=your_deepseek_api_key

Since the DeepSeek API is compatible with the OpenAI SDK, let's install it:

yarn add openai

Create src/modules/ai/ai.controller.ts. This is where we'll write the AI accountant code. First, create an OpenAI client:

import OpenAI from "openai";

export default class AiController {
  private readonly openai: OpenAI;

  constructor() {
    this.openai = new OpenAI({
      // baseURL: "https://api.deepseek.com" // if using DeepSeek API key
      baseURL: "https://openrouter.ai/api/v1",
      apiKey: process.env.DEEPSEEK_API_KEY,
    });
  }
}

Here we've initialized the OpenAI client.

Note: Make sure to use the appropriate baseURL and apiKey for whichever you are using - DeepSeek or OpenRouter.
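If you want to switch between the two providers without editing code, one option (purely an illustration - the `AI_PROVIDER` env var and helper below are not part of the sample repo) is to derive the base URL from an environment variable:

```typescript
// Hypothetical helper (not in the repo): resolve the base URL from an
// AI_PROVIDER env var so the same client code works with either provider.
type Provider = "deepseek" | "openrouter";

function resolveBaseURL(provider: Provider): string {
  return provider === "deepseek"
    ? "https://api.deepseek.com" // direct DeepSeek API
    : "https://openrouter.ai/api/v1"; // via OpenRouter
}

// Usage sketch:
// new OpenAI({
//   baseURL: resolveBaseURL(process.env.AI_PROVIDER as Provider),
//   apiKey: process.env.DEEPSEEK_API_KEY,
// });
```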

Here's how I designed the accountant:

  1. Fetch transactions from DB.
  2. Include in the prompt.
  3. Answer user queries based on transactions.
  4. Stream response.
...
  async accountant(req: Request, res: Response) {
    try {
      const { query } = req.body;

      if (!query) {
        return res.status(400).json({ error: "Query is required" });
      }

      const userId = req.user.id;

      const data = await db
        .select()
        .from(transactions)
        .where(eq(transactions.userId, userId));

      if (data.length === 0) {
        return res.status(400).json({ error: "No transactions found" });
      }

      const formattedTxns = data.map((txn) => ({
        amount: txn.amount,
        txnType: txn.txnType,
        summary: txn.summary,
        tag: txn.tag,
        date: txn.createdAt,
      }));

      const prompt = `
    You are a personal accountant. You are given a list of transactions. You need to answer user query based on the transactions. 

    YOU MUST KEEP THESE POINTS IN MIND WHILE ANSWERING THE USER QUERY:
    1. You must give straight forward answer.
    2. Answer like you are talking to the user. 
    3. You must not output your thinking process and reasoning. This is very important.
    4. Answer should be in markdown format.

    Transactions: ${JSON.stringify(formattedTxns)}
    Currency: $ (USD)

    User Query: ${query}
    `;

      const response = await this.openai.chat.completions.create({
        model: "deepseek/deepseek-chat",
        messages: [{ role: "user", content: prompt }],
        stream: true,
      });

      res.writeHead(200, {
        "Content-Type": "text/plain",
        "transfer-encoding": "chunked",
      });

      for await (const chunk of response) {
        if (chunk.choices[0].finish_reason === "stop") {
          break;
        }

        res.write(chunk.choices[0].delta.content || "");
      }

      res.end();
    } catch (error) {
      console.error({ error });
      if (!res.headersSent) {
        res.status(500).json({ error: "Internal server error" });
      }
    }
  }
...

Note: This is just to give you an idea. In real projects, it's not ideal to fetch all transactions from the database for every query. To provide context to AI models, you can create embeddings instead. (DeepSeek currently does not support embeddings, so you'd need another provider for that step.)
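For the curious, here's a rough sketch of what the retrieval side of that could look like (everything here is illustrative, not part of the repo): embed each transaction summary once with an embeddings model, then at query time rank transactions by cosine similarity against the query's embedding and include only the top-k matches in the prompt.

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0,
    normA = 0,
    normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank pre-embedded items against a query vector and keep the k best.
// In practice the vectors would come from an embeddings API and likely
// live in a vector store rather than in memory.
function topK<T>(
  queryVec: number[],
  items: { vec: number[]; item: T }[],
  k: number,
): T[] {
  return items
    .map(({ vec, item }) => ({ item, score: cosineSimilarity(queryVec, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(({ item }) => item);
}
```

You'd then interpolate only the `topK` transactions into the prompt instead of `formattedTxns` in full.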

As you can see, all we did was write a prompt and provide as much context as possible to get accurate results. We then send the response as a stream instead of waiting for the whole response, which could take a long time.

And then expose this controller in a route:

// src/index.ts

...
app.use(express.json());
app.post("/ai", authMiddleware, (req, res) =>
  new AiController().accountant(req, res)
);
...

authMiddleware protects the endpoint, so only logged-in users can access it. You can find the implementation in src/middleware/auth-middleware.ts.
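If you just want the shape of it, here's a minimal sketch of what a cookie-based auth middleware typically looks like. The `verifySession` helper and `sessionId` cookie name are assumptions for illustration; the project's real logic lives in src/middleware/auth-middleware.ts.

```typescript
// Minimal illustrative middleware -- not the repo's actual implementation.
// Local structural types stand in for Express's Request/Response/NextFunction.
type Req = { cookies?: Record<string, string>; user?: { id: string } };
type Res = { status(code: number): Res; json(body: unknown): Res };
type Next = () => void;

// Stand-in session check; a real version would look the session up in the
// database or verify a signed token.
function verifySession(sessionId: string | undefined): { id: string } | null {
  return sessionId === "valid-session" ? { id: "user-1" } : null;
}

function authMiddleware(req: Req, res: Res, next: Next) {
  const user = verifySession(req.cookies?.sessionId);
  if (!user) {
    res.status(401).json({ error: "Unauthorized" });
    return; // stop the chain for anonymous requests
  }
  req.user = user; // downstream handlers (like accountant) read req.user.id
  next();
}
```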

That's all we need from the backend.

Frontend

In the front end, let's create a chat widget. Switch to the frontend/ folder.

Create chat.tsx in src/components/modules/dashboard.

Chat UI

Then create a chat box UI with the shadcn Popover component.

// chat.tsx

// Imports omitted for brevity: React hooks, the shadcn Popover/Button
// components, lucide-react icons, and a Markdown renderer.
export default function Chat() {
  const [conversation, setConversation] = useState<
    {
      role: "user" | "assistant";
      content: string;
    }[]
  >([
    {
      role: "assistant",
      content: "Hello, how can I help you today?",
    },
  ]);
  const [liveResponse, setLiveResponse] = useState<string>("");
  const [isThinking, setIsThinking] = useState<boolean>(false);

  // Auto scroll to bottom when new message is added
  const scrollRef = useRef<HTMLDivElement>(null);
  useEffect(() => {
    if (scrollRef.current) {
      scrollRef.current.scrollIntoView({ behavior: "smooth" });
    }
  }, [conversation, liveResponse]);

  return (
    <Popover>
      <PopoverTrigger className="absolute right-4 bottom-4" asChild>
        <Button size={"icon"} className="rounded-full">
          <BotMessageSquareIcon className="w-4 h-4" />
        </Button>
      </PopoverTrigger>
      <PopoverContent align="end" className="w-[500px] h-[600px] p-0 space-y-4">
        <h1 className="text-xl font-bold text-center p-4 pb-0">
          Personal Accountant
        </h1>
        <hr />
        <div className="pt-0 relative h-full">
          <div className="flex flex-col gap-2 h-[calc(100%-150px)] overflow-y-auto px-4 pb-20">
            {conversation.map((message, index) => (
              <div
                key={index}
                className={cn("flex flex-row gap-2 items-start", {
                  "rounded-lg bg-muted p-2 ml-auto flex-row-reverse":
                    message.role === "user",
                })}
              >
                {message.role === "assistant" && (
                  <BotMessageSquareIcon className="w-4 h-4 shrink-0 mt-1.5" />
                )}
                {message.role === "user" && (
                  <UserRoundIcon className="w-4 h-4 shrink-0 mt-1" />
                )}
                <Markdown className="prose prose-sm prose-h1:text-xl prose-h2:text-lg prose-h3:text-base">
                  {message.content}
                </Markdown>
              </div>
            ))}
            {isThinking && (
              <div className="flex flex-row gap-2 items-center">
                <BotMessageSquareIcon className="w-4 h-4" />
                <p className="animate-pulse prose prose-sm">Thinking...</p>
              </div>
            )}
            {liveResponse.length > 0 && (
              <div className="flex flex-row gap-2 items-start">
                <BotMessageSquareIcon className="w-4 h-4 shrink-0 mt-1.5" />
                <Markdown className="prose prose-sm prose-h1:text-xl prose-h2:text-lg prose-h3:text-base">
                  {liveResponse}
                </Markdown>
              </div>
            )}
            <div ref={scrollRef} />
          </div>
          <hr />
         </div>
      </PopoverContent>
    </Popover>
  );
}

Form

Now create a form to handle user input:

const formSchema = z.object({
  message: z.string().min(1),
});

export default function Chat() {
...
  const form = useForm<z.infer<typeof formSchema>>({
    resolver: zodResolver(formSchema),
    defaultValues: {
      message: "",
    },
  });

  ...
      <hr />
          <div className="absolute bottom-20 left-0 right-0 p-4 w-full">
            <Form {...form}>
              <form
                onSubmit={form.handleSubmit(onSubmit)}
                className="flex flex-row gap-2"
              >
                <FormField
                  control={form.control}
                  name="message"
                  render={({ field }) => (
                    <FormItem className="flex-1">
                      <FormControl>
                        <Input placeholder="Ask me anything..." {...field} />
                      </FormControl>
                      <FormMessage />
                    </FormItem>
                  )}
                />
                <Button size={"icon"} type="submit">
                  <SendIcon className="w-4 h-4" />
                </Button>
              </form>
            </Form>
          </div>
    ...
...

Submit handler

Finally, implement the submit handler. As mentioned above, the backend sends the response as a stream, so it's better if we also show the response as it arrives.

...
  const onSubmit = async (data: z.infer<typeof formSchema>) => {
    setIsThinking(true);
    setConversation((prev) => [
      ...prev,
      { role: "user", content: data.message },
    ]);
    form.reset();

    let liveResponse = "";
    try {
      const response = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/ai`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        credentials: "include",
        body: JSON.stringify({
          query: data.message,
        }),
      });

      const reader = response.body?.getReader();
      const decoder = new TextDecoder();

      setIsThinking(false);
      if (!reader) return;

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        const text = decoder.decode(value, { stream: true });

        liveResponse += text;
        setLiveResponse(liveResponse);
      }

      setConversation((prev) => [
        ...prev,
        { role: "assistant", content: liveResponse },
      ]);
    } catch (error) {
      console.error(error);
    } finally {
      setIsThinking(false);
      setLiveResponse("");
    }
  };
...

Note: Make sure to set NEXT_PUBLIC_API_URL in .env.local.

That's it!

If you don't want to deal with all this, you can also use Vercel's AI SDK, which has all the parts you need.


Feel free to ask any questions in the comments below. Follow for more 🎸🎸🎸!
