Louis Beaumont

How to use Llama3.2 to write daily logs in Notion based on your screen

Ever wished you had a personal AI assistant that could keep track of your daily work? With screenpipe & llama3.2, you can now automate the process of writing detailed logs based on your screen activity. Let's dive into how you can set this up using screenpipe's plugin system.

What is screenpipe?

screenpipe is an open-source tool that captures your screen and audio 24/7, extracts text using ocr, and allows you to build personalized ai-powered workflows. It's designed to be secure, with your data staying on your machine.
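Everything it records is also queryable through a local HTTP API. As a quick illustration (the default port 3030, the /search endpoint, and the parameter names are assumptions based on the project's docs; verify them against your version), a query can be built like this:

```typescript
// Build a query URL for screenpipe's local search API.
// Endpoint, port, and parameter names are assumptions -- check your install.
function buildSearchUrl(params: {
  q: string;
  contentType: "ocr" | "audio";
  limit: number;
}): string {
  const url = new URL("http://localhost:3030/search");
  url.searchParams.set("q", params.q);
  url.searchParams.set("content_type", params.contentType);
  url.searchParams.set("limit", String(params.limit));
  return url.toString();
}

// Usage (requires a running screenpipe instance):
// const res = await fetch(buildSearchUrl({ q: "cargo build", contentType: "ocr", limit: 10 }));
// const results = await res.json();
```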

Installing screenpipe

(If you're not on macOS, check these instructions.)

To build the screenpipe app from source on macOS, follow these steps:

  1. Install Rust and necessary dependencies:

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
brew install pkg-config ffmpeg jq tesseract cmake wget
```
  2. Install the Deno CLI:

```shell
curl -fsSL https://deno.land/install.sh | sh
```
  3. Clone the screenpipe repository:

```shell
git clone https://github.com/mediar-ai/screenpipe
cd screenpipe
```
  4. Build the project:

```shell
cargo build --release --features metal
```
  5. To build the desktop app, first add this to your VSCode settings in .vscode/settings.json:

```json
{
   "rust-analyzer.cargo.features": [
      "metal",
      "pipes"
   ],
   "rust-analyzer.server.extraEnv": {
      "DYLD_LIBRARY_PATH": "${workspaceFolder}/screenpipe-vision/lib:${env:DYLD_LIBRARY_PATH}",
      "SCREENPIPE_APP_DEV": "true"
   },
   "rust-analyzer.cargo.extraEnv": {
      "DYLD_LIBRARY_PATH": "${workspaceFolder}/screenpipe-vision/lib:${env:DYLD_LIBRARY_PATH}",
      "SCREENPIPE_APP_DEV": "true"
   },
   "terminal.integrated.env.osx": {
      "DYLD_LIBRARY_PATH": "${workspaceFolder}/screenpipe-vision/lib:${env:DYLD_LIBRARY_PATH}",
      "SCREENPIPE_APP_DEV": "true"
   }
}
```
  6. Then, build the desktop app:

```shell
cd screenpipe-app-tauri
bun install
bun scripts/pre_build.js
bun tauri build
```
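Once cargo finishes, the CLI binary lands under target/release (standard cargo layout; the binary name here is an assumption, so check the build output):

```shell
# Sanity-check the freshly built CLI (binary name assumed from the crate name)
./target/release/screenpipe --help
```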

This process will build the screenpipe app from source on your macOS system. If you encounter any issues, you can open an issue on the GitHub repository for assistance.

Start screenpipe by clicking start, or through the CLI in dev mode.

Once you've installed screenpipe, add the plugin by pasting this URL into the "add your own pipe" bar:

https://github.com/mediar-ai/screenpipe/tree/main/examples/typescript/pipe-phi3.5-engineering-team-logs

Use llama3.2:3b-instruct-q4_K_M as the model.

You should also create and configure your Notion integration (instructions on the plugin page in screenpipe app).
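For the sync to work, the Notion database's columns must line up with the properties the plugin writes. A minimal sketch of that mapping, assuming the column names Title (title), Description (rich text), Tags (multi-select), and Date (date) used by the plugin:

```typescript
interface EngineeringLog {
  title: string;
  description: string;
  tags: string[];
}

// Map a log entry to the properties payload passed to notion.pages.create.
// The property names must exactly match the database's column names.
function toNotionProperties(entry: EngineeringLog) {
  return {
    Title: { title: [{ text: { content: entry.title } }] },
    Description: { rich_text: [{ text: { content: entry.description } }] },
    Tags: { multi_select: entry.tags.map((tag) => ({ name: tag })) },
    Date: { date: { start: new Date().toISOString() } },
  };
}
```

If a column is missing or has a different type, notion.pages.create rejects the request with a validation error.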

Start the LLM in the settings (or manually through Ollama).

If you are on Windows or Linux, you need to run Ollama yourself.
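In that case, pull the model and start the server with Ollama's standard CLI (the model tag matches the one configured above; Ollama's default server port is 11434):

```shell
# Fetch the quantized llama3.2 model used by the plugin
ollama pull llama3.2:3b-instruct-q4_K_M
# Start the Ollama server (listens on http://localhost:11434 by default)
ollama serve
```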

Restart screenpipe by clicking stop and start, or through the CLI in dev mode.

You should start getting notifications in the screenpipe AI inbox.

BTW, the code to do this is quite simple:

import { ContentItem, pipe } from "screenpipe";
import { Client } from "npm:@notionhq/client";
import { z } from "zod";
import { generateObject } from "ai";
import { createOllama } from "ollama-ai-provider";

const engineeringLog = z.object({
  title: z.string(),
  description: z.string(),
  tags: z.array(z.string()),
});

type EngineeringLog = z.infer<typeof engineeringLog>;

async function generateEngineeringLog(
  screenData: ContentItem[],
  ollamaModel: string,
  ollamaApiUrl: string
): Promise<EngineeringLog> {
  const prompt = `Based on the following screen data, generate a concise engineering log entry:

    ${JSON.stringify(screenData)}

    Focus only on engineering work. Ignore non-work related activities.
    Return a JSON object with the following structure:
    {
        "title": "Brief title of the engineering task",
        "description": "Concise description of the engineering work done",
        "tags": ["tag1", "tag2", "tag3"]
    }
    Provide 1-3 relevant tags related to the engineering work.`;

  const provider = createOllama({ baseURL: ollamaApiUrl });

  const response = await generateObject({
    model: provider(ollamaModel),
    messages: [{ role: "user", content: prompt }],
    schema: engineeringLog,
  });

  console.log("ai answer:", response);

  return response.object;
}

async function syncLogToNotion(
  logEntry: EngineeringLog,
  notion: Client,
  databaseId: string
): Promise<void> {
  try {
    console.log("syncLogToNotion", logEntry);
    await notion.pages.create({
      parent: { database_id: databaseId },
      properties: {
        Title: { title: [{ text: { content: logEntry.title } }] },
        Description: {
          rich_text: [{ text: { content: logEntry.description } }],
        },
        Tags: { multi_select: logEntry.tags.map((tag) => ({ name: tag })) },
        Date: { date: { start: new Date().toISOString() } },
      },
    });

    console.log("engineering log synced to notion successfully");

    // Create markdown table for inbox
    const markdownTable = `
| Title | Description | Tags |
|-------|-------------|------|
| ${logEntry.title} | ${logEntry.description} | ${logEntry.tags.join(", ")} |
    `.trim();

    await pipe.inbox.send({
      title: "engineering log synced",
      body: `new engineering log entry:\n\n${markdownTable}`,
    });
  } catch (error) {
    console.error("error syncing engineering log to notion:", error);
    await pipe.inbox.send({
      title: "engineering log error",
      body: `error syncing engineering log to notion: ${error}`,
    });
  }
}

function streamEngineeringLogsToNotion(): void {
  console.log("starting engineering logs stream to notion");

  const config = pipe.loadPipeConfig();
  console.log("loaded config:", JSON.stringify(config, null, 2));

  const interval = config.interval * 1000;
  const databaseId = config.notionDatabaseId;
  const apiKey = config.notionApiKey;
  const ollamaApiUrl = config.ollamaApiUrl;
  const ollamaModel = config.ollamaModel;

  const notion = new Client({ auth: apiKey });

  pipe.inbox.send({
    title: "engineering log stream started",
    body: `monitoring engineering work every ${config.interval} seconds`,
  });

  pipe.scheduler
    .task("generateEngineeringLog")
    .every(interval)
    .do(async () => {
      try {
        const now = new Date();
        // look back one polling interval, not a fixed hour
        const windowStart = new Date(now.getTime() - interval);

        const screenData = await pipe.queryScreenpipe({
          startTime: windowStart.toISOString(),
          endTime: now.toISOString(),
          limit: 50,
          contentType: "ocr",
        });

        if (screenData && screenData.data.length > 0) {
          const logEntry = await generateEngineeringLog(
            screenData.data,
            ollamaModel,
            ollamaApiUrl
          );
          await syncLogToNotion(logEntry, notion, databaseId);
        } else {
          console.log("no relevant engineering work detected in the last interval");
        }
      } catch (error) {
        console.error("error in engineering log pipeline:", error);
        await pipe.inbox.send({
          title: "engineering log error",
          body: `error in engineering log pipeline: ${error}`,
        });
      }
    });

  pipe.scheduler.start();
}

streamEngineeringLogsToNotion();
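One caveat in the inbox notification above: if the model emits a `|` in the title or description, it breaks the markdown table. A small, hypothetical helper (not part of the plugin) that escapes cells before building the row:

```typescript
interface EngineeringLog {
  title: string;
  description: string;
  tags: string[];
}

// Escape pipe characters so model output cannot break the table layout.
function escapeCell(text: string): string {
  return text.replace(/\|/g, "\\|");
}

// Build the three-line markdown table sent to the screenpipe inbox.
function toMarkdownTable(entry: EngineeringLog): string {
  return [
    "| Title | Description | Tags |",
    "|-------|-------------|------|",
    `| ${escapeCell(entry.title)} | ${escapeCell(entry.description)} | ${entry.tags.join(", ")} |`,
  ].join("\n");
}
```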

Any ideas for other interesting plugins that could be made?

Feel free to join our Discord!
