Connie Leung

Build a sentiment classifier with Chrome's Prompt API in Angular

In this blog post, I describe how to build a sentiment classification application locally using Chrome's Built-in Prompt API and Angular. The Angular application calls the Prompt API to create a language model and submits queries to Gemini Nano to classify text as positive or negative.

The benefit of using Chrome's built-in AI is zero cost, since the application uses the local Gemini Nano model bundled with the browser. This is the happy path when users are on Chrome Dev or Chrome Canary. For users on non-Chrome or older Chrome browsers, a fallback implementation should be available, such as calling Gemma or Gemini on Vertex AI to return the sentiment.
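Such a fallback can be as simple as preferring the on-device classifier when it exists. A minimal sketch, where `SentimentFn` and `chooseClassifier` are illustrative names and the cloud classifier is a hypothetical placeholder, not part of this demo:

```typescript
// Sketch only: prefer the free on-device classifier, fall back to the cloud.
// `SentimentFn` and `chooseClassifier` are illustrative names, not from the demo.
type SentimentFn = (text: string) => Promise<string>;

export function chooseClassifier(
  localModel: SentimentFn | undefined,
  cloudFallback: SentimentFn,
): SentimentFn {
  // Gemini Nano in the browser costs nothing; only call the cloud when it is absent.
  return localModel ?? cloudFallback;
}
```

The local classifier would wrap the Prompt API session built later in this post, while the fallback would call a Vertex AI endpoint.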

Install Gemini Nano on Chrome

Update Chrome Dev/Canary to the latest version. As of this writing, the newest version of Chrome Canary is 133.

Please refer to this section to sign up for the early preview program of Chrome Built-in AI.
https://developer.chrome.com/docs/ai/built-in#get_an_early_preview

Please refer to this section to enable Gemini Nano on Chrome and download the model. https://developer.chrome.com/docs/ai/get-started#use_apis_on_localhost

Disable text safety classifier on Chrome

  • (Local Development) Go to chrome://flags/#text-safety-classifier.
  • (Local Development) Select Disabled
  • Click Relaunch or restart Chrome.

Scaffold an Angular Application

ng new prompt-api-demo

Install dependencies

npm i --save-exact --save-dev @types/dom-chromium-ai

This dependency provides the TypeScript typings for all the Chrome built-in AI APIs, so developers can write elegant code to build AI applications in TypeScript.

In main.ts, add a reference tag to point to the package's typing definition file.

// main.ts

/// <reference path="../../../node_modules/@types/dom-chromium-ai/index.d.ts" />   

Bootstrap the language model

import { EnvironmentProviders, InjectionToken, PLATFORM_ID, inject, makeEnvironmentProviders } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';

export const AI_PROMPT_API_TOKEN = new InjectionToken<AILanguageModelFactory | undefined>('AI_PROMPT_API_TOKEN');

export function provideLanguageModel(): EnvironmentProviders {
   return makeEnvironmentProviders([
       {
           provide: AI_PROMPT_API_TOKEN,
           useFactory: () => {
               // Access window.ai only in the browser; return undefined during SSR.
               const platformId = inject(PLATFORM_ID);
               const objWindow = isPlatformBrowser(platformId) ? window : undefined;
               return objWindow?.ai?.languageModel;
           },
       }
   ]);
}

I define environment providers to return the languageModel in the window.ai namespace. When the code injects the AI_PROMPT_API_TOKEN token, it can access the Prompt API and call its methods to submit queries to Gemini Nano.

// app.config.ts

export const appConfig: ApplicationConfig = {
  providers: [
    provideLanguageModel()
  ]
};

In the application config, provideLanguageModel is added to the providers array.

Validate browser version and API availability

Chrome's built-in AI is in experimental status, and the Prompt API is supported in Chrome version 131 and later. Therefore, I implemented validation logic to ensure the API is available before displaying the user interface where users enter text.

The validation rules include:

  • The browser is Chrome
  • The browser version is at least 131
  • The ai object exists in the window namespace
  • The Prompt API's status is readily

export async function checkChromeBuiltInAI(): Promise<string> {
  if (!isChromeBrowser()) {
     throw new Error(ERROR_CODES.NOT_CHROME_BROWSER);
  }

  if (getChromeVersion() < CHROME_VERSION) {
     throw new Error(ERROR_CODES.OLD_BROWSER);
  }

  if (!('ai' in globalThis)) {
     throw new Error(ERROR_CODES.NO_PROMPT_API);
  }

  const assistant = inject(AI_PROMPT_API_TOKEN);
  const status = (await assistant?.capabilities())?.available;
  if (!status) {
     throw new Error(ERROR_CODES.API_NOT_READY);
  } else if (status === 'after-download') {
     throw new Error(ERROR_CODES.AFTER_DOWNLOAD);
  } else if (status === 'no') {
     throw new Error(ERROR_CODES.NO_LARGE_LANGUAGE_MODEL);
  }

  return '';
}

The checkChromeBuiltInAI function ensures the Prompt API is defined and ready to use. If checking fails, the function throws an error. Otherwise, it returns an empty string.
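The helpers referenced above (ERROR_CODES, CHROME_VERSION, isChromeBrowser, getChromeVersion) are not listed in the post. A minimal sketch, assuming the version is parsed from the user-agent string and the error messages are free-form:

```typescript
// Sketch of the helpers used by checkChromeBuiltInAI; the exact messages are assumptions.
export const CHROME_VERSION = 131;

export const ERROR_CODES = {
  NOT_CHROME_BROWSER: 'Please use a Chrome browser.',
  OLD_BROWSER: 'Please upgrade Chrome to version 131 or later.',
  NO_PROMPT_API: 'The ai object is not found in the window namespace.',
  API_NOT_READY: 'The Prompt API is not ready.',
  AFTER_DOWNLOAD: 'The model is available after download.',
  NO_LARGE_LANGUAGE_MODEL: 'The model is not available on this device.',
} as const;

// Extract the major Chrome version from a user-agent string; 0 when absent.
export function parseChromeVersion(userAgent: string): number {
  const match = userAgent.match(/Chrome\/(\d+)/);
  return match ? Number(match[1]) : 0;
}
```

getChromeVersion() would simply call parseChromeVersion(navigator.userAgent), and isChromeBrowser() can test the same string.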

export function isPromptAPISupported(): Observable<string> {
  return from(checkChromeBuiltInAI()).pipe(
     catchError(
        (e) => {
           console.error(e);
           return of(e instanceof Error ? e.message : 'unknown');
        }
     )
  );
}

The isPromptAPISupported function catches the error and returns an Observable of the error message.

Display the AI components

@Component({
    selector: 'app-detect-ai',
    imports: [PromptShowcaseComponent],
    template: `
    <div>
      @let error = hasCapability();
      @if (!error) {
        <app-prompt-showcase />
      } @else if (error !== 'unknown') {
        {{ error }}
      }
    </div>
  `
})
export class DetectAIComponent {
  hasCapability = toSignal(isPromptAPISupported(), { initialValue: '' });
}

The DetectAIComponent renders the PromptShowcaseComponent when there is no error. Otherwise, it displays the error message stored in the hasCapability signal.

// prompt-showcase.component.ts 

@Component({
   selector: 'app-prompt-showcase',
   imports: [NgComponentOutlet],
   template: `
   @let outlet = componentOutlet();
   <ng-container [ngComponentOutlet]="outlet.component" [ngComponentOutletInputs]="outlet.inputs" />
 `,
   changeDetection: ChangeDetectionStrategy.OnPush
})
export class PromptShowcaseComponent {
   promptService = inject(ZeroPromptService);
   componentOutlet = computed(() => {  
      return {
        component: NShotsPromptComponent,
        inputs: {}
      }
   });
}

The PromptShowcaseComponent renders the NShotsPromptComponent dynamically.

N-Shots Prompt Component

// n-shots-prompt.component.ts

@Component({
   selector: 'app-n-shot-prompt',
   imports: [FormsModule],
   template: `
   <div>
     <h3>N-shots prompting</h3>
     @let myState = state();
     <div>
       <span class="label" for="input">Prompt: </span>
       <textarea id="input" name="input" [(ngModel)]="query" [disabled]="myState.disabled" rows="3"></textarea>
     </div>
     <button (click)="submitPrompt()" [disabled]="myState.submitDisabled">{{ myState.text }}</button>
     <div>
       <span class="label">Response: </span>
       <p [innerHTML]="response() | lineBreak"></p>
     </div>
     @if (error()) {
       <div>
         <span class="label">Error: </span>
         <p>{{ error() }}</p>
       </div>
     }
   </div>`,
   styleUrl: './prompt.component.css',
   providers: [
       {
           provide: AbstractPromptService,
           useClass: NShotsPromptService,
       }
   ],
   changeDetection: ChangeDetectionStrategy.OnPush
})
export class NShotsPromptComponent extends BasePromptComponent {
 initialPrompts = signal<LanguageInitialPrompt>([
   { role: 'system', content: `You are an expert in determining the sentiment of a text.
   If it is positive, say 'positive'. If it is negative, say 'negative'. If you are not sure, then say 'not sure'` },
   { role: 'user', content: "The food is affordable and delicious, and the venue is close to the train station." },
   { role: 'assistant', content: "positive" },
   { role: 'user', content: "The waiters are very rude, the food is salty, and the drinks are sour." },
   { role: 'assistant', content: "negative" },
   { role: 'user', content: "Google is a company" },
   { role: 'assistant', content: "not sure" },
   { role: 'user', content: "The weather is hot and sunny today." },
   { role: 'assistant', content: "postive" }
 ]);

 constructor() {
   super();
   this.query.set('The toilet has no toilet paper again.');
   this.promptService.setPromptOptions({ initialPrompts: this.initialPrompts() });
 }
}

The NShotsPromptComponent displays a text area for a user to enter the query and a submit button to submit it to the LLM to generate a sentiment. The initialPrompts signal stores some examples of positive and negative sentiments. The first entry of the signal is a system prompt that describes the context of the problem. In this demo, the Gemini Nano is asked to determine the sentiment of a sentence. If the sentiment is positive, the result is 'positive'. If the sentiment is negative, the result is 'negative'. Otherwise, the result is 'not sure'.
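The template renders the response through a lineBreak pipe that is not listed in the post. Its transform presumably converts newlines into HTML line breaks for the [innerHTML] binding; a sketch of that logic as a plain function (the replacement markup is an assumption):

```typescript
// Sketch of the lineBreak pipe's transform: newline characters become <br /> tags
// so multi-line model output renders inside [innerHTML].
export function lineBreakTransform(value: string): string {
  return value.replace(/\n/g, '<br />');
}
```

An Angular pipe named lineBreak would wrap this function in its transform method.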

constructor() {
   super();
   this.query.set('The toilet has no toilet paper again.');
   this.promptService.setPromptOptions({ initialPrompts: this.initialPrompts() });
}

The constructor of the component sets the initial value of the query and calls the NShotsPromptService to update the initial prompts of the Prompt API.

Base Component

@Directive({
   standalone: false
})
export abstract class BasePromptComponent {
   promptService = inject(AbstractPromptService);
    session = this.promptService.session;

   isLoading = signal(false);
   error = signal('');
   query = signal('Tell me about the job responsibility of an A.I. engineer, maximum 500 words.');
   response = signal('');

   state = computed(() => {
       const isLoading = this.isLoading();
       const isUnavailableForCall = isLoading || this.query().trim() === '';
       return {
           status: isLoading ? 'Processing...' : 'Idle',
           text: isLoading ? 'Progressing...' : 'Submit',
           disabled: isLoading,
           submitDisabled: isUnavailableForCall
       }
   });

    async submitPrompt() {
     try {
       this.isLoading.set(true);
       this.error.set('');
       this.response.set('');
       const answer = await this.promptService.prompt(this.query());
       this.response.set(answer);
     } catch(e) {
       const errMsg = e instanceof Error ? e.message : 'Error in submitPrompt';
       this.error.set(errMsg);
     } finally {
       this.isLoading.set(false);
     }
   }
 }

The BasePromptComponent provides the submit functionality and signals to hold the query, response, and view states.

The submitPrompt method submits the query to Gemini Nano to generate text and assigns it to the response signal. While the LLM is busy, the isLoading signal is set to true and the UI elements (text area and button) are disabled. When the signal is set back to false, the UI elements are enabled.

Define a service layer over the Prompt API

The NShotsPromptService service encapsulates the logic of the Prompt API.

The createPromptSession method creates a session with the initial prompts. When the service is destroyed, the ngOnDestroy lifecycle hook destroys the session to avoid memory leaks.

@Injectable({
 providedIn: 'root'
})
export class NShotsPromptService extends AbstractPromptService implements OnDestroy  {
 #controller = new AbortController();

 override async createPromptSession(options?: PromptOptions): Promise<AILanguageModel | undefined> {
   const { initialPrompts = undefined } = options || {};
   return this.promptApi?.create({ initialPrompts, signal: this.#controller.signal });
 }

 ngOnDestroy(): void {
   this.destroySession();
 }
}

The AbstractPromptService defines standard methods other prompt services can inherit.

The createSessionIfNotExists method creates a session and keeps it in the #session signal for reuse. A session is recreated when the old one has very few tokens remaining (< 500).

export abstract class AbstractPromptService {
   promptApi = inject(AI_PROMPT_API_TOKEN);
   #session = signal<AILanguageModel | undefined>(undefined);
   #tokenContext = signal<Tokenization | null>(null);
   #options = signal<PromptOptions | undefined>(undefined);

   resetSession(newSession: AILanguageModel | undefined) {
       this.#session.set(newSession);
       this.#tokenContext.set(null);
   }

   shouldCreateSession() {
       const session = this.#session();
       const context = this.#tokenContext();
       return !session || (context && context.tokensLeft < 500);
   }

   setPromptOptions(options?: PromptOptions) {
       this.#options.set(options);
   }

   async createSessionIfNotExists(): Promise<void> {
     if (this.shouldCreateSession()) {
        this.destroySession();
        const newSession = await this.createPromptSession(this.#options());
        if (!newSession) {
           throw new Error('Prompt API failed to create a session.');      
        }
        this.resetSession(newSession);
     }
   }
}

The abstract createPromptSession method allows concrete services to implement different kinds of sessions. A session can have no prompt, a system prompt, or an array of initial prompts.

abstract createPromptSession(options?: PromptOptions): Promise<AILanguageModel | undefined>;
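For instance, the n-shot initial prompts used earlier can be assembled from labelled examples with a small helper. This is a sketch: buildSentimentPrompts is not part of the demo, and the prompt shape is assumed to be the role/content pairs shown in the initialPrompts signal.

```typescript
// Sketch: build an n-shot initial-prompt list from labelled examples.
// The role/content shape mirrors the initialPrompts signal shown earlier.
type PromptRole = 'system' | 'user' | 'assistant';

interface InitialPrompt {
  role: PromptRole;
  content: string;
}

export function buildSentimentPrompts(
  systemPrompt: string,
  examples: Array<[text: string, label: string]>,
): InitialPrompt[] {
  return [
    // The system prompt sets the task; each example adds a user/assistant pair.
    { role: 'system', content: systemPrompt },
    ...examples.flatMap(([text, label]): InitialPrompt[] => [
      { role: 'user', content: text },
      { role: 'assistant', content: label },
    ]),
  ];
}
```

The resulting array can be passed to setPromptOptions as the initialPrompts option.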

The prompt method creates a session when one does not exist. Then, the session accepts a query to generate and return the text.

async prompt(query: string): Promise<string> {
       if (!this.promptApi) {
           throw new Error(ERROR_CODES.NO_PROMPT_API);
       }

       await this.createSessionIfNotExists();
       const session = this.#session();
       if (!session) {
           throw new Error('Session does not exist.');      
       }
       const answer = await session.prompt(query);
       return answer;
}

The destroySession method destroys the session and resets the signals in the service.

destroySession() {
    const session = this.session();

    if (session) {
        session.destroy();
        console.log('Destroy the prompt session.');
        this.resetSession(undefined);
    }
}

In conclusion, software engineers can create web AI applications without setting up a backend server or incurring the cost of calling an LLM in the cloud.
