DEV Community

Thomas Bnt

Posted on • Edited on • Originally published at thomasbnt.dev

Perspective API

What is Perspective?

Perspective is a free API that uses machine learning to identify "toxic" content, making it easier to host better online conversations.

By scoring a sentence based on the perceived impact the text may have in a conversation, developers and editors can use this score to provide feedback to commenters, help moderators review comments more easily, or help readers filter out "toxic" language. Perspective provides scores for several attributes, such as:

  • Severe toxicity
  • Insults
  • Profanity
  • Identity attacks
  • Threats
  • Sexually explicit content
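As a sketch of how these attributes map onto requests and responses: the attribute identifiers below are the real ones from the Perspective docs, but the helper functions are my own illustration, not part of the API client.

```javascript
// The attribute identifiers Perspective recognizes (from the official docs).
const ATTRIBUTES = [
  'TOXICITY',
  'SEVERE_TOXICITY',
  'INSULT',
  'PROFANITY',
  'IDENTITY_ATTACK',
  'THREAT',
  'SEXUALLY_EXPLICIT',
]

// Build the requestedAttributes object the API expects,
// e.g. requestedAttributes(['TOXICITY']) → { TOXICITY: {} }
function requestedAttributes (names = ATTRIBUTES) {
  return Object.fromEntries(names.map(name => [name, {}]))
}

// Pull the per-attribute summary scores back out of a response body,
// e.g. { TOXICITY: 0.91, INSULT: 0.87 }
function summaryScores (responseData) {
  return Object.fromEntries(
    Object.entries(responseData.attributeScores)
      .map(([name, score]) => [name, score.summaryScore.value])
  )
}
```

Each summary score is a probability between 0 and 1: the higher it is, the more likely readers are to perceive the comment as having that attribute.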

Perspective API solutions

Perspective gives us a solid way to catch threatening or abusive messages before they spread, whether in a comment section, a forum, or a live chat. There are many features you can build with this API.

Perspective API is a very good way to filter out insults and toxic phrases.
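For example, here is a hypothetical moderation rule built on top of the toxicity score. The 0.8 cut-off is my own assumption for illustration, not a value recommended by the API; each platform should tune its own thresholds.

```javascript
// Hypothetical moderation rule: decide what to do with a comment based
// on its TOXICITY summary score. The 0.8 threshold is an assumption,
// not an official Perspective recommendation.
function moderationAction (toxicityScore, threshold = 0.8) {
  if (toxicityScore >= threshold) return 'hold-for-review'
  if (toxicityScore >= threshold / 2) return 'show-with-warning'
  return 'publish'
}
```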

Toxicity online poses a serious challenge for platforms and publishers. Online abuse and harassment silence important voices in conversation, forcing already marginalized people offline.

An example of code

The example below is in JavaScript, but the API works the same way from any other language.

See the developer documentation.

// From the official documentation, slightly modified
// https://developers.perspectiveapi.com/s/docs-sample-requests

const {google} = require('googleapis')
require('dotenv').config()

const CONTENT = "You're really crap at this game"

// Create an .env file containing GOOGLE_API_KEY.
const API_KEY = process.env.GOOGLE_API_KEY
const DISCOVERY_URL =
  'https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1'

google.discoverAPI(DISCOVERY_URL).then(client => {
  const analyzeRequest = {
    comment: {
      text: CONTENT,
    },
    requestedAttributes: {
      TOXICITY: {},
    },
  }

  console.info(`Input Text : ${analyzeRequest.comment.text}`)

  client.comments.analyze(
    {
      key: API_KEY,
      resource: analyzeRequest,
    },
    (err, response) => {
      if (err) throw err
      const scoreValue = response.data.attributeScores.TOXICITY.summaryScore.value
      console.log(`TOXICITY Score : ${scoreValue}`)
      console.log(JSON.stringify(response.data, null, 2))
    })
}).catch(err => {
  throw err
})

It's easy to set up: just install googleapis and dotenv, get your Perspective API key, and run the code. 🎉

yarn add googleapis dotenv

Or if you prefer npm:

npm i googleapis dotenv
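If you would rather not pull in the googleapis client at all, here is a minimal sketch of the same call against the raw comments:analyze REST endpoint, using the fetch built into Node 18+. The endpoint path comes from the official docs; the helper function names and the bare-bones error handling are my own.

```javascript
// Sketch: call Perspective's comments:analyze REST endpoint directly
// with Node 18+'s built-in fetch. Requires GOOGLE_API_KEY in the env.
const API_KEY = process.env.GOOGLE_API_KEY

// Build the JSON body comments:analyze expects (pure, easy to test).
function buildAnalyzeBody (text) {
  return {
    comment: { text },
    requestedAttributes: { TOXICITY: {} },
  }
}

async function analyzeComment (text) {
  const res = await fetch(
    `https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(buildAnalyzeBody(text)),
    }
  )
  if (!res.ok) throw new Error(`Perspective API error: ${res.status}`)
  const data = await res.json()
  return data.attributeScores.TOXICITY.summaryScore.value
}
```

This trades the discovery client's flexibility for fewer dependencies; both approaches hit the same v1alpha1 API.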

I have already built an open source project on GitHub called No Toxic Discussions. It's a GitHub Action that reads messages posted in a repository's Discussions area and checks whether their content is toxic.

GitHub: thomasbnt / NoToxicDiscussions

No Toxic Discussions, a GitHub Action to detect toxicity in discussions areas.

There is also a DEV post about this Action. Click here to read it.


Credits

Some of the text was copied from the official website, and the "Perspective API" logo in the banner comes from Jigsaw, part of Google. The source code was taken from the example on their website and modified so that the result is visible; it is based on the changes I made for the No Toxic Discussions project.

Check out my Twitter account, where you can follow many projects and updates. You can also support me on Buy Me a Coffee, Stripe or GitHub Sponsors. Thanks for reading my post! 🤩
