AWS + JavaScript + WordPress = Fun Content Automation Strategies Using Artificial Intelligence

Months ago, I began collaborating on a project centered on AI-generated content for a client in the tech sector. My role was mostly focused on setting up static site generation (SSG) with WordPress as a headless CMS for a Nuxt front-end.

The client used to write articles a couple of times per week about different trends or situations affecting the sector, in the hope of increasing traffic to the site. To raise his output of articles, he decided to use AI to generate them for him.

After some time and with the right prompts, the client had pieces that were close to an exact match for a human-written article; it is genuinely difficult to spot that they are machine-made.

Sometime after, I moved on to working on different features, but I would continuously get asked one specific thing:

Hey, can you update the featured image for this article?

After two weeks of updating posts daily, I had a small eureka moment.


Why don't I automate the featured image generation for these articles using Artificial Intelligence?

We had already automated post writing, so why not automate the featured images too?

In my free time, I had been experimenting with generative LLMs on my computer, so I had a solid idea of more or less how to tackle this side-quest. I sent a message to the client detailing the problem, what I wanted to do, and what the advantages would be. Without having to do any convincing, I got the green light to work on this feature, and right away I went to my first step.

1. Architecting how the solution was going to look

Given that I had some exposure to running models locally, I knew right away it was not feasible to self-host them. With that discarded, I started to play around with APIs that generate images from text prompts.

Featured images consisted of two parts: the main composed graphic and a catchy tagline.

The composed graphic would be a set of elements related to the article, arranged in a nice way, with colors and textures layered on top and blend modes applied to achieve some fancy effects in line with the branding.

Taglines were short sentences of 8-12 words with a simple drop shadow under them.

Based on my testing, I realized that pursuing the AI route for image generation wasn't practical: the image quality didn't meet expectations, and the process was too time-consuming to justify, especially considering this would run as an AWS Lambda function, where execution time directly impacts costs.

With that discarded, I went with Plan B: mashing images and design assets together using JavaScript's Canvas API.

Taking a deep look, we had mainly 5 styles of simple posts and around 4 types of textures, with 3 of the styles using the same text alignment, style, and position. After doing some math I thought:

Hmm, if I take these 3 images, grab 8 textures, and play with blend modes, I can get around 24 post variations.

Given that those 3 types of posts had the same text style, it was practically one template, so the variety comes almost entirely from combining it with textures and blend modes. A sketch of that combination logic follows below.
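To make the idea concrete, here is a minimal sketch of that combination logic using node-canvas. The template and texture file names are made up for illustration; 3 templates × 8 textures already gives the 24 base variations, and blend modes multiply that further.

// Hypothetical sketch: blending a random texture over a random template.
import { createCanvas, loadImage } from "canvas";

const TEMPLATES = ["templates/a.jpg", "templates/b.jpg", "templates/c.jpg"]; // the 3 same-style posts
const TEXTURES = Array.from({ length: 8 }, (_, i) => `textures/${i + 1}.png`);
const BLEND_MODES = ["multiply", "overlay", "screen"]; // valid canvas globalCompositeOperation values

const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];

const canvas = createCanvas(1118, 806);
const ctx = canvas.getContext("2d");

// Draw the base template, then blend a texture over it
ctx.drawImage(await loadImage(pick(TEMPLATES)), 0, 0, canvas.width, canvas.height);
ctx.globalCompositeOperation = pick(BLEND_MODES);
ctx.drawImage(await loadImage(pick(TEXTURES)), 0, 0, canvas.width, canvas.height);
ctx.globalCompositeOperation = "source-over"; // reset before drawing text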

With that settled, I moved to the Tagline Generator. I wanted to create a tagline based on the content and title of the article. I decided to use ChatGPT's API, given that the company was already paying for it, and after some experimenting and prompt tweaking, I had a very good MVP for my tagline generator.

With the 2 hardest parts of the task figured out, I spent some time in Figma putting together the diagram for the final architecture of my service.

Software Architecture Diagram

2. Coding my Lambda

The plan was to create a Lambda function capable of analyzing post content, generating a tagline, and assembling a featured image—all seamlessly integrated with WordPress.

I will provide some code, but just enough to communicate the overall idea.

Analyzing the content

The Lambda function starts by extracting the necessary parameters from the incoming event payload:

const { title: request_title, content, backend, app_password } = JSON.parse(event.body);

  • title and content: These provide the article’s context.
  • backend: The WordPress backend URL for image uploads.
  • app_password: The authentication token I’m going to use to upload as my user via the WordPress REST API.

Generating the Tagline

The function’s first major task is to generate a tagline via the analyzeContent function, which uses OpenAI’s API to craft a click-worthy tagline based on the article’s title and content.

The function takes the post title and content and returns a tagline, the post’s sentiment (whether the post is a positive, negative, or neutral opinion), and an optional company symbol from the S&P index companies.

const { tagline, sentiment, company } = await analyzeContent({ title: request_title, content });

This step is critical, as the tagline directly influences the image’s aesthetics.
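For reference, here is a minimal sketch of what ai-analysis.js could look like, assuming OpenAI's Chat Completions endpoint with JSON-object output; the model name and prompt wording are illustrative, not the production values.

// Hypothetical sketch of ai-analysis.js — model and prompt are illustrative.
export async function analyzeContent({ title, content }) {
    const response = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
            Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
            "Content-Type": "application/json",
        },
        body: JSON.stringify({
            model: "gpt-4o-mini",
            response_format: { type: "json_object" },
            messages: [{
                role: "user",
                content: `Return a JSON object with "tagline" (a click-worthy sentence of 8-12 words), ` +
                    `"sentiment" ("positive", "negative" or "neutral") and "company" ` +
                    `(an S&P company symbol or null) for this article.\n\n` +
                    `Title: ${title}\n\nContent: ${content}`,
            }],
        }),
    });

    const data = await response.json();
    // The prompt asks for a single JSON object, so the reply parses directly
    return JSON.parse(data.choices[0].message.content);
}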

Creating the Featured Image

Next, the generateImage function kicks in:

const buffer = await generateImage({
    title: tagline,
    company_logo: company_logo,
    sentiment: sentiment,
});


This function handles:

  • Designing the composition.
  • Layering textures, colors, and branding elements.
  • Applying effects and creating the title.

Here is a step-by-step breakdown of how it works:

The generateImage function begins by setting up a blank canvas, defining its dimensions, and preparing it to handle all the design elements.

// Imports used by this excerpt: node-canvas plus Node built-ins
import fs from "fs";
import path from "path";
import { fileURLToPath } from "url";
import { createCanvas, loadImage, registerFont } from "canvas";

const COLOURS = {
    BLUE: "#33b8e1",
    BLACK: "#000000",
}

// Resolve the bundled image assets relative to this module
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const images_path = path.join(__dirname, 'images/');
const files_length = fs.readdirSync(images_path).length;
const images_folder = process.env.ENVIRONMENT === "local"
    ? "./images/" : "/var/task/images/"; // Lambda unpacks the deployment bundle under /var/task

// Fonts must be registered before the canvas is created
registerFont("/var/task/fonts/open-sans.bold.ttf", { family: "OpenSansBold" });
registerFont("/var/task/fonts/open-sans.regular.ttf", { family: "OpenSans" });

console.log("1. Created canvas");

const canvas = createCanvas(1118, 806);

// Pick a random numbered background (1.jpg .. N.jpg)
let image = await loadImage(`${images_folder}/${Math.floor(Math.random() * files_length) + 1}.jpg`);

let textBlockHeight = 0;

console.log("2. Image loaded");

const canvasWidth = canvas.width;
const canvasHeight = canvas.height;
const aspectRatio = image.width / image.height;

console.log("3. Defined ASPECT RATIO");

// Scale the background to cover the canvas without distortion
let drawWidth, drawHeight;
if (image.width > image.height) {
    // Landscape orientation: fit by width
    drawWidth = canvasWidth;
    drawHeight = canvasWidth / aspectRatio;
} else {
    // Portrait orientation: fit by height
    drawHeight = canvasHeight;
    drawWidth = canvasHeight * aspectRatio;
}

// Center the image
const x = (canvasWidth - drawWidth) / 2;
const y = (canvasHeight - drawHeight) / 2;
const ctx = canvas.getContext("2d");
console.log("4. Centered Image");
ctx.drawImage(image, x, y, drawWidth, drawHeight);


From there, a random background image is loaded from a predefined collection of assets. These images were curated to suit the tech-oriented branding while allowing for enough variety across posts; the background is picked at random from the pool that matches the article's sentiment.

To ensure each background image looked great, I calculated its dimensions dynamically based on the aspect ratio. This avoids distortions while keeping the visual balance intact.
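The excerpt above grabs any numbered file; a sentiment-aware version could look like the sketch below, assuming the backgrounds are grouped into folders named after each sentiment (that folder layout is my assumption, not the production one).

// Hypothetical sketch: pick a random background from the matching sentiment folder
const pickBackground = async (sentiment) => {
    const folder = path.join(images_folder, sentiment); // e.g. images/positive/
    const files = fs.readdirSync(folder);
    return loadImage(path.join(folder, files[Math.floor(Math.random() * files.length)]));
};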

Adding the Tagline

The tagline is short, but based on some rules (the line's word count, word length, etc.), this impactful sentence is split into manageable pieces and styled dynamically to ensure it's always readable, regardless of its length or the canvas size.
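One note before the code: the excerpt references a splitText variable prepared earlier in the function. My assumption is that it comes from splitting the raw tagline on the [br] markers that show up in the filters below, roughly like this:

// Assumption: the model inserts [br] markers between intended lines,
// and raw_tagline is the tagline string as returned by analyzeContent
let splitText = raw_tagline.split("[br]").map((part) => part.trim());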

console.log("4.1 Text splitting");
if (splitText.length === 1) {

    const isItWiderThanHalf = ctx.measureText(splitText[0]).width > ((canvasWidth / 2) + 160);
    const wordCount = splitText[0].split(" ").length;

    if (isItWiderThanHalf && wordCount > 4) {

        const refactored_line = splitText[0].split(" ").reduce((acc, curr, i) => {
            if (i % 3 === 0) {
                acc.push([curr]);
            } else {
                acc[acc.length - 1].push(curr);
            }
            return acc;
        }, []).map((item) => item.join(" "));

        refactored_line[1] = "[s]" + refactored_line[1] + "[s]";

        splitText = refactored_line

    }
}

let tagline = splitText.filter(item => item !== '' && item !== '[br]' && item !== '[s]' && item !== '[/s]');
let headlineSentences = [];
// Tracks whether the first of two lines was rendered at a reduced size
let is2LinesAndPreviewsWasReduced = false;
let lineCounter = {
    total: 0,
    reduced_line_counter: 0,
    reduced_lines_indexes: []
}

console.log("4.2 Tagline Preparation", tagline);

// Turn the marked-up tagline into the final list of lines to draw
for (let i = 0; i < tagline.length; i++) {
    let line = tagline[i];

    if (line.includes("[s]") || line.includes("[/s]")) {

        // Strip the [s] markers and keep only the text between them
        const finalLine = line.split(/(\[s\]|\[\/s\])/).filter(item => item !== '' && item !== '[s]' && item !== '[/s]');

        const lineWidth = ctx.measureText(finalLine[0]).width
        const halfOfWidth = canvasWidth / 2;

        if (lineWidth > halfOfWidth && finalLine[0]) {

            // Break long emphasized lines into chunks of 2 or 3 words
            let splitted_text = finalLine[0].split(" ").reduce((acc, curr, i) => {

                const modulus = finalLine[0].split(" ").length >= 5 ? 3 : 2;
                if (i % modulus === 0) {
                    acc.push([curr]);
                } else {
                    acc[acc.length - 1].push(curr);
                }
                return acc;
            }, []);

            // Join each chunk of words back into a single line of text
            let splitted_text_arr = splitted_text.map((item) => item.join(" "));

            headlineSentences[i] = splitted_text_arr[0] + '/s/'

            if (splitted_text_arr[1]) {
                headlineSentences.splice(i + 1, 0, splitted_text_arr[1] + '/s/')
            }
        } else {
            headlineSentences.push("/s/" + finalLine[0] + "/s/")
        }


    } else {
        headlineSentences.push(line)
    }
}

console.log("5. Drawing text on canvas", headlineSentences);

const headlineSentencesLength = headlineSentences.length;
let textHeightAccumulator = 0;

for (let i = 0; i < headlineSentencesLength; i++) {
    headlineSentences = headlineSentences.filter(item => item !== '/s/');
    const nextLine = headlineSentences[i + 1];
    if (nextLine && /^\s*$/.test(nextLine)) {
        headlineSentences.splice(i + 1, 1);
    }

    let line = headlineSentences[i];

    if (!line) continue;
    let lineText = line.trim();

    let textY;

    ctx.font = " 72px OpenSans";

    const cleanedUpLine = lineText.includes('/s/') ? lineText.replace(/\s+/g, ' ') : lineText;
    const lineWidth = ctx.measureText(cleanedUpLine).width
    const halfOfWidth = canvasWidth / 2;

    lineCounter.total += 1

    const isLineTooLong = lineWidth > (halfOfWidth + 50);

    if (isLineTooLong) {

        if (lineText.includes(':')) {
            const split_line_arr = lineText.split(":")
            if (split_line_arr.length > 1) {
                lineText = split_line_arr[0] + ":";
                if (split_line_arr[1]) {
                    headlineSentences.splice(i + 1, 0, split_line_arr[1])
                }
            }
        }

        ctx.font = "52px OpenSans";

        lineCounter.reduced_line_counter += 1

        if (i === 0 && headlineSentencesLength === 2) {
            is2LinesAndPreviewsWasReduced = true
        }


        lineCounter.reduced_lines_indexes.push(i)

    } else {

        if (i === 0 && headlineSentencesLength === 2) {
            is2LinesAndPreviewsWasReduced = false
        }


    }

    if (lineText.includes("/s/")) {

        lineText = lineText.replace(/\/s\//g, "");

        if (headlineSentencesLength > (i + 1) && i < headlineSentencesLength - 1 && nextLine) {

            if (nextLine.slice(0, 2).includes("?") && nextLine.length < 3) {
                lineText += '?';
                headlineSentences.pop();
            }

            if (nextLine.slice(0, 2).includes(":")) {
                lineText += ':';
                headlineSentences[i + 1] = headlineSentences[i + 1].slice(2);
            }

        }

        let lineWidth = ctx.measureText(lineText).width


        let assignedSize;


        if (lineText.split(" ").length <= 2) {

            if (lineWidth > (canvasWidth / 2.35)) {

                // Wider two-word lines get a slightly smaller bold face
                ctx.font = "80px OpenSansBold";

                assignedSize = 80

            } else {

                ctx.font = "84px OpenSansBold";

                assignedSize = 84

            }
        } else {


            if (i === headlineSentencesLength - 1 && lineWidth < (canvasWidth / 2.5) && lineText.split(" ").length === 3) {

                ctx.font = "84px OpenSansBold";
                assignedSize = 84

            } else {

                lineCounter.reduced_line_counter += 1;

                ctx.font = "52px OpenSansBold";
                assignedSize = 52

            }

            lineCounter.reduced_lines_indexes.push(i)

        }

        lineWidth = ctx.measureText(lineText).width



        if (lineWidth > (canvasWidth / 2) + 120) {

            if (assignedSize === 84) {
                ctx.font = "72px OpenSansBold";
            } else if (assignedSize === 80) {
                ctx.font = "64px OpenSansBold";

                textHeightAccumulator += 8
            } else {
                ctx.font = "52px OpenSansBold";
            }
        }



    } else {

        const textWidth = ctx.measureText(lineText).width


        if (textWidth > (canvasWidth / 2)) {
            ctx.font = "44px OpenSans";
            textHeightAccumulator += 12
        } else if (i === headlineSentencesLength - 1) {
            textHeightAccumulator += 12
        }

    }

    ctx.fillStyle = "white";
    ctx.textAlign = "center";

    const textHeight = ctx.measureText(lineText).emHeightAscent;

    textHeightAccumulator += textHeight;

    if (headlineSentencesLength == 3) {
        textY = (canvasHeight / 3)
    } else if (headlineSentencesLength == 4) {
        textY = (canvasHeight / 3.5)
    } else {
        textY = 300
    }

    textY += textHeightAccumulator;

    const words = lineText.split(' ');
    console.log("words", words, lineText, headlineSentences)
    const capitalizedWords = words.map(word => {
        if (word.length > 0) return word[0].toUpperCase() + word.slice(1)
        return word
    });
    const capitalizedLineText = capitalizedWords.join(' ');

    ctx.fillText(capitalizedLineText, canvasWidth / 2, textY);

}


Finally, the canvas is converted into a PNG buffer.

const buffer = canvas.toBuffer("image/png");
return buffer;

Finally!!! Uploading the Image to WordPress

After successfully generating the image buffer, the uploadImageToWordpress function is called.

This function handles the heavy lifting of sending the image to WordPress through its REST API, starting with encoding the image for upload.

The function first prepares the tagline for use as the filename by cleaning up spaces and special characters:

const createSlug = (string) => {
    return string.toLowerCase().replace(/ /g, '-').replace(/[^\w-]+/g, '');
};

const image_name = createSlug(tagline);

The image buffer is then converted into a Blob object to make it compatible with the WordPress API:

const file = new Blob([buffer], { type: "image/png" });

Preparing the API Request

Using the encoded image and tagline, the function builds a FormData object, and I add optional metadata such as alt_text for accessibility and a caption for context.

formData.append("file", file, image_name + ".png");
formData.append("alt_text", `${tagline} image`);
formData.append("caption", "Uploaded via API");

For authentication, the username and application password are encoded in Base64 and included in the request headers:


const credentials = `${username}:${app_password}`;
const base64Encoded = Buffer.from(credentials).toString("base64");


Sending the Image

A POST request is made to the WordPress media endpoint with the prepared data and headers, and after awaiting the response I check for success or errors.

const response = await fetch(`${wordpress_url}wp-json/wp/v2/media`, {
    method: "POST",
    headers: {
        Authorization: "Basic " + base64Encoded,
        // Note: no manual Content-Type header here; fetch sets the correct
        // multipart/form-data header (including the boundary) for a FormData body
    },
    body: formData,
});

if (!response.ok) {
    const errorText = await response.text();
    throw new Error(`Error uploading image: ${response.statusText}, Details: ${errorText}`);
}


If successful, I return that same media response from the Lambda.

This is how my Lambda looks in the end:

import { analyzeContent } from './ai-analysis.js';
import { uploadImageToWordpress, generateImage } from './draw.js';

export const handler = async (event) => {
    try {
        const { title: request_title, content, backend, app_password, return_image } = JSON.parse(event.body);

        if (typeof request_title !== 'string' || typeof content !== 'string') {
            return {
                statusCode: 400,
                body: JSON.stringify({ error: 'Invalid input' }),
            };
        }

        const { tagline, company_logo, sentiment } = await analyzeContent({ title: request_title, content });

        const BACKENDS = ["https://content.x.com/", "https://content.y.com/", "https://content.z.com/"]

        console.log("1. SUCCESSFUL Tagline Creation", { tagline, sentiment, backend }, BACKENDS.includes(backend));

        const buffer = await generateImage({
            title: tagline,
            company_logo: company_logo,
            sentiment: sentiment,
        });

        console.log("3. SUCCESSFUL Image Creation for", backend);
        if (buffer) {

            const response = await uploadImageToWordpress(backend, buffer, tagline, app_password);

            if (response.status !== 201) {

                throw new Error("Image Upload Failed" + response.statusText);
            }

            const data = await response.json();

            console.log("4. SUCCESSFUL Image Upload for", backend);
            return {
                statusCode: 200,
                headers: {
                    'Content-Type': 'application/json',
                },
                // API Gateway expects the proxy response body to be a string
                body: JSON.stringify(data),
            };

        }

        // If image generation produced no buffer, surface it as an error
        throw new Error("Image generation failed");
    } catch (error) {
        return {
            statusCode: 500,
            body: JSON.stringify({ error: error.message }),
        };
    }
};


This is a sample image produced by my script. It's not used in production; it was created with generic assets just for this example.

Aftermath

Some time has passed, and everybody is happy: we no longer have shoddy or empty-looking image-less articles, the generated images are a close match to the ones the designer crafts, and the designer is happy that he can focus solely on designing for other marketing efforts across the company.

But then a new problem arose: sometimes the client did not like the generated image, and he would ask me to spin up my script to generate a new one for a specific post.

This brought me to my next side-quest: A WordPress Plugin to Manually Generate a Featured Image Using Artificial Intelligence for a Specific Post.
