In this quick tutorial, we will create a no-code AI agent for generating logos using Aiflowly.com.
Workflow
Our AI agent will be able to:
- Read a short user input describing a logo topic.
- Pass it to a text-to-text AI model (we will use OpenAI's GPT models) to generate a detailed prompt for image generation.
- Pass the generated prompt to several text-to-image models.
- Render the output images once all models have finished.
Text-to-image models
For this tutorial, we will use the following text-to-image models:
- Stability AI / Stable Diffusion v2
- Stability AI / Stable Diffusion v3
- Kandinsky 2.2
- ByteDance / sdxl-lightning-4step
- Stability AI / SDXL
Text-to-text models
We will use OpenAI's GPT models to generate an advanced text-to-image prompt. Currently, Aiflowly supports the following GPT models:
- GPT-4o
- GPT-4-turbo
- GPT-3.5-turbo
To improve the output of these models, we will use the following system prompt:
For a given topic, write a detailed text-to-image prompt that will be used by another AI model (text-to-image AI model).
We could refine this system prompt further, or additionally generate a negative prompt to feed into the text-to-image models.
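To make this step concrete, here is a minimal sketch of what the prompt-generation node boils down to, written against the OpenAI Python SDK. The function name and defaults below are ours for illustration only; Aiflowly performs this step for you without any code.

```python
# Minimal sketch of the prompt-generation step, assuming the OpenAI Python SDK.
# Illustrative only -- not Aiflowly's internals.
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the env

client = OpenAI()

SYSTEM_PROMPT = (
    "For a given topic, write a detailed text-to-image prompt that will be "
    "used by another AI model (text-to-image AI model)."
)

def build_image_prompt(topic: str, model: str = "gpt-4o") -> str:
    """Expand a short logo topic into a detailed text-to-image prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": topic},
        ],
    )
    return response.choices[0].message.content

# Example: build_image_prompt("minimalist logo for a space-themed coffee brand")
```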
Agent's no-code flow
The agent's flow consists of three simple steps:
- User input
- Text-to-image prompt
- Image generation (repeated for each image model)
It looks like the following:
Aiflowly's flow execution system runs through each node, automatically generating the required inputs and outputs, and renders the results as it progresses.
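For reference, the repeated image-generation step behaves roughly like the loop below, feeding the same GPT-generated prompt to every model. This is only a sketch that assumes Replicate-hosted versions of the models; the model identifiers are assumptions, and Aiflowly runs the equivalent nodes for you without code.

```python
# Rough sketch of the repeated image-generation step: one call per model,
# all fed the same GPT-generated prompt. Not Aiflowly's actual implementation.
import replicate  # pip install replicate; expects REPLICATE_API_TOKEN in the env

# Assumed Replicate identifiers for the models used in this tutorial.
IMAGE_MODELS = [
    "stability-ai/stable-diffusion",      # Stable Diffusion v2 (assumed ID)
    "stability-ai/stable-diffusion-3",    # Stable Diffusion v3 (assumed ID)
    "ai-forever/kandinsky-2.2",           # Kandinsky 2.2 (assumed ID)
    "bytedance/sdxl-lightning-4step",     # SDXL-Lightning 4-step
    "stability-ai/sdxl",                  # SDXL (assumed ID)
]

def generate_logos(prompt: str) -> dict:
    """Run the GPT-generated prompt through every model and collect the outputs."""
    results = {}
    for model in IMAGE_MODELS:
        # Some models may require pinning a specific version hash instead of
        # the bare "owner/name" identifier shown here.
        results[model] = replicate.run(model, input={"prompt": prompt})
    return results
```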
Conclusion
In this example, we used Aiflowly to build a simple AI agent and workflow that chains the outputs of multiple AI models.
It is worth noting that we used the default AI models and default parameters; fine-tuning these parameters could yield better results.
You can generate your own AI workflows using Aiflowly.com.
Follow Aiflowly on X for feature demos and updates!