
Rohit Ghumare

Posted on • Originally published at ghumare64.Medium

My first AI Food Assistant

This technical guide walks through deploying an AI-powered Food Recipe Assistant application on Sevalla's Application Hosting platform. We'll cover the deployment process, configuration, and best practices for hosting a Python FastAPI application with AI capabilities.
Posted on Medium first: Click here to check the full blog


Project Overview

The AI Food Recipe Assistant is a modern web application that leverages:

  • FastAPI for the backend API
  • OpenAI's GPT-3.5 and DALL-E 3 for AI-powered recipe and image generation
  • HTML/TailwindCSS/AlpineJS for the frontend
  • Environment variables for secure configuration
  • Docker for containerization

The application code is available in the AI Food Recipe Assistant GitHub repository.

AI Application Features & Output


Intelligent Recipe Generation
Our deployed AI Food Recipe Assistant demonstrates powerful AI capabilities:

Natural Language Understanding: Users can request recipes in plain English (e.g., “vegan chocolate lava cake”)
Dietary Customization: Automatically adapts recipes for various preferences:

  1. Vegetarian/Vegan options
  2. Gluten-free alternatives
  3. Keto-friendly versions
  4. Other dietary restrictions

Cuisine Fusion: Supports multiple cuisine types and cultural adaptations
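The dietary customization above boils down to prompt construction before the request ever reaches the model. Here is a minimal sketch of how that might look; the modifier strings and the `build_recipe_prompt` function name are illustrative, not taken from the repository:

```python
from typing import Optional

# Illustrative dietary modifiers; the actual prompts in the repo may differ.
DIETARY_MODIFIERS = {
    "vegan": "Use only plant-based ingredients (no meat, dairy, eggs, or honey).",
    "vegetarian": "Exclude meat and fish; dairy and eggs are allowed.",
    "gluten-free": "Avoid wheat, barley, rye, and any gluten-containing ingredients.",
    "keto": "Keep net carbs low; favor fats and proteins over sugars and starches.",
}

def build_recipe_prompt(dish: str, diet: Optional[str] = None,
                        cuisine: Optional[str] = None) -> str:
    """Compose the natural-language prompt sent to the chat model."""
    parts = [f"Create a detailed recipe for: {dish}."]
    if diet in DIETARY_MODIFIERS:
        parts.append(DIETARY_MODIFIERS[diet])
    if cuisine:
        parts.append(f"Adapt the recipe to {cuisine} cuisine.")
    parts.append("Include ingredients with measurements, step-by-step instructions, "
                 "cooking times, serving suggestions, and nutritional information.")
    return " ".join(parts)
```

The resulting string is what gets passed to GPT-3.5 as the user message, so every preference is just another sentence appended to the prompt.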

AI-Generated Content

Each recipe request generates the following:

Detailed Recipe Information:

  • Ingredient lists with precise measurements
  • Step-by-step cooking instructions
  • Cooking times and temperatures
  • Serving suggestions
  • Nutritional information

Visual Content:

  • DALL-E 3 generated photorealistic food images
  • Appetizing presentation suggestions
  • Visual cooking guides

Learning Resources:

  • Cooking technique explanations
  • Ingredient substitution options
  • Tips for perfect execution

Sample Output

Here's an example of what the application generates for a "Vegan Italian Choco Lava Cake":

{
    "recipe": {
        "title": "Vegan Italian Choco Lava Cake",
        "description": "Indulge in the decadence of a vegan Italian-style choco lava cake that will impress even the most discerning dessert lovers!",
        "ingredients": [
            "1 cup all-purpose flour",
            "1/2 cup unsweetened cocoa powder", 
            "1/2 cup sugar",
            "1/2 cup plant-based milk",
            "// ... other ingredients"
        ],
        "instructions": [
            "1. Preheat oven to 375°F (190°C)",
            "2. Mix dry ingredients in a bowl",
            "// ... detailed steps"
        ]
    },
    "image_url": "https://ai-generated-image.example/vegan-lava-cake.jpg",
    "learning_resources": [
        {
            "type": "video",
            "title": "Master the Art of Vegan Lava Cakes",
            "url": "https://youtube.com/cookingtutorials"
        }
    ]
}
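A client can consume this JSON response with a few lines of Python. The field names below follow the sample above; the abridged payload is just for demonstration:

```python
import json

# Abridged version of the sample response shown above.
response_text = """
{
    "recipe": {
        "title": "Vegan Italian Choco Lava Cake",
        "ingredients": ["1 cup all-purpose flour", "1/2 cup unsweetened cocoa powder"],
        "instructions": ["1. Preheat oven to 375F (190C)", "2. Mix dry ingredients in a bowl"]
    },
    "image_url": "https://ai-generated-image.example/vegan-lava-cake.jpg"
}
"""

data = json.loads(response_text)
recipe = data["recipe"]
print(recipe["title"])             # Vegan Italian Choco Lava Cake
print(len(recipe["ingredients"]))  # 2
print(data["image_url"])           # https://ai-generated-image.example/vegan-lava-cake.jpg
```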

Let’s deploy this…

Prerequisites

Before deploying to Sevalla, ensure you have:

  1. A Sevalla account
  2. The application code in a Git repository
  3. OpenAI API key for AI functionality

Local Deployment Steps

1. Application Setup

First, prepare the application for deployment by running it locally:

  1. Clone the repository
   git clone https://github.com/rohitg00/ai-food-recipe-assistant.git
   cd ai-food-recipe-assistant
  2. Set up Python environment
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
  3. Dockerfile Setup

    The application includes a Dockerfile for containerized deployment:

    FROM python:3.9-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    EXPOSE 8000
    CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
    
  4. Configure environment variables

   cp .env.example .env
   # Edit .env and add your OpenAI API key:
   # OPENAI_API_KEY=your_api_key_here
  5. Run the application
   uvicorn main:app --reload

Feel free to create your own application by referring to the quick-start examples available in the Sevalla docs.

Sevalla Deployment Steps

2. Deploying to Sevalla

Sevalla is easy to use and gets deployments up in seconds. In this step, we will create an application by connecting the GitHub repository that already contains the AI Food Recipe Assistant code.

  1. Log into Sevalla dashboard
  2. Click "Applications" > "Add application"
  3. Select "Git repository" and connect to your repository
  4. Choose deployment settings:
    • Repository: your-repo-url
    • Branch: main
    • Build Environment: Python
    • Region: Choose nearest to your users

3. Environment Variables

We will now add OPENAI_API_KEY under “Environment variables” so the application can generate AI-powered recipes.

Configure the required environment variables in Sevalla:

  1. Navigate to "Environment variables"
  2. Add OPENAI_API_KEY with your API key
  3. Select "Available during runtime" and "Available during build process"
  4. Save changes
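Inside the application, the key should be read from the environment at startup, failing fast if it is missing. A minimal sketch (the helper name and error message are illustrative):

```python
import os

def get_openai_api_key() -> str:
    """Read the OpenAI API key from the environment, failing fast if absent."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Add it in Sevalla under "
            "'Environment variables' and mark it available during runtime."
        )
    return key
```

Failing fast here means a missing key shows up as a clear startup error in the logs instead of a cryptic failure on the first recipe request.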

4. Deployment Configuration

  • Enable the Dockerfile option in the build section so it is used to configure the web process automatically.
  • Sample Logs
🔨 Building Docker Image
[#9] COPY . .
[#9] DONE (0.7s)

📦 Exporting to Image
- Exporting layers (2.1s)
- Writing image sha256:957405d9ec2ff6a5014705b07809593ed17ea8a6ec4c09433f262f51e42eec6b
- Naming to europe-west1-docker.pkg.dev/kinsta-app-hosting/kc-apps/97ad2f04-172c-4a35-8dee-933c1134f27c/ai-food-assistant-z11yi:eb339d69-56d2-4a64-9a64-0809d752aeb4
✅ Docker image built successfully

⬆️ Pushing Docker Image
Repository: europe-west1-docker.pkg.dev/kinsta-app-hosting/kc-apps/97ad2f04-172c-4a35-8dee-933c1134f27c/ai-food-assistant-z11yi

Layer Status:
- 32649fbbeda8: Pushed
- a206824f0a6e: Pushed
- 47e66bca131f: Pushed
- 3a8ec2a73c4d: Pushed
- aacba17e24d9: Layer already exists
- f751ad7c65c4: Layer already exists
- 7822e749b484: Layer already exists
- c3548211b826: Layer already exists

Digest: sha256:2cca92185beca97a2dda1507178c502f5fafefda7befd090109d9b2feb014100
✅ Docker image pushed successfully

🚀 Deployment
⏩ Deploying Web process...

Server Startup:
- Uvicorn running on http://0.0.0.0:8080
- Started reloader process [1] using StatReload
- Started server process [26]
- Application startup complete

Warning:
Valid config keys have changed in V2:
'schema_extra' has been renamed to 'json_schema_extra'

Sevalla automatically:

  • Detects Python requirements from requirements.txt
  • Sets up the web process using the Dockerfile
  • Configures the PORT environment variable
  • Enables HTTPS and provides a domain
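Because Sevalla injects the PORT environment variable, the application should bind to it rather than a hard-coded port (note the sample logs above show Uvicorn on 8080, not the Dockerfile's 8000). A sketch of how the startup code might honor it:

```python
import os

def get_port(default: int = 8000) -> int:
    """Return the port injected via the PORT env var, or a local default."""
    return int(os.environ.get("PORT", default))

if __name__ == "__main__":
    # Assumes uvicorn is installed (it is in the app's requirements.txt).
    import uvicorn
    uvicorn.run("main:app", host="0.0.0.0", port=get_port())
```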

Final Output


Application Architecture on Sevalla

The deployed application architecture includes:

  • Web Process: Runs the FastAPI application
  • Environment Variables: Securely stores configuration
  • Cloudflare Integration: Provides CDN and DDoS protection
  • Auto-scaling: Handles traffic spikes efficiently

Monitoring and Management

Sevalla provides several tools for application management:

  1. Logs: Access application logs in real-time
  2. Analytics: Monitor application performance
  3. Web Terminal: Debug and run commands directly
  4. Process Management: Control application processes

Security Features

The deployment includes several security measures:

  • SSL/TLS encryption
  • DDoS protection through Cloudflare
  • Secure environment variable storage
  • Isolated application environment

Performance Optimizations

Sevalla automatically implements several performance features:

  1. CDN Integration: Global content delivery
  2. Edge Caching: Improved response times
  3. Auto-scaling: Dynamic resource allocation
  4. Load Balancing: Distributed traffic handling

Deployment Verification

After deployment, verify the application:

  1. Access the provided domain (e.g., https://ai-food-assistant-ll3mo.kinsta.app/)
  2. Test the recipe generation endpoint
  3. Monitor application logs for any issues
  4. Verify environment variables are properly set
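The endpoint check in step 2 can be scripted. This sketch only builds the request so it stays offline; the /generate path and query parameter are hypothetical names, not confirmed from the repo:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_recipe_request(base_url: str, query: str) -> Request:
    """Build a GET request against a (hypothetical) /generate endpoint."""
    params = urlencode({"query": query})
    return Request(f"{base_url.rstrip('/')}/generate?{params}")

req = build_recipe_request(
    "https://ai-food-assistant-ll3mo.kinsta.app/", "vegan chocolate lava cake"
)
print(req.full_url)
# https://ai-food-assistant-ll3mo.kinsta.app/generate?query=vegan+chocolate+lava+cake
```

To actually send it, pass the request to urllib.request.urlopen and inspect the JSON body and status code.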

Troubleshooting Tips

Common issues and solutions:

  1. Port Configuration: Ensure the application uses the PORT environment variable
  2. Build Failures: Check requirements.txt for compatibility
  3. Runtime Errors: Monitor logs for application errors
  4. Environment Variables: Verify all required variables are set

Why Sevalla for AI Application Deployment?

Building and deploying AI applications can be challenging. Whether you're a developer working on a side project or part of a team building the next big AI product, you need a reliable and easy way to get your app into production. That's where Sevalla comes in - let me show you why it's the perfect choice for deploying AI applications:

Cost Optimization

  • Pay-as-you-grow model: Only pay for resources you actually use, with no upfront infrastructure costs
  • Reduced DevOps overhead: Eliminate the need for dedicated infrastructure teams
  • Automated resource scaling: Optimize costs during low-traffic periods
  • Resource optimization: Automatic scaling prevents over-provisioning
  • No vendor lock-in: Standard container architecture ensures portability

Enterprise-Ready Infrastructure

  • 25+ global data centers: Deploy close to your users for optimal performance
  • Google Cloud Platform backbone: Enterprise-grade infrastructure and reliability
  • Cloudflare Enterprise: Advanced DDoS protection and WAF included
  • Compliant infrastructure: Meets industry security standards
  • Private networking: Secure internal connections between applications and databases

Developer Experience

  • 5-minute deployment: From code to production in minutes
  • Multi-framework support: Deploy any framework or language
  • Built-in CI/CD: Automated deployments from Git
  • Development tools: Web terminal, real-time logs, and metrics
  • Database integration: Managed databases with automatic backups

Operational Excellence

  • 99.9% SLA-backed uptime: Enterprise-grade reliability
  • Zero-downtime deployments: Continuous availability during updates
  • Auto-scaling: Handle traffic spikes automatically
  • Global CDN: Optimized content delivery across regions
  • 24/7 expert support: Technical assistance when you need it

AI-Optimized Features

  • Container-native platform: Ideal for AI/ML workloads
  • Edge computing capabilities: Reduced latency for AI operations
  • High-performance compute: CPU and memory-optimized instances
  • Automatic failover: Built-in high availability
  • Horizontal scaling: Handle viral growth seamlessly

Business Acceleration

  • Faster time-to-market: Launch products without infrastructure delays
  • Resource efficiency: Focus on product development, not DevOps
  • Enterprise security: Built-in compliance and protection
  • Global reach: Deploy worldwide in minutes
  • Scalability on demand: Grow without infrastructure constraints

Conclusion

Sevalla provides an affordable hosting platform for deploying AI applications with minimal configuration. The platform handles infrastructure management, allowing developers to focus on application development with easy deployments and the required integrations. The AI Food Recipe Assistant demonstrates how quickly you can deploy a modern AI-powered application with features like:

  • Automated deployment from Git
  • Container orchestration
  • Environment variable management
  • SSL/TLS security
  • CDN integration
  • Performance optimization

For more information about hosting applications on Sevalla, refer to their official documentation.

