
deep shah

🚀 How to Run Dolphin Uncensored AI Locally: A Step-by-Step Guide! 🦾

🤖 Introduction

Ever wanted to run a powerful AI model without limits right on your own machine? Enter Dolphin Uncensored AI: a beast of a Large Language Model (LLM) that offers unfiltered AI responses. Whether you're a tech enthusiast, researcher, or just curious, this guide will walk you through setting it up on your own PC using Ollama and Chatbox AI for a sleek chat experience. 🏆

⚠️ Disclaimer: This guide is for educational purposes only. Whatever you do with it is on you, so use AI responsibly! 🚨


🌊 Why Run Dolphin AI?

Dolphin Uncensored is a modified AI model that removes content restrictions, allowing you to explore AI responses without filters. If you're not up for a local setup, you can also try Venice AI (Premium Plan required) for a cloud-based option. ☁️


๐Ÿ–ฅ๏ธ PC Requirements for Running Dolphin AI

Before diving in, make sure your system can handle the load. Hereโ€™s what youโ€™ll need:

🔹 Minimum Requirements:

  • 💻 CPU: AMD Ryzen 5 / Intel i5 (6th Gen or newer)
  • 🔗 RAM: 16GB DDR4
  • 🎮 GPU: NVIDIA GTX 1660 / RTX 2060 (6GB VRAM)
  • 💾 Storage: SSD with 50GB free space
  • 🖥️ OS: Windows 10/11, macOS, or Linux

🔥 Recommended Specs (For Best Performance!):

  • 💻 CPU: AMD Ryzen 9 / Intel i7 (10th Gen or newer)
  • 🔗 RAM: 32GB+ DDR4/DDR5
  • 🎮 GPU: NVIDIA RTX 3090 / 4090 or AMD 7900XTX (24GB+ VRAM)
  • 💾 Storage: NVMe SSD (100GB free for models and cache)
  • 🖥️ OS: Windows 11, macOS, or latest Linux distros

๐Ÿ‹๏ธ Model-Specific Requirements:

Dolphin AI comes in different model sizes, requiring varying hardware power:

  • ๐Ÿฌ Dolphin-Llama3:8B โ†’ Requires 5GB VRAM
  ollama run dolphin-llama3:8b
Enter fullscreen mode Exit fullscreen mode
  • ๐Ÿ‹๏ธโ€โ™‚๏ธ Dolphin-Llama3:70B โ†’ Requires 40GB VRAM (VERY demanding!)
  ollama run dolphin-llama3:70b
Enter fullscreen mode Exit fullscreen mode

Pro Tip: No beefy GPU? You can still run it on CPU, but expect snail-speed responses. ๐ŸŒ
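Those VRAM figures follow a common rule of thumb: a quantized model needs roughly (parameters × bits-per-weight ÷ 8) bytes, plus some overhead for the KV cache and runtime buffers. Here's a quick sketch of that math; the `est_vram_gb` helper and its 20% overhead factor are illustrative assumptions, not official numbers:

```python
# Back-of-the-envelope VRAM estimate for a quantized LLM.
# Assumption: bits/8 bytes per parameter plus ~20% overhead for the
# KV cache and runtime buffers (a rule of thumb, not an official figure).

def est_vram_gb(params_billions: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Estimate GPU memory (in GB) needed to hold a quantized model."""
    return round(params_billions * bits / 8 * overhead, 1)

print(est_vram_gb(8))   # 8B model at 4-bit: ~4.8 GB, in line with the 5GB figure above
print(est_vram_gb(70))  # 70B model at 4-bit: ~42 GB, in line with the 40GB figure above
```

Run it with different `bits` values to see why heavier quantization makes a model fit on smaller GPUs.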


⚡ Installation & Setup Guide

Now, let's get Dolphin AI up and running!

Step 1๏ธโƒฃ: Install Ollama ๐Ÿ› ๏ธ

Ollama is a must-have for managing LLMs locally.

  1. Download & install Ollama from https://ollama.com.
  2. Open a terminal (Command Prompt/PowerShell/Terminal) and verify:
   ollama --version
Enter fullscreen mode Exit fullscreen mode

Step 2๏ธโƒฃ: Get the Dolphin AI Model ๐ŸŒŠ

  1. Pull the Dolphin AI model using Ollama:
   ollama pull dolphin-llama3
Enter fullscreen mode Exit fullscreen mode
  1. Once downloaded, run Dolphin AI locally:
   ollama run dolphin-llama3:8b  # For 8B model
Enter fullscreen mode Exit fullscreen mode
   ollama run dolphin-llama3:70b  # For 70B model (requires high-end GPU!)
Enter fullscreen mode Exit fullscreen mode
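Beyond the interactive CLI, Ollama also serves a local REST API (by default at http://localhost:11434), which is handy if you want to script prompts instead of typing them. A minimal sketch in Python, assuming the server is running and the model has been pulled; the `build_request` and `ask` helper names are mine:

```python
# Minimal client for Ollama's local REST API.
# Assumes Ollama is serving on its default port 11434 and that
# `ollama pull dolphin-llama3` has already been run.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generate request as JSON bytes."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("dolphin-llama3:8b", "Say hello in one sentence."))
```

Setting `"stream": False` returns one complete JSON object instead of a stream of partial tokens, which keeps the client simple.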

Step 3๏ธโƒฃ: Install Chatbox AI (Optional, but Awesome!) ๐Ÿ’ฌ

Want a slick chat UI instead of using the command line? Chatbox AI has you covered! ๐Ÿ˜

  1. Download Chatbox AI from https://chatboxai.app/en#download.
  2. Install & launch the app.
  3. Navigate to Settings > Local LLM and link it to Ollama.
  4. Start chatting! ๐ŸŽ‰
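Chatbox talks to Ollama over the same local HTTP API, so if the app can't find your models, first confirm the server is actually listening on its default port 11434. A small check you can run, with `ollama_is_up` being an illustrative helper name of mine:

```python
# Quick connectivity check before wiring Chatbox up to Ollama.
# Assumes Ollama's default port 11434; a bare GET to the root URL
# is answered with "Ollama is running" when the server is up.
import urllib.error
import urllib.request

OLLAMA_BASE = "http://localhost:11434"

def ollama_is_up(base_url: str = OLLAMA_BASE, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_up())
```

If this prints `False`, start the server (launch the Ollama app, or run `ollama serve`) before pointing Chatbox at it.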

🎯 Final Thoughts

Running Dolphin AI locally gives you complete control over your AI experience. Whether you want to use it for research, learning, or experimentation, self-hosting puts the power in your hands. 🦾

If you'd rather skip the setup, you can still try it in the cloud via Venice AI (Premium required). ☁️

What do you think? Would you like more AI self-hosting guides? Let me know in the comments! ⬇️
