In my previous post I described how AI tools have revolutionized my development workflow. Toward the end of the blog, I shared the step-by-step on...
You forgot to mention hardware requirements for different models.
That's also what I'm curious about - maybe if I want to do this I first need a big hardware upgrade ... I think until then I'll have to pass on this.
To @anton_maryukhnenko_1ef094 's point, I need to update this blog to mention the general hardware requirements for Llama, at least. I think that would be helpful to others.
@leob I took a chance on my old dying comp with Ollama and Llama3.2 and it ended up working. You should have heard the fan though, haha. It just so happened I NEEDED an upgrade, so my new/current comp is more capable and has been faring pretty well. If I do end up running into any hiccups, I will definitely try and share.
Thanks ... I'd probably end up wanting a SEPARATE "box" (hardware) dedicated to it and optimized for it (with a GPU and all that), so as not to "burden" my main workstation - then do the "queries" over a fast local network!
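If you do go that route, the Ollama Python client can point at a remote machine, so the dedicated box just needs Ollama listening on the network (you'd also have to let the server bind to the LAN interface rather than only localhost). A minimal sketch, assuming a hypothetical address of 192.168.1.50, Ollama's default port 11434, and a llama3.2 model already pulled on that box:

```python
from ollama import Client

# Point the client at the dedicated Ollama box on the local network.
# The address is a placeholder; 11434 is Ollama's default port.
client = Client(host="http://192.168.1.50:11434")

response = client.chat(
    model="llama3.2",  # any model already pulled on that machine
    messages=[{"role": "user", "content": "Summarize this error log for me."}],
)
print(response["message"]["content"])
```

The main workstation then only pays the network cost; all the GPU work stays on the other box.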
Hi, excellent post! I wonder why you don't try a more friendly interface like LLM Studio.
Thanks!
I personally have upgraded to using Open Web UI. I'm actually in the process of writing up a blog on the steps to get that working on your local machine. :-)
It is SOOOO much better than using the command line, but the command-line interface was a good start for me when I was first experimenting with local LLMs.
Haven't tried LLM Studio, but I'm going to look into it. How do you like it?
Great post ... just curious: why would I want this, instead of using ChatGPT or other cloud-hosted/online AI tools?
No censorship, no limits on questions, and the most important thing: privacy!
Makes sense - it's just that the hardware requirements might be a bit of a concern ...
Def privacy, but I also use it when developing applications: the Ollama Python library lets my local applications access the LLMs I want directly. Knowing I have no limits on how many requests I can make is very nice.
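For anyone curious what that looks like in code, here's a minimal sketch, assuming the ollama package is installed, the local Ollama server is running on its default port, and a llama3.2 model has already been pulled (the prompt is just an example):

```python
import ollama

# Talks to the local Ollama server (default: http://localhost:11434);
# no API key, no rate limits, nothing leaves the machine.
response = ollama.chat(
    model="llama3.2",  # any locally pulled model works here
    messages=[
        {"role": "user", "content": "Write a docstring for a function that parses CSV rows."},
    ],
)
print(response["message"]["content"])
```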
Hey, I sent you a message on X. Please reply when you can!