Today, we are covering 25 side projects that you can build using Python and AI models.
Most of these projects come with production-level code, so you can learn a lot.
Let's do it!
Taipy - Data and AI algorithms into production-ready web applications.
Most of the initial projects use Taipy, so let's cover the concept before diving into the use cases.
Taipy is an open source Python library for easy, end-to-end application development, featuring what-if analyses, smart pipeline execution, built-in scheduling, and deployment tools.
To be clear, Taipy is used for creating a GUI interface for Python-based applications and improving data flow management.
The key is performance, and Taipy is built for it.
While Streamlit is a popular tool, its performance can decline significantly when handling large datasets, making it impractical for production-level use.
Taipy, on the other hand, offers simplicity and ease of use without sacrificing performance. By trying Taipy, you'll experience firsthand its user-friendly interface and efficient handling of data.
Taipy has a lot of integration options and connects effortlessly with leading data platforms.
Get started with the following command.
pip install taipy
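To give you a taste of the API before the demos, here is a minimal, hedged sketch of a Taipy page (the variable names are my own, not from any demo below): the GUI is declared in a Markdown-like syntax and bound to Python variables.

```python
from taipy.gui import Gui

name = "world"

# Visual elements use the <|...|> syntax and bind to Python variables
page = """
# Hello Taipy!
Enter your name: <|{name}|input|>
Hello <|{name}|text|>
"""

Gui(page).run()
```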
They have also improved performance using distributed computing. The best part is that Taipy and all its dependencies are now fully compatible with Python 3.12, so you can work with the most up-to-date tools and libraries in your Taipy projects.
You can read the docs.
Another useful thing is that the Taipy team has provided a VSCode extension called Taipy Studio to accelerate the building of Taipy applications.
If you want to read a blog post that walks through a codebase structure, check out Create a Web Interface for your LLM in Python using Taipy on HuggingFace.
It is generally tough to try out new technologies, but Taipy has provided 10+ demo tutorials with code & proper docs for you to follow along. I will discuss some of these projects in detail!
The use cases are amazing so make sure to check them out.
Taipy has 8.5k+ stars on GitHub and is on the v3.1 release, so they are constantly improving.
1. Realtime Pollution Dashboard
A use case measuring air quality with sensors around a factory, showcasing Taipy's ability to dashboard streaming data.
The data is generated on another server and sent to this Taipy application via a WebSocket.
Taipy then processes the data and displays it on a dashboard.
The dashboard is updated in real time as new data is received.
If you want a tutorial on visualizing streamed data, check out the docs on multithreading. They discuss how to create a sender script and a receiver script, including socket parameters and more. The example involves both frontend and backend concepts; a rough sketch of the receiver side is shown below.
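Here is a minimal, hedged sketch of that pattern: a background thread reads values from a socket and hands them to the GUI thread. The host, port, and payload format are assumptions for illustration; the actual demo defines its own.

```python
import json
import socket
from threading import Thread

from taipy.gui import Gui, State, invoke_callback

HOST, PORT = "127.0.0.1", 5050  # assumed socket parameters for illustration

data_points: list = []

def update_chart(state: State, value):
    # Reassignment is what tells Taipy to refresh the bound visual elements
    state.data_points = state.data_points + [value]

def listen(gui: Gui, state_id: str):
    # Receiver thread: read values from the sender and push them to the GUI
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((HOST, PORT))
        while payload := sock.recv(1024):
            invoke_callback(gui, state_id, update_chart, [json.loads(payload)])

gui = Gui(page="<|{data_points}|chart|>")
# In the real app, a state_id captured in on_init identifies the client to update:
# Thread(target=listen, args=(gui, state_id), daemon=True).start()
gui.run()
```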
Check out the live demo.
Star Realtime Pollution Dashboard ⭐️
2. Fraud Detection
A Taipy Application that analyzes credit card transactions to detect fraud.
It shows a list of credit card transactions.
The user can select a date range to predict fraud.
The app will then use an XGBoost model to mark potentially fraudulent transactions in red or yellow.
The user can select a transaction to see an explanation of the model's prediction, as well as the client's other transactions.
The user can also choose the model's threshold: the model output above which a transaction is considered fraudulent. The user can pick a threshold based on the displayed confusion matrix and by looking at the False Positive and False Negative transactions.
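Here is a minimal, hedged sketch of the core idea (an XGBoost classifier plus a tunable decision threshold), using a synthetic stand-in for the transactions dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic, heavily imbalanced stand-in for credit card transactions
X, y = make_classification(n_samples=2000, weights=[0.97], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier().fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]  # fraud probability per transaction

threshold = 0.5  # the app exposes this as a user-adjustable value
flagged = proba >= threshold
print(f"Flagged {flagged.sum()} of {len(flagged)} transactions as potential fraud")
```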
Check out the live demo.
3. Covid Dashboard
This uses a Covid dataset for the year 2020.
Pages show different graphs and information on COVID-19. A Prediction page is also present to predict the number of casualties.
The app comprises four sections as follows:
✅ Country.
- Country-specific COVID-19 statistics.
- Easily switch between cumulative and density data views.
- Interactive bar chart for dynamic data exploration.
- Pie chart illustrating case distribution (Confirmed, Recovered, Deaths).
✅ Map.
Visual representation of COVID-19 impact through dynamic zoomable color-coded maps.
✅ Predictions.
Generate COVID-19 predictions by creating scenarios for different prediction dates and different countries.
This generates two predictions (prediction_x in orange and prediction_y in green), using an ARIMA model and a Linear Regression model respectively; a sketch of this two-model idea follows the list.
Initiate a new scenario by assigning it a name, specifying a prediction date, choosing a country, and clicking the "Submit" button to proceed.
You can access it in the Scenario tab located within the Results section.
✅ World.
Global COVID-19 statistics are summarized via line and pie charts. The impact of COVID-19 across countries can be compared by switching the toggle between Absolute and Relative views.
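Here is a hedged sketch of how two such predictions could be produced, using a synthetic series instead of the demo's Covid dataset (the ARIMA order and horizon are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.arima.model import ARIMA

history = np.cumsum(np.random.poisson(50, size=120)).astype(float)  # stand-in series
horizon = 30  # days to predict

# prediction_x: ARIMA model
arima = ARIMA(history, order=(2, 1, 2)).fit()
prediction_x = arima.forecast(steps=horizon)

# prediction_y: linear regression on the time index
t = np.arange(len(history)).reshape(-1, 1)
lr = LinearRegression().fit(t, history)
future_t = np.arange(len(history), len(history) + horizon).reshape(-1, 1)
prediction_y = lr.predict(future_t)
```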
Overall, a very useful app to take inspiration from for building very cool projects :)
You can check out the live demo.
4. Creating an LLM Chatbot
This demo showcases Taipy's ability to let end-users run inference using LLMs. Here, we use OpenAI's GPT-3.5 to create a chatbot and display the conversation in an interactive chat interface.
The main function, given below, takes a string prompt (the user message) as input and returns the LLM's response as a string.
```python
from taipy.gui import State

def request(state: State, prompt: str) -> str:
    """
    Send a prompt to the GPT API and return the response.

    Args:
        - state: The current state.
        - prompt: The prompt to send to the API.

    Returns:
        The response from the API.
    """
    response = state.client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": f"{prompt}",
            }
        ],
        model="gpt-3.5-turbo",
    )
    return response.choices[0].message.content
```
You can read the complete docs on how to build this LLM Chatbot.
The best part is that you can easily change the code to use any other API or model as per your usage.
You can check out the live demo.
5. Real-time Face Recognition
This demo seamlessly integrates face recognition into our platform, offering a real-time face detection experience using your webcam thanks to the OpenCV library.
You can use it very easily:
a. When you open the application, you'll see yourself through your webcam, with a red square around your face labeled with someone else's name.
b. Train the model to recognize you by clicking the Capture button and entering your name several times.
c. Now click the Re-train button. Your name should appear: the model now recognizes you.
Check out the live demo.
Make sure to allow camera access in your browser settings, which is the primary requirement!
The code for face detection and face recognition is under src/demo/faces.py. The complete directory structure is provided in the readme.
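For a feel of the moving parts, here is a minimal, hedged sketch of webcam face detection with OpenCV's bundled Haar cascade; the demo's real detection and recognition logic lives in src/demo/faces.py and may differ.

```python
import cv2

# Load the pre-trained frontal face detector shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)  # red square
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```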
6. Stock Visualization
In the realm of financial markets, data is king. The ability to quickly and easily visualize historical stock data and make predictions is essential for investors and financial analysts.
This is a stock data dashboard with interactive visual elements to visualize historical stock data and make predictions for the stock within 1 to 5 years.
Built with Taipy and Facebook's Prophet library. This demo requires Python 3.8 or newer.
This is how you can use it:
a. Select the ticker you wish to predict.
b. Open the Historical Data Panel.
c. Select the period of prediction (from 1 to 5 years).
d. Click on the PREDICT button.
e. See your predictions in the Forecast Data Panel.
f. Try it repeatedly using different tickers to compare the results.
You'll also get the prediction ranges as a table by clicking on the More info button at the bottom.
You can find the main source code under the src directory.
This fully interactive web app can be created with fewer than 120 lines of Python code.
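Here's a rough sketch of what the forecasting core could look like (yfinance for data, Prophet for predictions); the ticker, column names, and horizon are assumptions for illustration, not the demo's actual code:

```python
import yfinance as yf
from prophet import Prophet

# Download historical prices (the app lets the user pick the ticker)
df = yf.download("AAPL", period="5y").reset_index()
data = df[["Date", "Close"]].rename(columns={"Date": "ds", "Close": "y"})

model = Prophet()
model.fit(data)  # Prophet expects columns named ds (date) and y (value)

future = model.make_future_dataframe(periods=365 * 2)  # 2-year horizon
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```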
Check out the live demo.
7. Sentiment Analysis.
Sentiment analysis is like a robot that reads how people feel in their words.
It looks at words like happy, sad, or angry, and decides if they are feeling good or bad. Then, it tells us if most people are happy or sad when they talk.
So, it helps us understand how people feel about things, like movies or games, by just looking at what they say!
In short, it is a technique in Natural Language Processing (NLP) used to determine the emotional tone conveyed in a text. It helps businesses and individuals better grasp the feelings and tones expressed in written content.
The result is a two-page application that uses a sentiment analysis model to analyze both direct user input and entire text files.
The first page analyzes text the user types in, while the second page lets the user upload a text file. The file will be analyzed and the sentiments behind it displayed.
✅ Page 1: Line - Analyzing User Input
The initial page of our Sentiment Analysis application, named "Line", is meant for instantly analyzing user input. Whether it's a brief sentence or a longer paragraph, just type or paste the text into the input box, and Taipy will quickly evaluate the sentiment conveyed in the text.
✅ Text - Uploading and Analyzing Text Files
The second page, named "Text" allows users to upload entire text files (.txt) for comprehensive sentiment analysis.
Users can select a text file from their device, and the application will provide insights into the sentiment expressed throughout the document.
This feature is useful for processing longer texts such as articles, reports, or extensive customer feedback.
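Under stated assumptions, the scoring step itself can be as small as a Hugging Face pipeline call; the demo's actual model and library may differ.

```python
from transformers import pipeline

# Default sentiment model; a hedged stand-in for whatever the demo uses
analyzer = pipeline("sentiment-analysis")
print(analyzer("Taipy makes building dashboards painless!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.9998}]
```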
You can check out the live demo.
8. Drift Detection - Detecting Drift in a Diabetes Dataset.
Data drift is a concept used mainly in machine learning, where the distribution of inference data strays from the distribution of training data.
Various factors, such as changes in the underlying data source, changes in the data collection process, or changes in the data storage process, can cause data to drift.
This generally causes a performance issue called training-serving skew, where the model used for inference was not trained on data matching the distribution of the inference data, and therefore fails to generalize.
Statistical tests exist to detect drift in a dataset. These tests calculate the probability of two series coming from the same distribution. If the probability is under a threshold, we consider that there is drift.
How to use the app?
✅ Select the comparison dataset.
Here, we are selecting data_big, a dataset similar to the reference dataset but with rows with higher blood pressure values. We see on the blood pressure distribution chart that the distribution of the comparison dataset in red is shifted to the right compared to the reference dataset in green.
✅ Run the scenario by submitting it.
✅ Visualize the results at the bottom of the page.
Here, we see that the p-value for the Kolmogorov test of the blood pressure column is under 0.05, which means that the probability of both datasets of blood pressure coming from the same distribution is under 5%. We can reject the hypothesis that both datasets come from the same distribution and conclude that there is a drift in the blood pressure column.
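Here is a minimal, hedged sketch of that statistical test using scipy's two-sample Kolmogorov-Smirnov test; the demo wires the same idea into a Taipy data pipeline, and the synthetic values below just stand in for the blood pressure columns.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(70, 10, size=500)   # stand-in reference blood pressure
comparison = rng.normal(78, 10, size=500)  # shifted comparison distribution

statistic, p_value = ks_2samp(reference, comparison)
if p_value < 0.05:
    print(f"Drift detected (p-value = {p_value:.4f})")
```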
You can follow the steps on the live demo that is attached below and see the source code on GitHub.
This uses a data pipeline to compare datasets and detect drift.
Check out the live demo.
9. Walletwise
WalletWise is like a friendly helper for your finances, helping you keep track of income and expenses. It uses Gemini to analyze transactions and Taipy to visualize the expenditure.
Some of the nice features are:
✅ Users can enter their income and expenses along with a sector as the heading. This lets them explore how much they earn from each sector and how much they spend in each one.
✅ The user's income and expenses are analyzed and shown mathematically, and 7 tips for making better, wiser financial decisions are displayed.
✅ Implemented a visualizer where you can see the different headings from which you earned money and the different headings from which you spent it.
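Here is a hedged sketch of what the Gemini call for those tips could look like; the model name, prompt, and summary format are my assumptions, not WalletWise's actual code.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")  # assumed model name

summary = "Income: salary 5000. Expenses: rent 1500, food 600, travel 300."
response = model.generate_content(
    f"Given this monthly summary, give 7 tips for wiser financial decisions:\n{summary}"
)
print(response.text)
```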
This is an excellent use case and very creative.
You can read the installation instructions and see the project demo.
10. Taipy Chess
My favorite of all the apps because I love chess. HAHA!
This is a chess visualization tool based on 20,000 games. You can see all the games, the openings they played, opponents, top-played openings, and most successful openings.
You can see heat maps and charts on the data.
You can see the demo by Korie. I loved it :)
This clearly shows that there is no limitation on the possibilities of how we can use Taipy.
11. Olympic Medals
This is a Taipy dashboard that shows information about Olympic medals awarded from the beginning of the modern Olympic games until the beginning of 2024 (that is, the Paris 2024 Olympics are not included).
✅ The dashboard has two tabs:
- A tab shows aggregated data for all Olympic medals
- A second tab focuses on medals won by Olympic committees (countries, but also special committees, such as the Refugee Committee and so on).
It also has several types of charts: bar charts, sunburst charts, line charts, choropleth maps, and grid charts.
Plus the dashboard dynamically updates data based on the selected year and area type.
You can watch the demo here!
There are a lot of concepts involved and a very excellent use case implemented by Eric!
12. GPT Researcher - GPT-based autonomous agent for online research.
GPT Researcher is the leading autonomous agent that takes care of everything from accurate source gathering to organization of research results.
The good part is that it also cites the sources for the research results which improves credibility. I loved the whole concept :)
Some of the wild features are:
✅ Can generate long and detailed research reports (over 2K words).
✅ Aggregates over 20 web sources per research to form objective and factual conclusions.
✅ Includes an easy-to-use web interface (HTML/CSS/JS).
✅ Scrapes web sources with JavaScript support.
✅ Keeps track and context of visited and used web sources.
✅ Export research reports to PDF, Word, and more.
Get started with the following command.
pip install gpt-researcher
This is how you can use it.
```python
from gpt_researcher import GPTResearcher
import asyncio

async def main():
    query = "why is Nvidia stock going up?"
    researcher = GPTResearcher(query=query, report_type="research_report")
    # Conduct research on the given query
    await researcher.conduct_research()
    # Write the report
    report = await researcher.write_report()
    return report

if __name__ == "__main__":
    print(asyncio.run(main()))
```
Read the installation instructions and the quickstart guide attached just below that.
If you're wondering about the specifics of the architecture, the agents leverage both gpt-3.5-turbo and gpt-4-turbo (128K context) to complete a research task, using each only when necessary to optimize for costs. The average research task takes around 3 minutes to complete and costs ~$0.1.
You can read the official blog on how GPT Researcher works.
You can read the FAQs to know more about accuracy and more.
You can read the docs and visit their official website.
Watch the demo here!
It has 8.7k stars on GitHub and is continuously improving.
13. Private GPT - ask questions about your documents without the internet.
PrivateGPT is a production-ready AI project that allows you to ask questions about your documents using the power of Large Language Models (LLMs), even in scenarios without an internet connection.
It is 100% private, meaning no data leaves your execution environment at any point.
The API is divided into two logical blocks:
a. High-level API, which abstracts all the complexity of a RAG (Retrieval Augmented Generation) pipeline implementation:
- Ingestion of documents: internally managing document parsing, splitting, metadata extraction, embedding generation, and storage.
- Chat & Completions using context from ingested documents: abstracting the retrieval of context, the prompt engineering, and the response generation.
b. Low-level API, which allows advanced users to implement their own complex pipelines:
- Embeddings generation: based on a piece of text.
- Contextual chunks retrieval: given a query, returns the most relevant chunks of text from the ingested documents.
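To make the split concrete, here is a hedged sketch of calling a locally running PrivateGPT server over HTTP with requests. The base URL and endpoint paths are assumptions drawn from the docs; check the API reference for your version before relying on them.

```python
import requests

BASE = "http://localhost:8001"  # assumed default local address

# High-level API: ingest a document, then chat with its context
with open("report.pdf", "rb") as f:
    requests.post(f"{BASE}/v1/ingest/file", files={"file": f})  # assumed path

answer = requests.post(
    f"{BASE}/v1/chat/completions",  # assumed OpenAI-compatible path
    json={
        "messages": [{"role": "user", "content": "Summarize the report."}],
        "use_context": True,  # ground the answer in ingested documents
    },
)
print(answer.json())
```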
You can read the installation guide to get started.
You can read the docs and the detailed architecture that is involved.
PrivateGPT is now evolving towards becoming a gateway to generative AI models and primitives, including completions, document ingestion, RAG pipelines, and other low-level building blocks.
They have 51k+ Stars on GitHub and evolving at a rapid pace.
14. facefusion - Next-generation face swapper and enhancer.
As the name suggests, it swaps and enhances faces in images and videos, and it is easy to get started with.
They have also provided a workshop section where you can learn about how to create UI components and define the frame processor.
For instance, this is how you create a UI component.
Create a new file at facefusion/uis/components/example.py and implement the essential methods of the UI component:

```python
from typing import Optional

import gradio

from facefusion.uis.typing import Update

EXAMPLE_IMAGE : Optional[gradio.Image] = None

def render() -> None:
    # Instantiate the Gradio element for this component
    global EXAMPLE_IMAGE
    EXAMPLE_IMAGE = gradio.Image()

def listen() -> None:
    # Wire the element's change event to the update handler
    EXAMPLE_IMAGE.change(update, inputs = EXAMPLE_IMAGE, outputs = EXAMPLE_IMAGE)

def update() -> Update:
    return gradio.update()
```
You just have to add the component.
from facefusion.uis.components import example
The installation might be a little complex so I suggest reading the installation guide based on the specific environment you're using.
You can check the benchmarks using this command.
python run.py --ui-layouts benchmark
You can read the docs and learn more about technical terms.
They have 14k+ stars on GitHub and are on the v2.5 release.
15. H2O LLMStudio - no-code GUI for fine-tuning LLMs.
H2O LLM Studio is an open source, no-code LLM graphical user interface (GUI) designed for fine-tuning state-of-the-art large language models.
Fine-tuning a pre-trained language model usually requires coding expertise and extensive knowledge of the model and its hyperparameters. H2O LLM Studio enables NLP practitioners to fine-tune their LLMs easily, with no coding needed and better flexibility over customization.
H2O LLM Studio also lets you chat with the fine-tuned model and receive instant feedback about model performance.
NLP practitioners and data scientists in particular may find it useful for easily and effectively creating and fine-tuning large language models. You can read about detailed performance stats and its cloud architecture.
If you're starting, I suggest watching this!
You can read about the core features such as:
✅ No-code fine-tuning
✅ Highly customizable
✅ Instant feedback on model performance
You can start H2O LLM Studio using the following command.
make llmstudio
If you don't know these concepts, they also have a clear guide of concepts including Generative AI, LoRA, Quantization, LLM Backbone, and more.
You can read the docs.
You can very easily build a side project using this, and it will be up to standard.
They have 3.6k stars on GitHub and are on the v1.5 release.
16. Voice Assistant on Mac - Your voice-controlled Mac assistant.
Your voice-controlled Mac assistant. GPT Automator lets you perform tasks on your Mac using your voice. For example, opening applications, looking up restaurants, and synthesizing information. Awesome :D
It was built during the London Hackathon.
It has two main parts:
a. Voice to command: It generates the command using Whisper running locally (a fork of Buzz).
b. Command to Action: You give the command to a LangChain agent equipped with custom tools we wrote. These tools include controlling the operating system of the computer using AppleScript and controlling the active browser using JavaScript. Finally, like any good AI, we have the agent speak out the final result using AppleScript with say "{Result}" (try typing say "Hello World!" into your Mac terminal if you haven't used it before).
A custom tool we made to have the LLM control the computer using AppleScript. The prompt is the docstring:
```python
import subprocess

from langchain.agents import tool

@tool
def computer_applescript_action(apple_script):
    """
    Use this when you want to execute a command on the computer. The command should be in AppleScript.

    Here are some examples of good AppleScript commands:

    Command: Create a new page in Notion
    AppleScript: tell application "Notion"
        activate
        delay 0.5
        tell application "System Events" to keystroke "n" using {{command down}}
    end tell

    ...

    Write the AppleScript for the Command:
    Command:
    """
    # Pipe the generated AppleScript into osascript for execution
    p = subprocess.Popen(
        ['osascript', '-'],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout, stderr = p.communicate(apple_script.encode('utf-8'))
    if p.returncode != 0:
        raise Exception(stderr)
    return stdout.decode("utf-8")
```
If you are wondering how it works, GPT Automator converts your audio input to text using OpenAI's Whisper. Then, it uses a LangChain Agent to choose a set of actions, including generating AppleScript (for desktop automation) and JavaScript (for browser automation) commands from your prompt using OpenAI's GPT-3 ("text-davinci-003") and then executing the resulting script.
Just remember, this is not for production use. This project executes code generated from natural language and may be susceptible to prompt injection and similar attacks. This work was made as a proof-of-concept.
You can read the installation guide.
Let's look at some of the prompts and what it will do:
✅ Find the result of a calculation.
Prompt: "What is 2 + 2?"
It will write AppleScript to open up a calculator and type in 2 + 2.
✅ Find restaurants nearby.
Prompt: "Find restaurants near me"
It will open up Chrome, google search for a restaurant nearby, parse the page and then return the top results. Sometimes it’s cheeky, and instead will open up the Google Maps result and say “The best restaurants are the ones at the top of the page on Google Maps”. Other times it opens the top link on Google - and gets stuck on the Google accessibility page…
Here’s what’s printed to the terminal as it runs:
```
Command: Find a great restaurant near Manchester.

> Entering new AgentExecutor chain...
I need to search for a restaurant near Manchester.
Action: chrome_open_url
Action Input: https://www.google.com/search?q=restaurant+near+Manchester
Observation:

Thought: I need to read the page
Action: chrome_read_the_page
Action Input:
Observation: Accessibility links
Skip to the main content
... # Shortened for brevity
Dishoom Manchester
4.7
(3.3K) · £££ · Indian
32 Bridge St · Near John Rylands Library
Closes soon ⋅ 11 pm
Stylish eatery for modern Indian fare
San Carlo
4.2
(2.8K) · £££ · Italian
42 King St W · Near John Rylands Library
Closes soon ⋅ 11 pm
Posh, sceney Italian restaurant
Turtle Bay Manchester Northern Quarter
4.7

Thought: I now know the final answer
Final Answer: The 15 best restaurants in Manchester include El Gato Negro, Albert's Schloss, The Refuge, Hawksmoor, On The Hush, Dishoom, Banyan, Zouk Tea Room & Grill, Edison Bar, MyLahore Manchester, Turtle Bay Manchester Northern Quarter, San Carlo, The Black Friar, Mana, and Tast Cuina Catalana.
```
I cannot guarantee that those restaurants are worth it, visit at your own risk. haha!
✅ If you ask GPT Automator to wipe your computer it will.
Yes, it will wipe your computer if you ask!
My inner self screaming to do it :)
You can see the full demo here!
17. RepoChat - Chatbot assistant enabling GitHub repository interaction.
Repochat is an interactive chatbot project designed to engage in conversations about GitHub repositories using a Large Language Model (LLM).
It allows users to have meaningful discussions, ask questions, and retrieve relevant information from a GitHub repository. The README provides step-by-step instructions for setting up and using Repochat on your local machine.
They have made two branches with distinct functionalities which is kind of new to me.
✅ The main branch of Repochat is designed to run entirely on your local machine. This version of Repochat doesn't rely on external API calls and offers greater control over your data and processing. If you're looking for a self-contained solution, the main branch is the way to go.
✅ The cloud branch of Repochat primarily relies on API calls to external services for model inference and storage. It's well-suited for those who prefer a cloud-based solution and don't want to set up a local environment.
You can read the installation instructions.
Repochat allows you to engage in conversations with the chatbot. You can ask questions or provide input, and the chatbot will retrieve relevant documents from the vector database.
It then sends your input, along with the retrieved documents, to the Language Model for generating responses.
By default, the model is set to codellama-7b-instruct, but you can change it based on your computer's speed, and you can even try the 13B quantized model for responses.
The chatbot retains memory during the conversation to provide contextually relevant responses.
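Here is a minimal, hedged sketch of that retrieve-then-generate loop using ChromaDB directly; the collection contents and prompt format are assumptions for illustration, not Repochat's actual code.

```python
import chromadb

client = chromadb.Client()
collection = client.create_collection("repo")

# Index repository chunks (Repochat does this for the cloned repo's files)
collection.add(
    documents=["def add(a, b):\n    return a + b"],
    ids=["utils.py:1"],
)

question = "How do I add two numbers?"
hits = collection.query(query_texts=[question], n_results=1)
context = "\n".join(hits["documents"][0])

# The assembled prompt is then sent to the LLM (codellama-7b-instruct by default)
prompt = f"Answer using this code:\n{context}\n\nQuestion: {question}"
```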
You can check out the live website, where you can try it using your API key.
You can watch this demo!
I found one more alternative if you want to check it out.
Repochat has 200+ stars and is deployed on Streamlit.
18. myGPTReader - read and chat with AI bots.
myGPTReader is a bot on Slack that can read and summarize any webpage, documents including ebooks, or even videos from YouTube. It can communicate with you through voice.
Some of the worthy features are:
✅ Use myGPTReader to quickly read and understand any web content through conversations, even videos (currently only YouTube videos with subtitles are supported).
✅ Use myGPTReader to quickly read the content of any file, supporting eBooks, PDF, DOCX, TXT, and Markdown.
✅ Practice your foreign language by speaking with your voice to myGPTReader, which can be your personal tutor and supports Chinese, English, German, and Japanese.
✅ A large number of prompt templates are built in; use them for better conversations with ChatGPT.
✅ Every day myGPTReader sends out the latest hot news and automatically generates a summary, so you can quickly learn what's hot today.
You can visit the official website.
You can join the Slack channel linked in the repo, which has more than 5,000 members, to experience all these features for free.
They have 4.4k stars on GitHub and are built using Python like other projects in this list.
19. Marker - Convert PDF to markdown quickly with high accuracy.
Marker converts PDF, EPUB, and MOBI to markdown. It's 10x faster than nougat, more accurate on most documents, and has low hallucination risk.
We all know how helpful this can be, especially for research papers.
✅ Support for a range of PDF documents (optimized for books and scientific papers).
✅ Removes headers/footers/other artifacts.
✅ Converts most equations to LaTeX.
✅ Formats code blocks and tables.
✅ Support for multiple languages (although most testing is done in English). See settings.py for a language list, or to add your own.
✅ Works on GPU, CPU, or MPS.
They have also clearly documented the examples along with the results of Marker and Nougat.
Read the speed and accuracy benchmarks and instructions on how to run your own benchmarks.
For instance, check this PDF: Think Python and the markdown file of Marker vs Nougat.
Read the installation instructions.
They have also documented how to use it properly in the readme.
They have 8k+ stars on GitHub.
20. Instrukt - Integrated AI in the terminal.
Instrukt is a terminal-based AI-integrated environment. It offers a platform where users can:
- Create and instruct modular AI agents.
- Generate document indexes for question-answering.
- Create and attach tools to any agent.
Instruct them in natural language and, for safety, run them inside secure containers (currently implemented with Docker) to perform tasks in their dedicated, sandboxed space.
Built using Langchain, Textual, and Chroma.
Get started with the following command.
pip install instrukt[all]
There are a lot of exciting features such as:
- A terminal-based interface for power keyboard users to instruct AI agents without ever leaving the keyboard.
- Index your data and let agents retrieve it for question-answering. You can create and organize your indexes with an easy UI.
- Index creation will auto-detect programming languages and optimize the splitting/chunking strategy accordingly.
- Run agents inside secure docker containers for safety and privacy.
- Integrated REPL-Prompt for quick interaction with agents, and a fast feedback loop for development and testing.
- You can automate repetitive tasks with custom commands. It also has a built-in prompt/chat history.
You can read about all the features.
You can read the installation guide.
You can also debug and introspect agents using an in-built IPython console which is a neat little feature.
Instrukt is licensed under the AGPL, a strong copyleft license: anyone can use and modify it, but derived works (including software that exposes it over a network) must be released under the same license.
It is safe to say that Instrukt is a Terminal AI Commander at your fingertips.
It is a new project so they have around 200+ stars on GitHub but the use case is very good.
21. Microagents - Agents Capable of Self-Editing Their Prompts.
It's an experimental framework for dynamically creating self-improving agents in response to tasks.
Microagents represent a new approach to creating self-improving agents. Small, microservice-sized (hence, microagents) agents are dynamically generated in response to tasks assigned by the user to the assistant, assessed for their functionality, and, upon successful validation, stored for future reuse.
This enables learning across chat sessions, enabling the system to independently deduce methods for task execution.
This is built using Python, OpenAI's GPT-4 Turbo, and Text-Embedding-Ada-002.
You can read the installation instructions. They have mentioned that you should have an OpenAI account with access to gpt-4-turbo and text-embedding-ada-002.
Let's look at an example prompt for a Weather Forecast Agent.
You are an adept weather informant. Fetch the weather forecast by accessing public API data using this Python code snippet:

```python
import requests
import json

def fetch_weather_forecast(location, date):
    response = requests.get(
        f"https://api.met.no/weatherapi/locationforecast/2.0/compact?lat={location[0]}&lon={location[1]}"
    )
    weather_data = response.json()
    for day_data in weather_data['properties']['timeseries']:
        if date in day_data['time']:
            print(day_data['data']['instant']['details'])
            break

# Example usage: fetch_weather_forecast((47.3769, 8.5417), '2024-01-22T12:00:00Z')
```

Note: Replace the (47.3769, 8.5417) with the actual latitude and longitude of the location, and the date string accordingly.
If you're wondering how agents are created, then this architectural diagram explains it.
You can see the working demo.
They have around 700 stars on GitHub and are worth checking out.
22. Resume Matcher - a free tool to improve your resume.
Resume Matcher is an open source, free tool to improve your resume by tailoring it to a job description. It finds the matching keywords, improves the readability, and gives you deep insights into your resume.
How does it work?
The Resume Matcher reads your resume and job descriptions using Python, just like an ATS.
It suggests changes to make your resume ATS-friendly by:
✅ Parsing: It breaks down your resume and job description using Python.
✅ Keyword Extraction: The tool finds important keywords from the job description, like skills and qualifications.
✅ Key Terms Extraction: It identifies the main themes in the job description to understand its context.
✅ Vector Similarity: Using FastEmbed, it compares your resume with the job description to see how closely they match. The better the match, the higher your chances of passing the ATS screening. A small sketch of this step follows the list.
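Here is a minimal, hedged sketch of that similarity step with FastEmbed; the model name and cosine scoring are assumptions for illustration, and Resume Matcher's real pipeline does more.

```python
import numpy as np
from fastembed import TextEmbedding

model = TextEmbedding("BAAI/bge-small-en-v1.5")  # assumed embedding model
resume_vec, jd_vec = list(model.embed([
    "Python developer experienced with Taipy dashboards and XGBoost",  # resume text
    "Hiring a Python engineer to build ML-powered dashboards",         # job description
]))

# Cosine similarity between resume and job description embeddings
score = np.dot(resume_vec, jd_vec) / (np.linalg.norm(resume_vec) * np.linalg.norm(jd_vec))
print(f"Match score: {score:.2f}")
```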
You can read the installation instructions.
You can check the live demo or the one attached in the readme.
Resume Matcher is an amazing project created by Saurabh Rai, who also writes great posts here on DEV!
It has 4.5k stars on GitHub and is still maintained properly.
23. Background Remover - lets you remove the background from images and videos using AI with a simple CLI.
This is a command line tool to remove background from images and videos using AI.
Get started by installing backgroundremover from pypi.
pip install --upgrade pip
pip install backgroundremover
It is also possible to run this without installing it via pip: just clone the repo locally, start a virtual env, install the requirements, and run it.
Some of the commands that you can use:
- Remove the background from a local image file:
backgroundremover -i "/path/to/image.jpeg" -o "output.png"
- Remove the background from a local video and overlay it over an image:
backgroundremover -i "/path/to/video.mp4" -toi "/path/to/videtobeoverlayed.mp4" -o "output.mov"
You can check all the commands that you can use with CLI.
You can even use it as a library.
```python
from backgroundremover.bg import remove

def remove_bg(src_img_path, out_img_path):
    model_choices = ["u2net", "u2net_human_seg", "u2netp"]
    with open(src_img_path, "rb") as f:
        data = f.read()
    img = remove(
        data,
        model_name=model_choices[0],
        alpha_matting=True,
        alpha_matting_foreground_threshold=240,
        alpha_matting_background_threshold=10,
        alpha_matting_erode_structure_size=10,
        alpha_matting_base_size=1000,
    )
    with open(out_img_path, "wb") as f:
        f.write(img)
```
You can read the installation instructions and see the live demo.
The readme shows input vs. output examples.
They have 6k stars on GitHub and we can definitely learn some crucial concepts using this.
24. Tkinter Designer - easy and fast way to create a Python GUI.
Tkinter Designer was created to speed up the GUI development process in Python. It uses the well-known design software Figma to make creating beautiful Tkinter GUIs in Python a piece of cake.
Tkinter Designer uses the Figma API to analyze a design file and create the respective code and files needed for the GUI.
How does it work?
The only thing you need to do is design an interface with Figma, then paste the Figma file URL and an API token into Tkinter Designer.
Tkinter Designer will automatically generate all the code and images required to create the GUI in Tkinter.
You can read the step-by-step guide on how to use it properly, which is available in multiple languages.
You can watch the demo here!
They have also shown examples of websites that you can easily replicate using this.
They have 8.3k stars on GitHub and are used by around 100 developers.
25. Open Interpreter - natural language interface for computers.
Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running $ interpreter after installing.
This provides a natural-language interface to your computer's general-purpose capabilities:
✅ Create and edit photos, videos, PDFs, etc.
✅ Control a Chrome browser to perform research.
✅ Plot, clean, and analyze large datasets.
I don't know about you, but their website made me say WOW!
Quickstart using this command.
pip install open-interpreter
After installation, simply run:
interpreter
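You can also drive it from Python. Here is a minimal sketch based on the chat API the README documents; treat the exact call as an assumption for whichever version you have installed.

```python
import interpreter

# Ask Open Interpreter to run code on your machine from natural language
interpreter.chat("Plot AAPL and META's normalized stock prices")
```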
You can read the quickstart guide.
You should read about the comparison to ChatGPT's Code Interpreter and the commands that you can use.
You can read the docs.
Open Interpreter works with both hosted and local language models. Hosted models are faster and more capable, but require payment. Local models are private and free but are often less capable.
They have 48k+ stars on GitHub and are used by 300+ developers.
The best way to gain experience and become better at coding is to build side projects.
I hope you will build some of these projects, or at least take inspiration from them.
Comment down to tell others about any other cool Python project :)
Have a great day! Till next time.
Follow Taipy for more content like this.