Abdurrahman Rajab for OpenSauced

What is Vercel's AI tool, V0.dev and how do you use it?

A few months ago, Vercel announced v0.dev, a tool for developers and designers to generate React code with AI. The only issue with the announcement was that v0.dev had a waitlist and was not open to everyone. I recently got access through the waitlist, and the tool is now available to anyone with a Vercel account.

Such tools fill the gap between developers and designers and save time for many companies when they launch their projects and products. In this article, I will share the value of the project, how it works, and the impact of open source on such projects.

v0.dev is a ChatGPT-like tool that focuses solely on generating user interface code. It uses the shadcn/ui and Tailwind CSS libraries to generate that code. After generating the code, the website gives you an npx command to add the component to your project.
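
At the time of writing, that command looked something like the following; the component ID here is a made-up placeholder, and the real one comes from your generation's page:

npx v0 add AbC123xYz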

Testing scenarios

V0 lets you write prompts to create the design. Besides that, V0 processes images and lets you improve the design of selected elements. With all of these features, I decided to test the project against the following questions:

  • Does it provide production-ready code?
  • Does it understand the image you provide?
  • Does it work in other languages?
  • Could you use it on an established project?
  • Could you use it for new projects?

With these questions, I decided to do my experiments based on two projects that I am involved with: the first is OpenSauced, and the second is okuyun.org.

First experiment: a search engine example

My first experiment was a new project I wanted to start from scratch. The main goal was a search engine user interface with a few examples for users. I got the result I wanted after writing twelve prompts, which is quite impressive. In the next image, you can see the final design I got:

the v0 output, which is a search engine user interface; it includes a logo, title, search input, and options

The full prompts are in this link. The result was great: v0 produced beautiful code that could be used in a project immediately. My only concern was that the code is a bit fragmented and does not encapsulate the results in a component, which leads to spaghetti code in a large project.

Here you can see the generated code:



<div className="border-2 border-gray-300 rounded-md p-2">
  <Card>
    <div className="flex justify-between">
      <div className="flex items-center">
        <Button className="bg-blue-500 text-white rounded-md px-2 py-1 ml-2">
          <GoalIcon className="h-4 w-4" />
        </Button>
        <CardHeader className="font-semibold">OpenAI</CardHeader>
      </div>
      <CardContent className="text-gray-500">A research organization for AI.</CardContent>
    </div>
  </Card>
</div>
<div className="border-2 border-gray-300 rounded-md p-2">
  <Card>
    <div className="flex justify-between">
      <div className="flex items-center">
        <Button className="bg-blue-500 text-white rounded-md px-2 py-1 ml-2">
          <GoalIcon className="h-4 w-4" />
        </Button>
        <CardHeader className="font-semibold">Tailwind CSS</CardHeader>
      </div>
      <CardContent className="text-gray-500">A utility-first CSS framework.</CardContent>
    </div>
  </Card>
</div>
<div className="border-2 border-gray-300 rounded-md p-2">
  <Card>
    <div className="flex justify-between">
      <div className="flex items-center">
        <Button className="bg-blue-500 text-white rounded-md px-2 py-1 ml-2">
          <GoalIcon className="h-4 w-4" />
        </Button>
        <CardHeader className="font-semibold">GPT-4</CardHeader>
      </div>
      <CardContent className="text-gray-500">A model by OpenAI.</CardContent>
    </div>
  </Card>
</div>



I would prefer a component that encapsulates these elements and accepts the varying values as props, which would be a great addition to a project. A simple example of what I would expect:



<SearchResult title="GPT" description="A model by OpenAI" link="openai.com" />



This would remove the extra markup, reduce the line count, and make the code more readable. I am aware that getting such a result from an AI on the first shot is tricky, but trying helps you understand how to use the tool, its limitations, and its abilities.
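
To illustrate the idea, here is a minimal hand-written sketch of such a component (not v0 output). It reuses the shadcn/ui pieces from the generated code above; the import paths are shadcn/ui's defaults and may differ in your project, and the icon import is an assumption:

import { Card, CardContent, CardHeader } from "@/components/ui/card";
import { Button } from "@/components/ui/button";
// GoalIcon appears in v0's generated output; swap in whichever icon
// component your project actually uses.
import { GoalIcon } from "lucide-react";

type SearchResultProps = {
  title: string;
  description: string;
  link: string;
};

// Encapsulates the repeated card markup from the generated code,
// so each result becomes a single, readable JSX tag.
export function SearchResult({ title, description, link }: SearchResultProps) {
  return (
    <div className="border-2 border-gray-300 rounded-md p-2">
      <Card>
        <div className="flex justify-between">
          <div className="flex items-center">
            <Button className="bg-blue-500 text-white rounded-md px-2 py-1 ml-2">
              <GoalIcon className="h-4 w-4" />
            </Button>
            <CardHeader className="font-semibold">
              <a href={link}>{title}</a>
            </CardHeader>
          </div>
          <CardContent className="text-gray-500">{description}</CardContent>
        </div>
      </Card>
    </div>
  );
}

// Usage: <SearchResult title="OpenAI" description="A research organization for AI." link="https://openai.com" />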

Second experiment: an OpenSauced component

As for OpenSauced, my experiment was to check whether I could use a Figma design in V0: import the design and then implement a user interface for the project. At the end of the experiment, I compared the output with an actual component that Nick Taylor, a senior frontend engineer, wrote for OpenSauced. The project uses Tailwind and Radix, the library that shadcn/ui uses under the hood.

With this structure, I thought building the related design from Figma would be easy. The first issue I faced was the inability to import Figma files directly, so I had to take a screenshot of the component I needed and provide it to V0. This gave me a first draft of the code, which I then worked to improve toward the desired result. You can compare the first draft and the image I provided below.

Comparison between the example image and the AI output

After 17 iterations, I got roughly the result I expected. Iterating helped me understand the logic behind v0, how it works, and how to phrase prompts precisely. Here is the final result.

Even though the result looked quite good, it differed from what Nick Taylor had written. A quick comparison shows that Nick relied on code previously written for OpenSauced, which v0 is unaware of. This is the most significant issue you will face when using v0. The other issue is the reusability of the generated components, which I mentioned in the first experiment.

Third experiment: Arabic, a non-Latin language

For this scenario, I tried the model with another language. The main idea was to check whether the tool is open to other nations or closed to English speakers only. Enabling the tech in local languages builds a better, more diverse, and more inclusive community; at the same time, it gives access to young people who have not yet learned a new language, or are not confident in one, so these tools can help them create a great future!

My experiment here was simply writing a prompt asking for a page design. I did not expect much, but the result was interesting: it worked just as it does in English and provided the great results I was hoping for.

Here is the prompt I used:
قم بتصميم محرك بحث للقرآن الكريم
Which means: Design a search engine for the Quran.

This produced an interesting result, showing that it understands what the Quran is:

output user interface, with title, search input and results

With this step, I moved on to a two-panel layout, with the search engine on one side and the results on the other, and it provided a great example of that:

two screens example

Even checking the project on a mobile screen gave a responsive result, which impressed me.

mobile example

Looking at the code, you will notice that V0 used Tailwind's flex utilities and semantic HTML, which gives a great result for responsiveness and rendering. Here is part of the generated code:



<main className="flex flex-col lg:flex-row items-start lg:items-center justify-center h-screen bg-[#fafafa] p-6 lg:p-12">
  <div className="w-full max-w-md lg:max-w-lg">
    <h1 className="text-3xl font-bold text-center mb-6 text-gray-700">Quran Search Engine</h1>




One of the main issues I noticed is that the output code and user interface did not include any Arabic text. This could put off non-English speakers, although translation tools would help them. Besides that, the output user interface did not follow RTL (right-to-left) conventions, which are the standard for languages like Arabic. For that reason, I wrote an extra prompt asking it to fix both issues; here is the prompt:
النتيجة ليست باللغة العربية قم بتحويل النصوص إلى اللغة العربية وتحويل نظام الشاشة من اليمين إلى اليسار
Which translates to: The result is not in Arabic; translate the text into Arabic and switch the layout to right-to-left.

The result was quite interesting. In the next image, you can see that v0.dev fully converted the output to RTL and translated the text, leaving only minor details for the developer to consider.

RTL showcase
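
For context, the core of such an RTL conversion is the dir attribute plus mirroring any direction-sensitive utilities. Here is a minimal hand-written sketch of what the adjusted markup might look like, reusing the class names from the earlier snippet (the Arabic heading is my own translation, not v0's exact output):

{/* dir="rtl" flips the layout direction, so flexbox and text
    alignment resolve right-to-left */}
<main dir="rtl" className="flex flex-col lg:flex-row items-start lg:items-center justify-center h-screen bg-[#fafafa] p-6 lg:p-12">
  <div className="w-full max-w-md lg:max-w-lg">
    <h1 className="text-3xl font-bold text-center mb-6 text-gray-700">محرك بحث القرآن الكريم</h1>
  </div>
</main>

Note that some Tailwind utilities, such as ml-2, do not flip automatically under dir="rtl"; that is exactly the kind of minor detail left to the developer.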

With this result, I concluded that you can use different languages for user interface generation, opening such tools to more communities. Developers will still need to handle minor details to fully adjust the user interface to their needs.

The results

To return to the questions I asked at the beginning of this article, here are my conclusions:

Does it provide production-ready code?

Yes, the code that V0 provides can be used in production with minor tweaks and checks. By tweaks, I mean converting the large chunks into reusable components, checking for accessibility issues, and evaluating the code against your coding standards. The main point is to integrate it with your stack: if you are using React, Tailwind, and shadcn/ui, you are well placed to use the software.

The only issue here is that it does not understand your design system, so you will need to think about that yourself. If you use an atomic design system, a great way to benefit from this tool is to ask it to write the atoms and molecules and then compose them in your project, as in the sketch below.
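
As a rough illustration of that split (all names here are hypothetical, not v0 output), you would ask v0 for the atom and keep the molecule, where your design system decisions live, in your own codebase:

// Atom: a small, generic piece you might ask v0 to generate.
export function SearchInput({ placeholder }: { placeholder: string }) {
  return (
    <input
      type="search"
      placeholder={placeholder}
      className="w-full rounded-md border-2 border-gray-300 p-2"
    />
  );
}

// Molecule: composed by hand in your project, where your design
// system's spacing, colors, and conventions apply.
export function SearchBar() {
  return (
    <form role="search" className="flex items-center gap-2">
      <SearchInput placeholder="Search..." />
      <button type="submit" className="rounded-md bg-blue-500 px-4 py-2 text-white">
        Search
      </button>
    </form>
  );
}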

Does it understand the image you provide?

Yes, but the image processing needs a few improvements. I think this is a great research field for AI researchers: user-interface-focused image analysis that benefits from existing image-processing algorithms. Even if some of that work has already been done in academia, converting the research into a real product that satisfies users would still be quite challenging.

Does it work in other languages?

Yes, as we tested in Arabic, and I believe the results would be just as good in other languages. The caveat is that such tools may not fully understand language-specific concerns like RTL for Arabic. You must address these issues yourself and do your own checks to fully adjust the design to your needs.

Could you use it on an established project?

Using v0 on an established project is quite challenging if you expect to take the result as-is. If, instead, you treat it as a friendly helper that does the basic work, which you then adapt to your project, it is genuinely useful.

Could you use it for new projects?

Using v0 to kick off a new project is much easier. Over time, you will need to improve the quality and understand the system well enough to extract reusable components, which gives you a faster development cycle for your project.

Improving v0

Based on my experiments, I believe v0 could be better in a few areas, and I would love to see it improved. In this section, I will cover the areas Vercel could improve and the impact those improvements would have on developers and designers:

Design system integrations

One of the most significant issues V0 needs to solve is understanding design systems. That would make it much easier for companies to integrate, since some have had their own design systems in production for years. Since the v0 and GPT revolution is still relatively new in industry usage, tackling this issue would be exciting and would make the software far more valuable in production.

A simple way to get there is a client-side integration with the code editor, the approach GitHub Copilot follows. v0 could ship a VS Code extension, for example, that reads file names and project information to understand the design system, then directs the model to generate code and components based on it. This would be similar to the multimodal AI approach that Gemini and OpenAI use.

Another approach Vercel could try in beta is a GitHub Copilot Labs-like extension. Copilot Labs offered directed inputs for its model, such as code cleaning, test writing, and refactoring options; the extension was discontinued after its features were folded into Copilot Chat. For v0, directed options such as choosing a design system or generating atoms, molecules, or whole components would help it integrate with the production systems people already have, and would give Vercel a great iteration cycle for understanding its customers' and community's needs.

Importing from design software

One of the great features such a system needs is the ability to import from design software like Figma. The approach I used in this review was to take a screenshot of the Figma design and provide it as a prompt to v0. If Vercel implements direct Figma import, V0 will take a new leap in helping programmers and designers.

This improvement would fit the current workflows of designers and developers. Over time, it would be even more convenient for the design software itself to support prompts and iterations.

Improved image processing

In my second experiment, the layout of the first output differed slightly from the input image. Besides that, you can notice missing icons in the first output, which is odd for a sophisticated system like v0. In the future, it would be great for the system to understand the icons and layout in the provided image.

an image showing the difference between output and the input image

Such an improvement is possible by analyzing the images and splitting the process into multiple prompt shots, or by using a multimodal design that understands layout and icons. These improvements would make the system more robust and minimize the back-and-forth prompting needed to get the expected results.

I believe a model that can extract the icon names, the layout, and the CSS features from an image would be promising and would enable the integration of AI models into industry applications.

Improving the user experience of the website

The v0 website has two great features for developing and prompting. One lets you edit the code directly and write prompts based on it. The other lets you select a specific element and write prompts for that particular element.

These features are super helpful, yet I would like to see minor improvements in them. In the code editor, I would love the ability to select an element in the user interface and have the editor scroll to that element immediately, similar to the HTML element inspector in browsers like Google Chrome and Mozilla Firefox. Pointing from the view to the code lets you make changes quickly instead of scrolling or searching for the related code.

The other improvement V0 could implement is letting the element selector pick multiple elements; currently, the software only allows you to choose one HTML element at a time. Multiple selection would save time when editing a page in v0 and help you keep a consistent design across the whole page.

a gif showing how to select and edit an element in v0

Multi-element selection could be implemented by adding extra buttons to the update menu and keeping a memory of the selected elements to pass to the model, or by letting the user click and drag with the mouse. Here is an example of an extra button that could enable this feature:

an image showing the select feature of v0

Final thoughts

From my experiments, I feel that v0 is quite sophisticated and helpful for programmers and designers. It bridges the gap between them and enables a faster development cycle for web apps. You can use V0 today for new projects that share its tech stack. In the future, I expect the Vercel team to support more tech stacks and design systems, which would have a considerable impact and a wider reach in the community.

On the other hand, these improvements and programs put a load on developers and designers to keep learning new tools and updating their mindset; otherwise, they risk their jobs. Developers must deepen their knowledge of accessibility, multi-language support, design systems, and user experience; those skills will let them be more productive and get great results from such tools. For designers, I expect companies like Adobe and Figma to build V0-like tools integrated more tightly with their design software, and designers will need to understand the output code to gain more flexibility and power over their results.

These tools cannot test all the results and scenarios programmers might face, like accessibility, as Vercel mentions in their docs, or even responsiveness. Having them beside you is helpful if you know how to use them.

Top comments (4)

Paweł Ciosek

Great post! 👏

Abdurrahman Rajab

Thanks for the comment, Paweł. 🤗

Nick Taylor

Great writeup @a0m0rajab! I wonder if v0 has plans to allow uploading a Figma file.

Hot Rod saying Cool beans!

Abdurrahman Rajab

Thanks for the support, Nick! I am not sure about Figma; it would be great to have, yet they have not mentioned anything about it. What they did mention is integrating new design systems. I feel Figma support is a matter of time; if they do not add it, I expect a new Figma extension to be born with a concept similar to v0.