Tejas Kumar for DataStax

Posted on • Originally published at datastax.com

Introducing Langflow.new: Frictionless AI

Langflow is DataStax’s flagship product for building generative AI flows and agents. Today, we’re advancing the democratization of generative AI with our technical preview of Langflow.new: a fully open path to creating GenAI flows and agents for rapid prototyping and proofs of concept.

What you can do with Langflow.new

With Langflow.new, developers can get started with Langflow right away and explore its value for building RAG pipelines, agentic flows, and more. When you land on the page, you’re immediately greeted with a basic AI agent flow. Let’s explore it in more detail: the flow is made up of a series of blocks, called components in Langflow.

The default start screen of Langflow.new

Specifically, we have:

  1. A URL component that visits a URL and returns its content
  2. A Calculator component that acts as a function for performing mathematical operations
  3. A Chat Input component that accepts a prompt from a user
  4. An Agent component that accepts all other components as inputs
  5. A Chat Output component that renders output from the Agent

One important point about the URL and Calculator components here is that they are tools. Tools in the context of AI agents work very similarly to functions in programming, where the input parameters (called arguments) are generated and supplied by a language model, and their outputs (called return values) are returned to the language model.

If we consider the Calculator component as a simple calculate(num1, num2, operation) function that we expose to the agent, then the agent generates values for num1, num2, and operation based on the prompt from the user provided via the Chat Input. So if a user writes:

Get me the sum of 3 and 7

then the LLM returns structured output that signals to the application (in this case, Langflow) to call the function calculate with the values 3, 7, and "sum", effectively calling calculate(3, 7, "sum"). This structured output could be similar to:

{
    "functionName": "calculate",
    "args": [3, 7, "sum"]
}

The application (Langflow) then processes this structured output from the LLM to call the tool; it then sends its return value back to the language model to continue the flow.
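To make this concrete, here is a minimal Python sketch of that dispatch step. This is not Langflow's actual implementation; the `calculate` function and `dispatch` helper are hypothetical illustrations of the pattern.

```python
def calculate(num1, num2, operation):
    """Hypothetical calculator tool exposed to the agent."""
    ops = {
        "sum": lambda a, b: a + b,
        "subtract": lambda a, b: a - b,
        "multiply": lambda a, b: a * b,
        "divide": lambda a, b: a / b,
    }
    return ops[operation](num1, num2)

# Registry mapping tool names (as the LLM refers to them) to callables
TOOLS = {"calculate": calculate}

def dispatch(tool_call):
    """Look up the named tool and invoke it with the model-supplied args."""
    func = TOOLS[tool_call["functionName"]]
    return func(*tool_call["args"])

result = dispatch({"functionName": "calculate", "args": [3, 7, "sum"]})
# result is 10, which the application would send back to the language model
```

The key idea is that the model never executes anything itself: it only emits the structured request, and the application holds the registry and does the calling.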

If we consider all of these tools working together when a user supplies a prompt like

Convert 3425 USD to INR

then the language model will generate tool calls for each tool available like so:

[
    { "functionName": "getFromUrl", "args": ["https://exchangerates.com/today?from=USD&to=INR"] },
    { "functionName": "calculate", "args": ["<usdValue>", "<inrValue>", "multiply"] }
]

From here, the application executes these functions with these inputs and yields their outputs back to the language model. When there are no further tool calls and the language model returns only text, the application returns this text to the user via the Chat Output component. Langflow is the glue (or runtime) between the language model, the tools, and the user’s prompts and outputs.
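That "glue" loop can be sketched in a few lines of Python. This is an assumed, simplified version of the pattern, not Langflow's code; the `stub_model` stands in for a real LLM, and all names here are illustrative.

```python
def calculate(a, b, op):
    """Toy calculator tool for the example."""
    return {"sum": a + b, "multiply": a * b}[op]

def run_agent(prompt, model, tools):
    """Run the agent loop: call the model, execute requested tools,
    feed results back, and stop once the model returns plain text."""
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = model(messages)           # {"text": ..., "toolCalls": [...]}
        calls = reply.get("toolCalls", [])
        if not calls:                     # no more tool calls: final answer
            return reply["text"]
        for call in calls:
            result = tools[call["functionName"]](*call["args"])
            messages.append({"role": "tool", "content": result})

def stub_model(messages):
    """Stand-in for an LLM: first requests a tool call, then answers in text."""
    if not any(m["role"] == "tool" for m in messages):
        return {"toolCalls": [{"functionName": "calculate", "args": [3, 7, "sum"]}]}
    return {"text": f"The sum is {messages[-1]['content']}", "toolCalls": []}

answer = run_agent("Get me the sum of 3 and 7", stub_model, {"calculate": calculate})
# answer == "The sum is 10"
```

Note that the loop terminates only when the model stops asking for tools, which is exactly the condition described above for handing text back through the Chat Output component.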

With Langflow.new, you can experiment with any agentic flows and tools you wish, or get rid of them completely and create workflows that perform RAG, sentiment analysis, or whatever else you like.

Once your flow is ready to go, you can download it and use it with any deployed Langflow instance: either hosted by DataStax as part of DataStax Langflow, the cloud offering, or self-hosted.

Inspiration

Taking cues from Guillermo Rauch, CEO and founder of Vercel, we have opted to remove as much friction as possible between you, the users, and Langflow’s core resource: meaningful and useful GenAI flows.

A post from Vercel CEO Guillermo Rauch stating that common feedback he gives to founders is to remove friction between users and resources.

With one less hurdle and increased access to a tool like Langflow, we’re excited to see what you come up with and where Langflow can best support the needs of your organization. Share your experiences, ideas, and stories with us on 𝕏 or Discord.
