Stefan 🚀

Originally published at wundergraph.com

Announcing WunderGraph Cloud: The future of Serverless API Development is now

Today, we are excited to announce the launch of WunderGraph Cloud!
Here are the key reasons why you should try out our service:

  1. Deploy your API in as little as 13 seconds, enabling more than just quick iteration
  2. You get an Edge Gateway for free, with Caching, Rate Limiting and Analytics close to your users
  3. The source of truth is your Git repository, everything is driven by GitOps and CI/CD
  4. WunderGraph Cloud is Serverless without the limitations of Serverless like long cold starts or request timeouts
  5. At the same time, your API is deployed close to your data sources, allowing for low latency and high throughput
  6. WunderGraph is designed to integrate perfectly with frameworks like NextJS, React, SolidJS, Svelte, Vue and Frontend Platforms like Vercel and Netlify
  7. You get Analytics and Monitoring out of the box, with distributed tracing coming soon
  8. You can use any npm package in your API, including packages that require native dependencies
  9. You define APIs using TypeScript; or integrate existing APIs using OpenAPI, GraphQL, PostgreSQL, and many other data sources
  10. On each deployment, you get a generated Postman collection, so you can easily test and share your API

Let's discuss who WunderGraph Cloud is for, give a quick overview of the architecture, and then dive into the details of the features mentioned above.

Who is WunderGraph for?

WunderGraph Cloud is for developers who want to build and iterate on APIs faster than ever before;
all without having to worry about infrastructure, deployment, monitoring, analytics, coordination, etc.

In contrast to other existing PaaS offerings, WunderGraph Cloud is architected to be the glue without locking you into a specific technology stack.

Bring your own frontend, backend, database, OpenID Connect Provider, S3 bucket, etc., integrate them with WunderGraph Cloud, and enjoy the benefits of a PaaS without the lock-in.

Whenever you want to "eject" the batteries-included package, you can just deploy WunderGraph (Apache 2.0 licensed) on your own infrastructure. It's as simple as running a single Docker container.

WunderGraph Cloud is also ideal for small teams, like startups, who want to move fast, focus on finding product-market fit, and concentrate on customer needs rather than technology choices.

For larger teams and bigger companies, WunderGraph Cloud acts as a self-service platform for developers to build, deploy, and share APIs.

If you're an agency or consultant, there's no faster way to build prototypes for your customers.

But WunderGraph is not just great for prototyping. We've designed WunderGraph so that it can easily be added to existing infrastructure, thanks to its rich data source integrations.

Finally, as you can easily use WunderGraph with your own infrastructure, it's also a great fit for enterprises with strict security and compliance requirements.

If that's enough info for you, you can sign up for free and deploy your first WunderGraph app in under 2 minutes.

Architecture Overview

[Diagram: WunderGraph Cloud architecture overview]

Each project in WunderGraph Cloud maps to a Git repository.
This git repository is usually a monorepo, containing the WunderGraph API code and the frontend code side by side.
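
A typical layout (just an illustration, built from the paths used in the examples later in this post) might look like this:

.
├── .wundergraph/
│   ├── operations/
│   │   └── users/get.ts        # TypeScript and GraphQL operations
│   └── wundergraph.config.ts   # data source configuration
├── pages/
│   └── users/[id].tsx          # NextJS frontend code
└── package.json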

For each project, we deploy a WunderGraph Edge Gateway: a multi-tenant, highly available API Gateway, written in Go and hosted across the globe.

The Edge Gateway is responsible for caching, rate limiting, analytics, and distributed tracing.

We call it "Heimdall", after the Norse god of protection, because it's the first line of defense for your API.
Heimdall was born on the edge of the world, and that's why we call it the Edge Gateway.

He was born with the ability to see everything that happens in the world, just like our Edge Gateway with OpenTelemetry.

Your WunderGraph Application is deployed in one or more regions, depending on your needs.

It will contain the WunderGraph API Gateway (written in Golang) and the WunderGraph Application Server (Fastify),
allowing you to implement APIs and extend the API Gateway using TypeScript.

Alongside the runtime components, we also deploy a dedicated "Builder" per project.

Thanks to various optimizations, we can deploy a new version of your API in as little as 13 seconds.

In the future, we will also support additional stateful services like queues, pub/sub, resilient workflows, etc.,
which need to be "bridged" into the Serverless world.

It's still in development, but we're excited to share more details soon. We call this Bridge between stateful and Serverless services "Bifrost".

Together, Heimdall, Bifrost, the WunderGraph Gateway, and the Application Server form the WunderGraph Cloud Platform.

Ship code in as little as 13 seconds, enabling more than just rapid iteration

Slow continuous integration and deployment (CI/CD) pipelines are the number one reason why teams don't ship code as often as they should.

Being able to deploy a new version of your API in as little as 13 seconds not only enables rapid iteration but also gives you a much faster feedback loop.

Here's a short video on just how fast you can deploy a new version of your API:

[Video: deploying a new version of an API]

In the near future, we will also support Preview Deployments.

For each Pull Request, you will get a fully isolated deployment of your API with a unique domain name, so you can easily share it with your team and customers and test it under production-like conditions.

You can also easily manage one branch per environment, like main, staging, and production;

or even one branch per customer, should you need to make changes for that customer specifically.

But in general, the concept of Staging and Production is not really necessary anymore.

With WunderGraph Cloud, you can create a new "production" deployment on every commit or branch if you want to.

WunderGraph Cloud makes traditional API Gateways obsolete

In addition to the pure speed of deployment, we're enabling another super important use case:

you don't need a traditional API Gateway anymore.

The "old" model of the API Gateway is that you have a cluster of API Gateways in front of your application servers.

When you're building an API, you have to deploy it to your infrastructure first. Then you have to use a GUI, CLI or APIs to configure your API Gateway.

The problem with this approach is that it complicates the development process, adds a lot of operational overhead,
and you cannot easily create an isolated environment.

In such an "old" world, you would have to create a dedicated staging environment for a new experimental feature,
which could possibly break your production environment.
With WunderGraph Cloud, you just create and push a new branch.

Environments are a thing of the past.

Large monolithic API Gateways are a thing of the past.
The future of API Management is to create lightweight, isolated deployments of your API on every commit.

WunderGraph Cloud is inexpensive like Serverless but with the capabilities of a traditional deployment

WunderGraph Cloud is not just fast, it's also very flexible and inexpensive.

What you get when you deploy on WunderGraph Cloud is a fully-functional, isolated container environment.

You can use any npm package, including packages that require native dependencies.
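
For example, here's a minimal sketch of an operation that uses sharp, an npm package with native libvips bindings (the package choice is just an example; the operation API is the same TypeScript operations API shown later in this post):

// .wundergraph/operations/images/resize.ts
import sharp from 'sharp'; // example of an npm package with native dependencies
import { createOperation, z } from '../../generated/wundergraph.factory';

export default createOperation.mutation({
    input: z.object({
        image: z.string(), // base64-encoded source image
        width: z.number(),
    }),
    handler: async ({ input }) => {
        // sharp's native bindings work here because the operation runs in a full container environment
        const resized = await sharp(Buffer.from(input.image, 'base64'))
            .resize(input.width)
            .png()
            .toBuffer();
        return { image: resized.toString('base64') };
    },
});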

Additionally, WunderGraph is able to hibernate your API when you're not using it, and we've minimized the cold start time.

It's about 1 second for a simple API, but we're working on reducing it even further.

This allows us to charge you only for build minutes and data throughput. Simple and straightforward.

The current trend is to move as much as possible to the edge—or so it seems.

Many developers are jumping on the edge computing bandwagon without really understanding what it is, or what it's good for.

Deploy your API close to both your users and the origin

Edge computing is great for performing tasks close to the user that don't require coordination with a backend, for example rate limiting, caching, and analytics.

However, while "the edge" is close to the user, it's not necessarily close to the origin.

Moving all your business logic to the edge introduces a lot of latency between this logic and the database or other services.

Wouldn't it be great if you could move some parts of your API to the edge while keeping the rest of your API close to the origin?

And that's exactly what WunderGraph Cloud allows you to do.
Through OpenTelemetry, you'll be able to see the latency between the edge and the origin, so you can optimize where to execute your business logic.

One Operation might be ideal to execute in Frankfurt because you've got a database there.

But another Operation might be ideal to execute in New York as you're integrating with an API that's hosted there.
Just blindly moving everything to the edge is not the right approach.

We'll make this transparent and help our users to place their business logic where it makes most sense.

Insights from Edge to Origin: Analytics, Monitoring, and Distributed Tracing through OTEL (OpenTelemetry)

We're still working on this feature, but we're excited to share more details soon.

Our goal is to provide you with telemetry data from where your API users are located, all the way to the origins of your API.

It's not enough to just focus on Time to First Byte (TTFB), optimizing server-side rendering (SSR), or reducing the size of your JavaScript frontend.

We'll be able to give you insights into what data is required in which part of the world, so you can optimize your API and decide where to place backend services and databases, etc.

Your database probably shouldn't be located in Frankfurt if you're serving users in New York.

Or perhaps the database should be split into multiple databases, one for each region? We provide you with the insights so that you can build better APIs for your users.

Native Integration with NextJS, Vercel, Netlify, and other frameworks and platforms

In recent years, we've seen a huge shift towards Serverless and Jamstack.

Jam stands for JavaScript, APIs and Markup, and it's a great way to build modern web applications.

WunderGraph Cloud is not just great for building "headless" APIs but perfectly integrates with NextJS, Vercel, Netlify,
SolidJS, and other frameworks and platforms.

Here's a short example of how you can use WunderGraph and NextJS together:

Example API using TypeScript

First, define an API:

// .wundergraph/operations/users/get.ts
import { createOperation, z } from '../../generated/wundergraph.factory';

export default createOperation.query({
    // by specifying the input schema, we're automatically creating a JSON Schema for input validation
    input: z.object({
        id: z.string(),
    }),
    handler: async ({ input }) => {
        // here you can do whatever you want, like calling an external API, a database, or other operations via the internalClient
        return {
            id: input.id,
            name: 'Jens',
            bio: 'Founder of WunderGraph',
        };
    },
});

Then call this API from your NextJS application:

// pages/users/[id].tsx
import { useQuery, withWunderGraph } from '../../components/generated/nextjs';

const Users = (props) => {
    const id = props.query.id;
    const { data } = useQuery({
        operationName: 'users/get',
        input: {
            id,
        },
    });
    return (
        <div style={{ color: 'white' }}>
            <div>{data?.id}</div>
            <div>{data?.name}</div>
            <div>{data?.bio}</div>
        </div>
    );
};

export default withWunderGraph(Users);

WunderGraph Cloud offers integrations, e.g., with Vercel,
so you can easily deploy your NextJS application to Vercel and synchronize it with the API on WunderGraph Cloud.

Example API using GraphQL

If you've got existing APIs that you want to use from within your WunderGraph API, you'll enjoy our concept of the Virtual Graph.

WunderGraph allows you to treat APIs like dependencies.
Introspect one or more APIs you'd like to use, add them to your Virtual Graph, and you can use them like a virtual database.

Step one: define two data sources:

// wundergraph.config.ts
import { configureWunderGraphApplication, introspect } from '@wundergraph/sdk';

const weather = introspect.graphql({
  apiNamespace: 'weather',
  url: 'https://weather-api.wundergraph.com/',
});

const countries = introspect.graphql({
  apiNamespace: 'countries',
  url: 'https://countries.trevorblades.com/',
})

configureWunderGraphApplication({
    apis: [weather, countries],
});

Step two: define a GraphQL Query to join data across the two data sources:

query (
  $continent: String!
  # the @internal directive removes the $capital variable from the public API
  # this means, the user can't set it manually
  # this variable is our JOIN key
  $capital: String! @internal
) {
  countries_countries(filter: { continent: { eq: $continent } }) {
    code
    name
    # using the @export directive, we can export the value of the field `capital` into the JOIN key ($capital)
    capital @export(as: "capital")
    # the _join field returns the type Query!
    # it exists on every object type, so you can use it anywhere in your Query documents
    _join {
      # once we're inside the _join field, we can use the $capital variable to join the weather API
      weather_getCityByName(name: $capital) {
        weather {
          temperature {
            max
          }
          summary {
            title
            description
          }
        }
      }
    }
  }
}

Step three: once we've deployed this API, we can call it like this:

curl http://jens.wundergraph.dev/operations/weather/forContinent?continent=Europe
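
You can also call it from the NextJS integration shown earlier. Here's a minimal sketch, assuming the Query above is saved as the operation weather/forContinent (the same name used in the cURL example):

// pages/weather.tsx
import { useQuery, withWunderGraph } from '../components/generated/nextjs';

const EuropeanWeather = () => {
    const { data } = useQuery({
        operationName: 'weather/forContinent',
        input: { continent: 'Europe' },
    });
    // render the joined countries + weather data
    return <pre>{JSON.stringify(data, null, 2)}</pre>;
};

export default withWunderGraph(EuropeanWeather);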

Get a Postman Collection for each deployment

So far, we've seen how you can integrate WunderGraph with NextJS, or use the API directly using cURL,
but that's not all. For each deployment, we generate a Postman Collection, so you can easily test and share your API using Postman, one of the most popular API testing tools.

Here's a short video showing this feature:

We're also working on supporting other tools like Insomnia, and we'll be exporting OpenAPI 3.0 specifications
for each deployment as well.

Roadmap / What's next for WunderGraph Cloud?

In summary, WunderGraph Cloud drastically improves the developer experience for building APIs, greatly reduces the development time for new APIs, and makes it easy to integrate with existing APIs.

But that's not all there is to it. We're working on a lot of other features, and we're excited to share some of them with you.

Native Integrations with Netlify, PlanetScale, Neon.tech, and more

As you'll see below, we're keen on adding some native capabilities to WunderGraph Cloud, like pub/sub, streams, key-value store, etc.

But we're not intending to build a clone of AWS—quite the opposite!

We believe that diversity is key to a healthy ecosystem, and we want to make it easy to integrate with other services.

That's why we're working on native integrations with Netlify, PlanetScale, Neon.tech, and more.

If you provide a service that you'd like to integrate with WunderGraph Cloud, please reach out to us—we'd love to hear from you.

That said, we don't take the word "integration" lightly.
Integration means that we make sure Previews, Branching, and Deployments on every commit work seamlessly with each integrated service.

For example, if you're creating a new branch in your repository, we'll automatically create a new deployment for that branch.

We will then connect it to the preview environment of your frontend application, connect the backend to the preview
branch of your database, and automatically migrate the database schema to the required version.

This will allow you to test your Pull Requests in a real environment without sacrificing the developer experience.
And once you merge your Pull Request, we'll automatically deploy your changes to production.

The domain will be the only difference between a preview environment and production.

OpenTelemetry: Distributed Tracing & Insights from Edge to Origin

We've teased it above, and we are super excited about the upcoming integration with OpenTelemetry.

Everybody knows that observability is key to a healthy API ecosystem, and we're working on making it easy to integrate with OpenTelemetry.

Thanks to being able to trace requests from edge to origin,
we'll be able to provide you valuable insights into your API ecosystem.

Consequently, you'll easily be able to identify bottlenecks and improve the performance of your API by placing your systems where your users are.

Edge Caching for any API (REST, GraphQL, etc.) with smart invalidation

Since we're already talking about the edge, we're also working on a native caching solution. Edge caching services exist, but they're not easy to integrate with your existing API.

While not trivial, caching is actually the easiest part of the problem. The hard part is invalidation.

What makes WunderGraph Cloud different is that we're not just caching API results on the edge, but we're making cache invalidation a first-class citizen of the WunderGraph application.

One of the most common ways to invalidate a cache is to use surrogate keys. With surrogate keys, you can tag your API responses with a list of keys, and then invalidate the cache for those keys.

However, most services require you to build your own cache invalidation logic and manage surrogate keys yourself.
The WunderGraph Open Source framework, together with WunderGraph Cloud, will allow you to define cache invalidation together with the API implementation.

As we're already allowing you to define your API using TypeScript, or extend GraphQL Operations with TypeScript hooks, we can easily extend the API definition with cache invalidation logic.

This way, caching and cache invalidation are not an afterthought, but first-class citizens of the API.
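
The exact API is still being designed, but to illustrate the idea, here's a purely hypothetical sketch of what defining cache behavior and surrogate keys next to an operation could look like; the cache and surrogateKeys fields below are assumptions, not a released API:

// hypothetical sketch - the final cache invalidation API may look different
import { createOperation, z } from '../../generated/wundergraph.factory';

export default createOperation.query({
    input: z.object({ id: z.string() }),
    // hypothetical: tag the cached response with surrogate keys on the edge
    cache: {
        maxAge: 60,
        surrogateKeys: ({ input }) => [`user:${input.id}`],
    },
    handler: async ({ input }) => {
        return { id: input.id, name: 'Jens' };
    },
});

// hypothetical: a mutation that changes the user would then invalidate the key,
// e.g. ctx.cache.invalidate([`user:${input.id}`]), and the edge cache is purged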

Our philosophy is to look at building APIs as a holistic problem and find solutions that address the root cause,
not just the symptoms.

Pub/Sub & Streams: Realtime APIs made easy

One of the most requested features is Pub/Sub and Streams.
A lot of people keep asking for a scalable solution to build realtime APIs that are not dependent on polling.

With native support for Pub/Sub and Streams, we'll be able to provide you with a solution that's infinitely scalable,
globally distributed, and fault-tolerant.

Any data source can easily turn into a pub/sub event system.
If pub/sub is not enough for you, you can also create persistent streams similar to Kafka.

You can then easily build event-driven architectures with asynchronous workflows, which leads us to the next feature...

WunderGraph Machines: fault-tolerant, infinitely scalable, long-running operations with retries, timeouts, etc.

WunderGraph Machines are a new concept that we're working on -
a combination of a resilient state machine and a serverless function. You can have any number of machines; they can be running anywhere in the world; and they can sleep for as long as you want.

You can persist the state of your machine. You can send it events. You can use CRON expressions to schedule it. You can query and subscribe to its state.

Think of WunderGraph Machines as millions of tiny persistent state machines.

Let's take an example of how Machines could be used to simplify your Architecture.

Imagine we're building Twitter. In a regular architecture, you'd have a database that stores all the users and tweets.
You can easily scale the database in the beginning, until you run into performance issues because of popular users
with millions of followers.

At that point, you'd have to think about efficient and resilient ways to send a tweet to 100M users.

With WunderGraph Machines, you can build this application in a scalable way from the very beginning.

When a user signs up, you create a Machine for them.
This Machine is responsible for managing the state of that user.

For instance, it can store the user's profile picture, their bio, their followers, etc.

If we want to follow a user, we can send an event to the Machine of the user that we want to follow.
If we want to see the tweets of a user, we can send a query to their Machine.

If a user with 100M followers tweets something, we need to commence a process that sends this tweet to all 100M followers.

This "process" is just another Machine that implements the algorithm to put the tweet into the followers' timelines.
So, instead of having to think about how to scale the database to 100M users, we can just create a Machine for each user and let them do their work asynchronously.

Machines will integrate seamlessly into the "Virtual Graph", so you can use queries, mutations, and subscriptions to interact with them, just like tiny GraphQL APIs.

A simple programming model that asynchronously scales with your business.
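
Machines aren't released yet, so the following is only a hypothetical sketch of how the Twitter example above could feel in code; every name in it is an assumption, not a real API:

// hypothetical sketch - WunderGraph Machines are still in development
import { createMachine } from '../../generated/wundergraph.machines'; // hypothetical import

export default createMachine({
    // one instance of this machine exists per user
    initialState: { followers: [] as string[], tweets: [] as string[] },
    events: {
        follow: (state, event: { followerId: string }) => ({
            ...state,
            followers: [...state.followers, event.followerId],
        }),
        tweet: async (state, event: { text: string }, ctx) => {
            // fan the tweet out asynchronously to every follower's timeline machine
            for (const follower of state.followers) {
                await ctx.send(`timeline/${follower}`, { type: 'addTweet', text: event.text });
            }
            return { ...state, tweets: [...state.tweets, event.text] };
        },
    },
});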

Another example of how Machines could be used is to build a scalable, fault-tolerant, and resilient integration
with a third-party API.

Let's say that we're building an integration using the APIs of fly.io, which we use internally.

For example, when a new branch is created, we want to create a new fly application, build the application, and deploy it.
Each step could separately take a few minutes, and we have to make many API calls to facilitate this process.

During the deployment, all of these API calls can fail; the build can fail, etc., and we have to handle all of these cases.

Machines will not only make it easy to build this integration in a resilient way without too much boilerplate;
you will also be able to easily "subscribe" to the state of the Machine, so that you can just as easily build a UI that shows the progress of the deployment.

Currently, we're using Temporal to handle deployments, but the setup is quite complex and requires a lot of in-depth
knowledge of Temporal. What if we could reduce the complexity of resilient workflows to a single TypeScript file?

And that's what we're working on.

K/V Store: Infinitely scalable, low-latency, and strongly consistent key-value store

Machines are great for building long-running operations.
But we need a way to store state that's not tied to a specific process because our machine might fail at any time, and we don't want to lose the state.

For that reason, we're working on a native key-value store that's infinitely scalable, globally distributed, and fault-tolerant.

Machines can run in any region, and so you'll be able to store the state of your machines in the same region as well.

That said, K/V stores are not only useful for machines.
You can also use them from any other part of your application, allowing you to manage state in a globally distributed way.

WunderGraph Hub: GitHub for APIs—share, discover and reuse APIs with collaboration features

All of this is great and helps you to build better APIs, ship and iterate faster, and build more resilient and scalable applications. But we also want to make it easy to share and reuse APIs.

That's why we're working on WunderGraph Hub, a GitHub for APIs.

Everything we did so far with WunderGraph contributes to the vision of WunderGraph Hub.

Unify and normalize APIs across heterogeneous data sources. But what then?

WunderGraph Hub will allow you to share APIs and compositions of APIs with your team, your company, or the world.

Sharing an API will be as simple as publishing a package to npm.

On the Hub, you can discover APIs, and you can easily reuse them in your own applications. On the API consumer side, "importing" an API is as simple as installing a package from npm.

That's why we call ourselves the "package manager for APIs" sometimes.

Conclusion

It's exciting times for the API ecosystem.
No matter whether it's WunderGraph or someone else,
we're pushing the boundaries of what's possible with APIs and helping developers to build better APIs faster.

We're already seeing the beginning of exponential growth in the number of APIs.

For every problem, there will be an API (or more).
And we're excited to fuel this growth while keeping APIs manageable because teams of developers can't scale at the same rate.

...And who knows? Maybe at some point, all of this will result in AI technologies that enable non-technical people
to use APIs in ways we can't even imagine today.
