My name is Sebastian, and I have been working as a TypeScript & JavaScript developer for many years, primarily as a freelancer.
Throughout my ...
Very interesting, we've adopted a similar approach with an in-house framework. We use an events system that can cross servers and go all the way to the front end if needed. Our modules can all be loaded into one box for the developer, or spread across lambdas and multiple production servers, or (as it's Node) across multiple processes on a single server.
The events system also provides hooks, which means that, with no changes to the main code, we can implement client-specific functionality or purchasable modules that alter functionality or augment or change the UI.
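Roughly, the hook idea boils down to something like this (a generic sketch with invented names, not our actual framework code):

```typescript
// Generic hook/plugin sketch: the main code exposes named hook points, and
// optional modules register handlers that can augment or change behaviour
// without the main code being touched.

type HookHandler = (payload: unknown) => unknown | Promise<unknown>

const hooks = new Map<string, HookHandler[]>()

// Optional / purchasable modules call this at load time.
export function registerHook(name: string, handler: HookHandler): void {
  const list = hooks.get(name) ?? []
  list.push(handler)
  hooks.set(name, list)
}

// The main code only knows the hook point's name, not the modules behind it.
export async function applyHooks(name: string, payload: unknown): Promise<unknown> {
  let result = payload
  for (const handler of hooks.get(name) ?? []) {
    result = await handler(result)
  }
  return result
}

// e.g. a client-specific module: registerHook('invoice.render', addClientLogo)
```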
For me this is the best approach for growing teams: we very rarely have merge conflicts and can easily implement PR environments that behave the same as multi-lambda/server distributions.
Excited to dive into your work and see how you've gone about it.
Thanks for your comment.
It is very exciting and a relief to hear that this approach is not totally rubbish.
I've also implemented ideas and parts from event-driven design, based on my experience on a CQRS/event-driven architecture project.
For example, each message can have an event name attached, and subscriptions can subscribe to certain events.
This should allow implementing an event-driven architecture easily.
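A minimal, framework-agnostic sketch of that idea (the names are just for illustration, not the actual PURISTA API):

```typescript
// Each message carries an event name; subscriptions declare which event
// names they care about and are invoked for matching messages.

type Message = { eventName: string; payload: unknown }
type MessageHandler = (msg: Message) => Promise<void>

const subscriptions: { eventName: string; handler: MessageHandler }[] = []

export function subscribe(eventName: string, handler: MessageHandler): void {
  subscriptions.push({ eventName, handler })
}

// The broker / event bridge dispatches each message to all matching subscriptions.
export async function dispatch(msg: Message): Promise<void> {
  await Promise.all(
    subscriptions
      .filter((s) => s.eventName === msg.eventName)
      .map((s) => s.handler(msg)),
  )
}

// subscribe('userSignedUp', async (msg) => { /* send welcome email */ })
```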
I'm a bit in love with this event-driven idea in general.
I prefer this event-driven or message-based approach with some broker over regular microservices via HTTP for one big reason:
When working with HTTP-based microservices, it can quickly become hard to keep the separation clean and to handle errors correctly.
It starts with very simple things - to take the example from the article:
If you have a User service with some sign-up, and you need to send an email for new users, normally the User service would call the Email service.
But what happens if the email provider is unreachable and the Email service is failing? How do you handle this, and who is responsible for retries and so on? The User service, which might get re-deployed before the Email service is working correctly again?
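With a broker in between, the User service only publishes an event and is done; the retry question moves to the broker/consumer side. A simplified sketch with invented names:

```typescript
// Simplified sketch: the User service publishes an event and is done; the
// Email service consumes it. Redelivery/retries are the broker's job
// (or the consumer's), not the User service's.

interface Broker {
  publish(eventName: string, payload: unknown): Promise<void>
  subscribe(eventName: string, handler: (payload: unknown) => Promise<void>): void
}

// User service: no knowledge of the Email service at all.
export async function signUp(broker: Broker, email: string) {
  // ...create the user in the database...
  await broker.publish('userSignedUp', { email })
}

// Email service: if sending fails, it throws and the broker re-delivers
// the message later (or routes it to a dead-letter queue).
export function registerEmailHandlers(broker: Broker) {
  broker.subscribe('userSignedUp', async (payload) => {
    const { email } = payload as { email: string }
    await sendWelcomeMail(email) // throws if the provider is unreachable
  })
}

declare function sendWelcomeMail(email: string): Promise<void>
```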
PURISTA also provides hooks, but they are more for transforming inputs & outputs or for separating something like authentication/authorization from the business code.
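The shape is roughly like this (simplified, with invented names rather than the exact PURISTA hook API):

```typescript
// Transformations and auth checks wrap the business function, so the
// business code itself stays free of those concerns.

type Handler<I, O> = (input: I) => Promise<O>

type Hooks<I, O> = {
  transformInput?: (raw: unknown) => Promise<I>
  beforeGuard?: (input: I) => Promise<void> // e.g. authentication/authorization
  transformOutput?: (output: O) => Promise<unknown>
}

export function withHooks<I, O>(handler: Handler<I, O>, hooks: Hooks<I, O>) {
  return async (raw: unknown): Promise<unknown> => {
    const input = hooks.transformInput ? await hooks.transformInput(raw) : (raw as I)
    if (hooks.beforeGuard) {
      await hooks.beforeGuard(input) // reject before any business code runs
    }
    const output = await handler(input) // plain business logic
    return hooks.transformOutput ? await hooks.transformOutput(output) : output
  }
}
```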
So I have two thoughts on the connectivity point, and as a framework author I think you only have one real choice, but I guess I have two!
The first time I implemented such an architecture was about 5 years ago; I ended up building a lightweight message queue (it used Redis and MySQL as a job queue). I had the basic principle of jobs that would be flagged as complete or in error, and then I had much lighter-weight plugins that actually raised these jobs in response to the events, before the raiser completed. Basically what I'm saying is that the function raising an event would encounter an exception if the job wasn't raised (i.e. any plugin event handler threw an exception). All my API calls were then jobs (super helpful for debugging, and really fast because of Redis etc.).
So my job queue runner basically pulled an event from the queue and offered it up as an event; if anything could handle it, then it ran. This way some servers could be dedicated just to certain heavy lifting, simply by configuring them to only run that code, leaving other servers available for the quick-turnaround stuff. In other words, all control was inverted. There was then a monitoring plugin that flagged whether jobs couldn't run (nothing could handle them for some reason), had continually failed to execute, etc.
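Heavily simplified, the runner loop amounted to something like this (names invented for the sketch, not the original code):

```typescript
// Pull a job, offer it to whatever handlers this box is configured to run,
// and flag the result. A box that only registers the "heavy" handlers will
// only ever pick up that work; monitoring can flag jobs that never run.

type Job = { id: string; name: string; payload: unknown }

interface JobQueue {
  pull(): Promise<Job | undefined>                                   // e.g. backed by Redis
  requeue(id: string): Promise<void>                                 // leave it for another server
  flag(id: string, status: 'complete' | 'error', detail?: string): Promise<void>
}

type JobHandler = (job: Job) => Promise<void>

const handlers = new Map<string, JobHandler>() // configured per server

export async function runOnce(queue: JobQueue): Promise<void> {
  const job = await queue.pull()
  if (!job) return
  const handler = handlers.get(job.name)
  if (!handler) {
    // This box isn't configured for this job; put it back for another server.
    await queue.requeue(job.id)
    return
  }
  try {
    await handler(job)
    await queue.flag(job.id, 'complete')
  } catch (err) {
    await queue.flag(job.id, 'error', String(err))
  }
}
```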
My most recent version of this is much lighter. I use GraphQL as an endpoint and allow code to be flagged for execution on other boxes using a higher-order function (HOF). The HOF also gets used to identify the code to be loaded on Lambdas or in sub-processes. The DX is still "I call a function" for the caller, but the developer of the function can choose to run it elsewhere.
This doesn't have all the retry/job management stuff my old framework had - simply because the requirements just aren't there for it on the new project - but the core principles and DX are the same.
A couple of years ago I wrote an article as a briefing for my team on the principles.
A SOLID framework - Inversion of Control Pt 1 (Mike Talbot, Jun 19 '20)
And here's an example of my new system's HOF: the function wrapped by makeLambda() will run on a lambda or a sub-process, but otherwise there is no "thinking" by the developer to make this happen. Under the surface this is using events to pass all of the information necessary.
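A minimal sketch of the shape of such a wrapper (the signature and the event transport are assumed here, not the real implementation):

```typescript
// Wrap a plain async function so callers still "just call a function",
// while the wrapper decides whether to run it in-process or dispatch it
// as an event to a lambda / sub-process and await the reply.

type AsyncFn<I, O> = (input: I) => Promise<O>

// Hypothetical event transport used by the wrapper.
interface EventTransport {
  request<I, O>(name: string, payload: I): Promise<O>
}

export function makeLambda<I, O>(
  name: string,
  fn: AsyncFn<I, O>,
  transport: EventTransport,
  runRemotely = process.env.RUN_REMOTE === 'true',
): AsyncFn<I, O> {
  return async (input: I) => {
    if (!runRemotely) {
      // Developer / single-box mode: just run it in-process.
      return fn(input)
    }
    // Distributed mode: forward the call as an event and await the reply.
    return transport.request<I, O>(name, input)
  }
}

// Caller-side DX stays "I call a function":
// const resizeImage = makeLambda('image.resize', resizeImageImpl, transport)
// const result = await resizeImage({ url, width: 800 })
```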
What you're proposing is basically a heavyweight version of hexagonal architecture.
What if I want, in the future, to move away from the Purista framework?
In my opinion there is no framework able to solve this problem in a universal way, because, by design, you need to use a framework and deal with it. Of course, a well-designed and focused framework, as PURISTA looks to be, could help.
But in the end, in order to be really hands-free in making or deferring decisions and be minimal, some points should be taken into account:
Those are prescriptions that are beyond any framework (and language).
WDYT?
Thanks for your thoughts.
It is quite interesting for me to see that you mentioned hexagonal architecture.
About my background:
I was working on projects where the buzzwords were event-driven architecture, domain-driven design and functional programming.
So, I did not have the term "hexagonal architecture" in mind at any time, but as they all share a lot of ideas & concepts, it's totally right to call PURISTA that.
Regarding the question of how to move away from PURISTA in the future:
PURISTA at its core does not really provide a big set of features which you use to implement the business logic.
The real framework-specific functions are the builders, but they only orchestrate and organize things. If you look into them, they are stupidly simple. Only getting the TypeScript types right is sometimes not so straightforward.
Also, there is no fancy stuff like decorators or similar things which couple your code directly to some framework functionality.
It is more about defining interfaces, the structure and how to orchestrate things. There are also only a few dependencies in the core package (zod schema, OpenTelemetry SDK, pino logger).
Because of this, your business code stays as clean and plain as possible, and you can use your preferred tools & packages for implementing the business logic.
The framework provides some packages which contain ready-to-use implementations of those defined interfaces.
Moving away from PURISTA is possible in general, as the business code is isolated.
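As an illustration of that separation (sketch only, not the real builder API): the business logic is a plain function, and the framework-specific part is just thin wiring around it.

```typescript
// business/signUpUser.ts - plain TypeScript, no framework imports at all.
export type SignUpInput = { email: string; name: string }
export type SignUpResult = { userId: string }

export async function signUpUser(
  input: SignUpInput,
  deps: { saveUser: (user: SignUpInput) => Promise<string> },
): Promise<SignUpResult> {
  const userId = await deps.saveUser(input)
  return { userId }
}

// Service wiring - the only layer that knows about the framework; if you ever
// move away from it, only this thin part needs to be rewritten.
// (pseudo-wiring for illustration, not the real builder API)
//
// userServiceBuilder
//   .addCommand('signUp', (input) => signUpUser(input, { saveUser }))
```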
But the more interesting question is why you would do this and what the "new thing" would be, because that most likely sounds more like changing the architecture.
About the last sentence in your comment:
You're right, and we are on the same page here.
Tbh, I'm also always struggling with the word "framework", as PURISTA is more a mindset, patterns, structuring, organization and orchestration of things.
Sometimes I fear that people expect more framework-specific features if I call it a framework. Something like "oh, I only need to call this framework function instead of writing 50 lines of business code on my own".
You know what I mean?
Yes, that's hexagonal.
That's a question for the infra guys that have cost reduction as a KPI and for engineering managers who say "we must shift onto these new fancy technologies".
Jokes apart, the point is that writing code as ready-to-fly-away from the framework lets you identify dangerous coupling points with the framework (as you mentioned) and helps in writing more decoupled and clean code. "It's more about the journey than the destination."
I understand perfectly. And this ends in projects with 200 dependencies and misused frameworks. At scale, having 200 dependencies and a framework used in the wrong way ends up being unmaintainable.
Being careful in choosing the right framework, leveraging its pure power to address the implementation we need, and balancing library adoption is key. "Mindset, patterns, structuring, organization and orchestration of things" are fundamental bricks to guarantee long-term maintainability and performance.
I'd be interested in learning more about the testing story of PURISTA. Looking at the getting started guide, the only reference is a Jest config along with test files. These primarily serve the unit-testing side of the business logic. In the light of a microservice, event-based architecture, integration testing becomes much more important, though.
So what I'd really like to see is a sample of the User and Email services, and how one use case like the sign-up can be properly integration tested with both services working together.
Integration tests are currently not part of the framework itself.
Finding some general approach is difficult - especially because PURISTA is highly modular and things like the communication method (MQTT, AMQP...) are not fixed.
You can find a - tbh simple & stupid - example of how to test it in the repo.
There are some basic integration tests for the event bridges.
github.com/sebastianwessel/purista...
This will work in mono-repos, but as soon as you have multi-repos, it will not be possible this way.
But, some smart people are working on some interesting stuff:
github.com/traceloop/jest-opentele...
I haven't tried it out yet, but on first look it's promising and might be an option, as PURISTA provides OpenTelemetry out of the box.
So your example is exactly what I was looking for: creating a fake queue, ramping up a service and performing a command. I'd really add this to your docs as an example of how to run integration tests, even if, as you said, the use case is narrowed down to specific constraints. It still gives a good understanding of how to approach it.
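In outline, such a test boils down to something like this (helper names are illustrative, not the exact PURISTA test API):

```typescript
// Wire the services to an in-memory event bridge ("fake queue"), invoke a
// command, and assert on the resulting behaviour.

import { describe, expect, it } from '@jest/globals'

describe('sign-up flow', () => {
  it('sends a welcome mail when a user signs up', async () => {
    const bridge = createInMemoryEventBridge()
    const sentMails: string[] = []

    await startUserService(bridge)
    await startEmailService(bridge, { sendMail: async (to) => { sentMails.push(to) } })

    await bridge.invokeCommand('userService', 'signUp', { email: 'jane@example.com' })
    await bridge.drain() // wait until all events have been processed

    expect(sentMails).toEqual(['jane@example.com'])
  })
})

// Helpers assumed for the sketch:
declare function createInMemoryEventBridge(): {
  invokeCommand(service: string, command: string, payload: unknown): Promise<unknown>
  drain(): Promise<void>
}
declare function startUserService(bridge: unknown): Promise<void>
declare function startEmailService(
  bridge: unknown,
  deps: { sendMail: (to: string) => Promise<void> },
): Promise<void>
```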
After receiving feedback on this article from Reddit, I decided to update it with the assistance of ChatGPT in order to enhance its readability and, hopefully, improve its overall comprehensibility.
Totally agree with your statements. Trying to make it easier by building your own framework is very brave, I wish you all the best.
Thanks!
Even if it fails, there are a lot of learnings and experiences, and I enjoy doing it.
So it is not a waste of time for me in any case.
You keep referencing AWS-specific services when talking about the cloud (you use Lambda as a reference for cloud-based container structures, for example).
Is this meant to be AWS specific or is it also useful for other cloud vendors?
It's not AWS-specific. I used AWS more as a reference/example, as AWS Lambda is a widely known synonym for function-as-a-service.
Also, AWS provides a whole stack which you can use - like AWS EventBridge.
On a high level, you only need some runtime for the functions and something which can act as a message broker for communication.
The business implementation is decoupled from the choice of the solution you use for runtime and broker.
That's the main idea and benefit of this framework.
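Conceptually it comes down to something like this (a sketch with invented names, not the real API):

```typescript
// The same service definition runs unchanged; only the event bridge (the
// broker binding) and the runtime entry point differ per environment.

interface EventBridge { start(): Promise<void> }
interface Service { start(bridge: EventBridge): Promise<void> }

declare const userService: Service
declare function createInMemoryEventBridge(): EventBridge                  // local dev, single process
declare function createMqttEventBridge(opts: { url: string }): EventBridge // production broker

async function main() {
  const bridge =
    process.env.NODE_ENV === 'production'
      ? createMqttEventBridge({ url: 'mqtt://broker.internal:1883' })
      : createInMemoryEventBridge()

  // The business implementation behind userService is untouched either way.
  await userService.start(bridge)
}

main()
```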