DEV Community

Jesse Warden

Posted on • Edited on • Originally published at jessewarden.com

How to use Dependency Injection in Functional Programming

Dependency Injection is a technique to make the classes in Object Oriented Programming easier to test and configure. Instead of a class instantiating its own concrete implementations, it has them injected into it. In Functional Programming, that's a fancy way of saying "calling a function with parameters". However, these parameters aren't data; they're the same type of dependencies you'd use in OOP: some module or function that performs a side effect, passed in so your function is easier to test.

In this article we'll show you how OOP uses DI to make classes easier to test, then we'll show the same technique in FP, using JavaScript for both implementations. The code is on GitHub. After reading this you'll understand how to make your FP code easier to test and configure, just like you do in OOP-style coding.

Mark Seemann did a conference talk about using Partial Application to do Dependency Injection in Functional Programming.

I loved his video. Still, I felt that if you're new, you don't need to know how partial application works in Functional Programming to understand how to do Dependency Injection. It really is just passing arguments to functions. Once you learn that, then you can go learn about partial application and continue to use your Dependency Injection skills in more advanced ways.

If you already know what Dependency Injection is and how to use it in Object Oriented Programming, you can skip to the Functional Programming explanation.

What is Dependency Injection?

Dependency Injection is a technique where you instantiate classes that conform to an interface, then pass them into the constructor of another class that needs them. A dependency is a class that typically does some complex side-effect work, such as connecting to a database, getting some data, and parsing the result. The technique is also called Inversion of Control, because a DI container manages creating all these classes and giving them to whoever needs them, instead of you, the developer, making a parent class and hard-coding the composed classes inside it: the computer does it instead of you, and dependencies are given to the class instead of the class making them itself. You just give the DI container some configuration in the form of "this class needs this interface" (à la TypeScript). In tests, the DI container gives the class a stub/mock/fake implementation. When your program runs for real, the DI container gives it the real, concrete implementation; same code, but different dependencies depending on whether you're running in test mode or real mode.

DI helps with many things, but for the focus of this article, it makes classes easier to test. Classes can still abstract and encapsulate their behavior, yet remain open to modifying and configuring how they work without you having to change the class itself.
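To make the container idea concrete, here's a toy sketch of a hand-rolled DI container in JavaScript. All the names here (Container, register, resolve, the reader classes) are hypothetical, not a real library; a real container would resolve by interface and handle nested dependencies.

```javascript
// A toy DI container sketch: code asks the container for a dependency by
// name, and the container decides which implementation to hand back.
class Container {
    #registry = new Map()

    // Associate a name with a factory that produces the dependency.
    register(name, factory) {
        this.#registry.set(name, factory)
    }

    // Build the dependency on demand.
    resolve(name) {
        return this.#registry.get(name)()
    }
}

class RealJSONReader {
    getConfigJSON() { return { env: 'production' } }
}

class JSONReaderStub {
    getConfigJSON() { return { env: 'qa' } }
}

const container = new Container()
// In a test run we register the stub; the production wiring would
// register RealJSONReader instead. Consumers never know the difference.
container.register('jsonReader', () => new JSONReaderStub())

const reader = container.resolve('jsonReader')
```

The class that needs a `jsonReader` never constructs one itself; swapping test and production behavior is a one-line change in the wiring.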

The Un-Testable OOP Problem

Classes are built to encapsulate state. State could be an internal variable, a bunch of variables, database connections, or many things happening at once. This is typically a good thing in the OOP world: you abstract away complexity so those who use your class have a simple way to interact with and control that complexity.

There are 2 challenges with that:

  1. How do you know it actually works?
  2. Do you actually feel like it’s a simple design that you like to use?

For the first, we use some type of integration test: using real data and real connections, or even functional tests, knowing that piece of code will be tested alongside the rest. This lets us know in an automated way that it works now and, if we change things later, that it continues to work then.

For the second, we try to use a test-first methodology like Test Driven Development to start consuming our class' API before it even exists, and design what we like. Once we have something we might like, we make the class work with the bare minimum of code. Later, we can refactor and tweak the design to our heart's content… or some deadline.

Let's not do that. Let's show a class that was just built to work, without being testable first and with no dependency injection. Here is one called Config that reads what environment we're in, QA or Production, based on reading a JSON file. This is a common need in server and client applications, where you use a configuration file or environment variables to tell your application what URLs to use for REST APIs. In QA you'll use one set of URLs, and in Production a different set. This allows your code to work in multiple environments just by configuring it.

import JSONReader from './JSONReader.mjs'

class Config {

Notice it imports a JSONReader class whose sole job is to read a JSON file from disk, parse it, and give back the parsed JSON Object. The only public method in this class is one that takes no parameters and gives back a URL to use for QA or Production:

getServerURL() {
    let environment = this.#getEnvironment()
    let url = this.#getURLFromEnvironment(environment)
    return url
}

The getEnvironment private method abstracts away how that works; we just want to know: is it "qa" or "production"? Once we have one of those two, we can call the getURLFromEnvironment private method and it'll give us the correct URL based on the environment.

If we look at the private getEnvironment method:

#getEnvironment() {
    return new JSONReader('config.json')
        .getConfigJSON()
        .env
}

We see it’s using that concrete implementation of the JSON reader to read a file, and pluck off the “env” variable which will be “qa” or “production”.

The getURLFromEnvironment private method is our only pure function here:

#getURLFromEnvironment(environment) {
    if(environment === 'production') {
        return 'http://server.com'
    } else {
        return 'http://localhost:8000'
    }
}

If you give it a string, it’ll give you a string back. There are no side effects; this is our only logic in the class.

So unit testing this class in a whitebox manner is hard: the only way you can configure it is by changing a "config.json" file on disk, relative to where the class is. That's not really configurable, and it requires disk access. Disk access isn't necessarily slow nowadays, but it's a side effect that has to be set up for the class to work, so it's not fun to work with.
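To see just how un-fun that is, here's a condensed sketch of the hard-wired class and what a "unit" test is forced to do (the condensed class and the test choreography are illustrative, not the exact file from the repo):

```javascript
import fs from 'fs'

// A condensed sketch of the hard-wired Config: the file path and the fs
// dependency are baked in, so there is no seam to swap them out.
class HardWiredConfig {
    getServerURL() {
        const { env } = JSON.parse(fs.readFileSync('config.json'))
        return env === 'production' ? 'http://server.com' : 'http://localhost:8000'
    }
}

// The only way to "configure" a test is to plant a real file on disk first...
fs.writeFileSync('config.json', JSON.stringify({ env: 'qa' }))
const url = new HardWiredConfig().getServerURL()
// ...and remember to clean up, or other tests will see our file.
fs.unlinkSync('config.json')
```

Every test run now touches the real file system, and tests can trample each other's config files; that's the pain DI removes.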

The Testable OOP Class

Let's slightly modify this class to be easier to configure; namely, we'll make the JSONReader that does the main side effect a constructor parameter instead.

class Config {

    #JSONReader

    constructor(JSONReader) {
        this.#JSONReader = JSONReader
    }

Now, we pass our JSONReader as a parameter when we instantiate the class. This means we can pass a stub in our tests, and a real implementation in our integration tests and in our application, all while using the same class. None of the implementation details change; instead of using the concrete implementation, our private methods now just use the private internal instance variable:

#getEnvironment() {
    return this.#JSONReader
        .getConfigJSON()
        .env
}
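Putting the fragments together, the whole testable class looks roughly like this (a sketch assembled from the pieces above, not the exact file from the repo):

```javascript
// The testable Config: the side-effect dependency comes in the constructor.
class Config {
    #JSONReader

    constructor(JSONReader) {
        this.#JSONReader = JSONReader
    }

    // Public API: no parameters, gives back the URL for our environment.
    getServerURL() {
        const environment = this.#getEnvironment()
        return this.#getURLFromEnvironment(environment)
    }

    // Side-effect work is delegated to whatever reader was injected.
    #getEnvironment() {
        return this.#JSONReader.getConfigJSON().env
    }

    // The only pure logic in the class.
    #getURLFromEnvironment(environment) {
        if(environment === 'production') {
            return 'http://server.com'
        } else {
            return 'http://localhost:8000'
        }
    }
}
```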

Great! Now we can write a unit test that stubs the disk and JSON-parsing side effects into something deterministic and fast. Here's our stub:

class JSONReaderStub {
    getConfigJSON() {
        return { env: 'qa' }
    }
}

This class will always work and always return QA. To set up our Config class, we'll first instantiate our stub, then our Config class, and pass our stub into the constructor:

let jsonReaderStub = new JSONReaderStub()
let config = new Config(jsonReaderStub)

This implementation change makes the Config class configurable. We can do the same thing for the unhappy paths as well: the file doesn't exist, we don't have permission to read it, the file reads but fails to parse as JSON, it parses as valid JSON but the environment is missing, or the environment is there but is neither QA nor Production… each of these is just a different stub passed in, forcing Config to handle those code paths.
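Those unhappy-path stubs are just as small as the happy one. Here's a sketch (the stub names are hypothetical, one per failure mode described above):

```javascript
// Simulates the file being missing or unreadable: the reader throws.
class JSONReaderMissingFileStub {
    getConfigJSON() {
        throw new Error('ENOENT: no such file or directory')
    }
}

// Simulates valid JSON with no "env" key in it.
class JSONReaderMissingEnvStub {
    getConfigJSON() {
        return {}
    }
}

// Simulates an environment that is neither "qa" nor "production".
class JSONReaderUnknownEnvStub {
    getConfigJSON() {
        return { env: 'staging' }
    }
}
```

Each stub is passed to `new Config(...)` exactly like the happy-path stub, forcing Config down a different error-handling branch without ever touching the disk.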

Now, we can test the functionality with confidence:

let url = config.getServerURL()
expect(url).to.equal('http://localhost:8000')

Integration Test

Your integration tests, used to validate that your Config class can successfully read a config JSON file and glean the correct HTTP URL based on the environment, require a real JSON file reader. Our JSONReader class follows the same practice of making itself configurable:

class JSONReader {

    #FileReader
    #configFileName

    constructor(FileReader, configFileName) {

Which means that in the unit tests, that FileReader would be a stub, and in our integration tests it would be real. We do that by using the injected dependency, stored in a private variable:

getConfigJSON() {
    return JSON.parse(this.#FileReader.readFileSync(this.#configFileName))
}

This means we can configure it to work for real in the integration tests with our Config. We’ll make it real:

let jsonReader = new JSONReader(fs, './test/integration/qa-config.json')
let config = new Config(jsonReader)

fs is the Node.js module that reads and writes files. The file path to qa-config.json points to a real file we have set up to verify this class can read it and give us the correct URL. The test looks the same… because it is the same; the only difference is that the dependencies are real instead of stubs:

let url = config.getServerURL()
expect(url).to.equal('http://localhost:8000')

Functional Programming Config

Doing the equivalent functionality in Functional Programming requires a function to read the file, parse it, snag off the environment, and determine which URL to return based on that environment. We do that by making each of those steps a function and composing them together. We're using the Stage 2 JavaScript pipeline operator below, in F# style:

import fs from 'fs'

const getServerURL = fileName =>
    fileName
    |> fs.readFileSync
    |> JSON.parse
    |> ( json => json.env )
    |> ( environment => {
        if(environment === 'production') {
            return 'http://server.com'
        } else {
            return 'http://localhost:8000'
        }
    })

Before we proceed: if you're uncomfortable with, or have never seen, the pipeline operator, just think of it as a synchronous way to chain functions together, just like you do with Promises. Here is the Promise version of the code:

const getServerURL = fileName =>
    Promise.resolve( fileName )
    .then( fs.readFileSync )
    .then( JSON.parse )
    .then( json => json.env )
    .then( environment => {
        if(environment === 'production') {
            return 'http://server.com'
        } else {
            return 'http://localhost:8000'
        }
    } )

Right off the bat, the FP code has the same problem as the OOP code: the reading-from-disk and JSON-parsing side effects are encapsulated away. The fs module is imported up top as a concrete implementation and used inside the function closure. The only way to test this function is to muck around with config files; lamesauce.

Let’s refactor it like we did with the OOP code to have the dependency be injectable; aka able to be passed in as a function parameter:

const getServerURL = (readFile, fileName) =>
    fileName
    |> readFile
    |> JSON.parse
    |> ( json => json.env )
    |> ( environment => {
        if(environment === 'production') {
            return 'http://server.com'
        } else {
            return 'http://localhost:8000'
        }
    })

Nice; now readFile, formerly the concrete implementation fs.readFileSync, can be passed in as a parameter. This means the function can be configured in multiple ways, but two important ones: a stub readFile for the unit test, and a real readFile for the integration test. Here's the unit test stub:

const readFileStub = () => `{ "env": "qa" }`

It's guaranteed to work, JSON.parse will always succeed with it, and our function should in theory always return our QA URL; let's test:

const url = getServerURL(readFileStub, 'some config.json')
expect(url).to.equal('http://localhost:8000')

Our integration test is much the same:

const url = getServerURL(fs.readFileSync, './test/integration/qa-config.json')

Instead of our stub, it's just the real fs module using the real readFileSync method.
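The unhappy paths from the OOP section work the same way here: each failure mode is just a different function passed in. A runnable sketch, using a plain-function version of getServerURL since the pipeline operator isn't shipped yet (the stub names are hypothetical):

```javascript
// Plain-function equivalent of the pipeline version above.
const getServerURL = (readFile, fileName) => {
    const environment = JSON.parse(readFile(fileName)).env
    return environment === 'production'
        ? 'http://server.com'
        : 'http://localhost:8000'
}

// Unhappy-path stubs: each one is just a function.
const readFileMissingStub = () => { throw new Error('ENOENT: file not found') }
const readFileBadJSONStub = () => 'this is not JSON'
const readFileNoEnvStub   = () => '{}'

// The first stub throws, the second makes JSON.parse throw a SyntaxError,
// and the third yields an undefined environment; each exercises a different
// code path with zero disk access.
```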

Conclusions

Dependency Injection, specifically class constructor injection, is a technique used in Object Oriented Programming to make classes configurable and easier to test. Any class dependency that performs some kind of side effect, one that could lessen your class' functional determinism, you make an injected dependency, so you can test the more pure code in your class. In Functional Programming, you can use the same technique by passing those module or function dependencies as parameters to your function, achieving the same goals.

This isn't true for all functional languages, though. In Elm, for example, this technique isn't used because Elm has no side effects; all functions are pure. In ReScript, however, you would use it, because while ReScript is functional, it has the exact same side-effect model as JavaScript, since it compiles to JavaScript.
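And if you want to go one step further into Mark Seemann's partial application approach mentioned at the top, it's a small change: bake the dependency in first, and get back a function with the dependency already "injected". A sketch (the curried shape is my illustration, not code from the article's repo):

```javascript
// Partial application variant: the dependency is the first, separate argument.
const getServerURL = readFile => fileName => {
    const { env } = JSON.parse(readFile(fileName))
    return env === 'production' ? 'http://server.com' : 'http://localhost:8000'
}

// Do the injection once, up front (the "composition root")...
const readFileStub = () => '{ "env": "qa" }'
const getTestServerURL = getServerURL(readFileStub)

// ...and callers now see a one-argument function, with the side effect hidden.
const url = getTestServerURL('config.json')
```

Production wiring would instead call `getServerURL(fs.readFileSync)` once and pass the resulting function around; same dependency injection, expressed as currying.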

Top comments (20)

Magne

How would you compare Elm and ReScript? And which do you prefer?

Jesse Warden • Edited

which do you prefer?

The only reason I use ReScript is because Elm doesn't officially work on the server, I use AWS at jobs, not Lamdera, and Roc isn't ready for prime time yet. That said… I've started to love the "we're imperative and don't care about Category Theory nonsense" style/attitude these OCaml kids have. I've never met a community like that before; it's neat, and I'm learning a lot from them. You can learn more about my journey in my video about JavaScript to ReScript.

Ok, comparison.

Elm works in the browser, not on the server. ReScript works in both the browser and the server. There are ways to hack Elm into a headless state, but it's not fun. It is fun to watch others do it, though.

Elm has no side effects, so all functions are pure. ReScript is like TypeScript; it just compiles to JavaScript, so it doesn't have a runtime engine like Elm does. Thus, ReScript's side effects operate just like JavaScript's, and there is no I/O Monad type insanity to worry about. Instead, you have side-effects-everywhere insanity, just like you do in JavaScript. This makes ReScript require more unit tests. WAYYY less than JavaScript, to be sure, but you need stupider tests around side effects that you don't need in Elm; meaning you have to focus on testing more things.

Elm is good enough with its types to ensure no runtime exceptions/errors, ever. ReScript is even MORE strict, yet still allows unsafe things through, and has escape hatches making it dangerous if you're not careful. For example, Elm has Maybe, specifically Nothing, to handle what we in JavaScript would handle with null or undefined. ReScript is so strict and accurate about compiling to JavaScript that it has 2 completely different modules (well, 4, but…) to handle both Undefined and Null in a typed way, including ways to convert back and forth. It really is a symptom of OCaml being a lower-level systems language sometimes, and people from that style of language thinking in exacts. That said, it's not thorough, because if you convert an undefined to ReScript's version of Maybe, called option, you'll get a None, but sometimes you'll get a Some in the case of null vs. undefined. It's better to use the Js.Nullable module to safely convert when you have to deal with JavaScript types coming in, like user input or parsing weird JSON. I bet some people love this level of accuracy and claim it is needed for some reasons, but I prefer Elm's simpler way of dealing with undefined/null: erasing it from existence, vs. ReScript giving you thick gloves so you don't burn yourself playing with fire.

Elm's compiler is fast; faster than TypeScript's. I've not used it for a gigantor project yet, but I've seen various ways to speed it up, so I'm not too worried yet. However, ReScript is light speed. While I tend to do teensy microservices/functions in Serverless to ensure my code doesn't grow into a large monolith on purpose, I just LOVE how fast I can iterate in ReScript. It's unbelievable how fast the compiler is in a monorepo with like 3 Lambdas and dozens of supporting files. This is the main reason I don't want to use TypeScript compared to ReScript; it's just crazy fast.

I like Elm libraries better. Both languages assume a lot, so if you're a beginner it can be pretty overwhelming to even get started. Like, the GraphQL library for Elm doesn't tell you how to generate code; they just assume you know it's a CLI and will run cli --help, and not the default of loading from GitHub. Elm's whole focus is on the beginner and making the complex simple, and many of the library authors are in education or passionate about pedagogy, so this is the exception to the rule most times. My issue with ReScript is we're still in this weird time where Reason and ReScript split, but you can still use the Reason libraries in ReScript. It's not really clear, and sometimes when I try, things don't work and there are no errors and I'm like "wtf do I do now?". This is par for the course for JavaScript stuff, though, so I give a ton of leeway to that community; Elm, the opposite, and that works well, because their libraries… always work.

The Elm compiler error messages are better, EVEN IF you don't type your functions. Yes, with types they're better, but ReScript's are nowhere near as user friendly. Even if you do data-first programming, ReScript is still like "yeah, somewhere your stuff is broke". "Hey, uh… ReScript, how about a… you know… line number to start my investigation, eh, what do you think?" "No, good luck!" Elm's all line numbers, and formatted, and colors, and pretty, and friendly, and hinty… it's just night and day.

I like how Elm has no overuse of (), no need for {} or semi-colons, and isn't as mean as Python about spacing. ReScript has MUCH less need for {} and doesn't need semi-colons either, but you still sometimes have to type it like TypeScript to get better compiler error messages and un-confuse the compiler, and I've grown to like the ML Elm typing style better than the Java-esque inline style:

Elm:

add : Int -> Int -> Int
add a b =
  a + b

vs ReScript's inline

let add = (a: int, b: int): int => a + b

ReScript's style is to NOT type, because the type inference in OCaml-style compilers is just so good, but… maybe I'm doing something wrong; I've found it's just better to write the types, because the compiler gives messages I can actually read, and sometimes it requires a type to get "unconfused" in long chains.

I like how Elm is data-last, like normal Functional Programming, and ReScript is data-first. I also don't like how ReScript has this data-last baggage and makes new packages that are data-first to be more friendly to JavaScript developers. I think it's the same stupid tactic the JavaScript devs are trying with the Hack style vs. the F# style in the new pipeline operator. If you're a functional programmer, you'll learn to love data-last, and all the literature matches. But nNnnNNNnooooOOOo, that's not how the OCaml kids jam. It just hurts my brain to switch back and forth between Elm and ReScript is all; minor nitpick.

I like how Elm has 1 architecture and "that's it". ReScript is like "Dude, we're just a fast compiler with better types than TypeScript and we support functional things". Which means you could use it in Angular or React even if both code bases were heavily Object Oriented. Some find that amazing, I'm like "ugh". That said, it makes it easy for me to use in Serverless, Server, and CLI style architectures, including the Browser when I have to do some things in JavaScript because Elm doesn't support it (i.e. document.cookies). This is just where Elm and ReScript aren't really comparing apples to apples; one is a language, compiler, framework, and runtime whereas the other, ReScript, is just a language and compiler. There's no need for a runtime "because JavaScript" or for a framework "because JavaScript".

... that said, as a team to be full stack ish? They're the shit. I love it.

Magne • Edited

btw, did you consider Golang or Clojure for the backend?

Golang is more imperative than functional, so that would be a sacrifice. But it is apparently so easy it could be learned in a few days. If you absolutely must have FP, then Clojure could be an alternative, though a bit more foreign to most.

Both Clojure and Golang have best-in-class concurrency (goroutines and core.async, respectively), and are so fast they use much less resources than Node.

Jesse Warden • Edited

Golang: I did some Golang for 3 months and don't like it. I get why some do: it's a small language with good built-ins/standard library, a super fast compiler, a super fast language, and the concurrency is easy to grok. Perhaps if I had learned it before my FP days I might like it more, but I don't need that kind of speed in the stuff I do; mainly back-end APIs for front-ends, or Lambda functions that aren't long-running nor doing a lot of concurrency.

Clojure: It's... weird. So I like the Clojure community; it has some pretty prolific people, and bloggers who've taught me stuff. But, while I respect the JVM's power, I hate configuring/using it, the JVM blows on AWS Lambda in terms of startup time (if I were doing long running Lambdas, my tune would change), and I can't live without my static/strong types.

The above is why I gave up on Elixir/Erlang (even Gleam); I refuse to use EC2's or Docker.

We've used Golang in a 12-Lambda-function orchestrated Step Function; 11 were in JavaScript, but we used Go for the one that had to run for 15 minutes, and she was a beast (parsing megs of SOAP XML and doing other things at the same time). So I respect it, but again… edge case. I never do perf-related work; I'm more of a UI guy or orchestration-API guy, more concerned with correctness and data munging. So while I do a lot of HTTP REST concurrency in Node.js, that's the extent of it. If I run out of resources, I just turn up the Lambda memory slider, lelz

Magne • Edited

Thanks, that's very insightful.

I refuse to use EC2's or Docker.

I presume that's because they are not serverless? Just thought I'd mention that with Google Cloud Run you can actually run a Docker container as serverless. It has quite a few benefits over Cloud Functions, one of them being that you can use at least 4 vCPUs instead of just 1 [*], and you can reuse instances concurrently [**]. Thus, if you have a service in Golang, you'd be able to utilize all 4 vCPU cores concurrently. Seems like the optimal setup if one wants to maximise resource utilization and minimise cost (haven't done the exact cost calculation, tho). Given that one has a boatload of simultaneous incoming requests, thus the need for speed, that is. Otherwise, one might as well go with Node.js (running ReScript-compiled JS) on 1 vCPU.

[*] - But you can also set the Cloud Run vCPU count to just 1: cloud.google.com/run/docs/containe... Which is the better alternative for single-threaded Node.js, if you don't want to muck around with the Node Cluster module to take advantage of the extra cores.

[**] I was particularly surprised to find out that with Cloud Functions:

“One of the hidden costs of using serverless Cloud Functions is that the runtime limits the concurrent requests to each instance to one. Arguably, this can simplify some of the programming requirements because developers don’t have to worry about concurrency and usage of global variables is allowed. However, this severely underuses the efficiency of Node.js event-driven IO, which allows a single Node.js instance to serve many requests concurrently. In other words, when using Cloud Functions, the server is functionally busy during the lifetime of a single request. The result of this restricted concurrency in Cloud Functions is that the function may be scaled up if there are more requests than there are instances to handle those requests. For a service under heavy load, this can quickly result in a large amount of scaling. That scaling can have unexpected and possibly detrimental side effects.” source

Jesse Warden

Yeah, that's half of it. The other half is that the Docker workflow is just miserable. I always loved how in dynamic languages you could go node index.js 50 times a minute, then once you feel it's working, go aws lambda update-function and seconds later invoke your function to test while it's deployed. Docker build, even with caching, is just slow and miserable and does NOT utilize my skillset. Like, I really don't care about Unix and apk vs. apt-get, and why I need these things installed, and what base image to extend, blah blah blah. I just don't find any of that fun. Lambda runs my code fine; I'm not worried about missing some core piece of functionality. Different story in a GitLab pipeline, oh jeez… Alpine is slow installing Ninja for ReScript, but Debian is fast.

That's really weird that Google Cloud does that. AWS doesn't have that Lambda concurrency constraint issue. I'm an AWS kid, so I'm not sure what Google Cloud or Azure has over AWS.

Jesse Warden

Yeah, AWS Lambda can go up to 10 gigs of memory and 6 vCPUs, but most of my code is I/O bound (meaning waiting on HTTP stuff), so I don't really need that much power. However, I'm not sure I'm smart enough to use 'em yet, hah. Maybe someday I'll need to parse lots of stuff or something.

Magne • Edited

Ah, the workflow issue makes sense. It would be more of a hassle, indeed.

That's really weird that Google Cloud does that. AWS doesn't have that Lambda concurrency constraint issue.

It seems that AWS Lambdas work the same way:

When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated,

docs.aws.amazon.com/lambda/latest/...

So, theoretically, you can't use the idle time, when the Node.js instance is waiting on I/O for one request, to process new incoming requests with the same Lambda instance. Most Lambda function runtime would thus be idle time (waiting on I/O), just like with Google Cloud Functions, resulting in massive under-utilization of resources (Lambda runtime). But you want the Lambda to do what Node is good at: process massive amounts of requests without ever sitting idle waiting on I/O.

Note that this is different from AWS retaining an instance of a Lambda for later reuse (i.e. will AWS Lambda reuse function instances?), because that happens after the original request has finished. What we want is to reuse an instance while it is being used by one request (simply switching between requests when one request is waiting on I/O).

If the above described is indeed the case, it looks like the dirty little secret of serverless deployments... AWS and Google don't really want you to reuse instances, because that reduces their total runtime which is how they bill you.

On how to circumvent it, see for instance: How We Massively Reduced Our AWS Lambda Bill With Go (Update: on second reading, it seems like this is talking about doing multiple outgoing I/O requests from within the same Lambda function invocation, like you mentioned Promise.all could have handled in Node.js. But what I am talking about is reusing Lambda invocations to serve multiple new incoming requests while it would otherwise be waiting for the DB response for the first request). In Cloud Run (with containers), this reuse of idle I/O time should be possible with Node too, but without the immediate benefit of multi-core concurrency (unless resorting to a Node clustering setup).

Jesse Warden

I think we're talking about different things or I'm just misunderstanding. You can, and do, make multiple HTTP outgoing REST calls from a single Lambda execution context. Even if you set your Lambda reserved concurrency to 1, ensuring you only ever get 1 Lambda execution context, and only 1 actual instance being used, you can do a Promise.all with 5 HTTP calls, and they all work at the same time. Now "by same time" I don't mean parallelism vs. concurrency, or Node.js's fake concurrency vs. Go, I just mean that you can see it take 30ms with multiple calls going out and responding vs. doing a regular Promise chain of 1 call after another. Now, yes, you can lower the Lambda's memory to like less than 512k which I think gives you a super small Raspberry PI vCPU and so your network calls and code in general goes much slower, but my point is you CAN do concurrency; I've seen it work.

What that article is talking about regarding Lambda is that "running Node.js code" doesn't act like Express.js where you have a single EC2/server it's hosted on having some ALB throwing 50 calls at it, and Express.js keeps accepting those and handling them in turn using Node's non-blocking I/O concept. So while you're writing code that feels synchronous, you may have 50 routes all executing "at the same time", but because you wrote it in an immutable way and are using Express.js, you don't really know this. You also don't even know that you're 1 of 3 in a Container on an ECS cluster. Whereas, with Lambda, you typically have 1 ALB or API Gateway URL firing up "1 Lambda per request". So 50 requests == 50 Lambda instances. AWS's Firecracker has a way to re-use those instances, though, so you may actually get 50 requests only spawning 30 Lambdas; 20 of those are a single Lambda Execution Context handling 2 calls each. Meaning your handler function is called twice, but the code up top that goes const fetch = require('fetch') is only run once... because you're in the same instance.

When you're building Serverless Lambdas, your Lambda is invoked by that event; it's not like Express.js where you "just have Express listening on some port and it just fires the wired-up function for that route every time it gets an incoming HTTP request". So Lambdas don't care who invoked them; you just have a handler that runs, and you return a value, and that's it. There's no "listening for other events while your Lambda is running"; it's literally just const handler = async event => console.log("got an event:", event) … and that's it. So with 1 Execution Context per call, that'd be 50 of those functions running independently. If it were 30, and 20 of them shared the same Execution Context, JavaScript/Node doesn't care, because it's just running the handler function twice; there's no shared global variables/state between them, and that's fine.

AWS has done an amazing job of making SQS, API Gateway, ALB, SNS, etc. all "handle their own calling and retry and some batching", so your code doesn't care about any of that as opposed to the stupid process.on('SIGTERM', process.exit) crap you see in some Express/Restify code bases.

So again, that article is talking about "Your Lambda runs and stops, you can't have it sit there waiting for new API Gateway, or S3 or whatever requests triggered it". That's a fundamental of how AWS Lambda works. I have zero idea if Google/Azure works the same way.

We're on the same page with the I/O, though, again, most of my Lambdas run in 10 seconds or less; if they need more time, too bad; we just make 'em timeout. That said, I have this one AppSync Lambda function that makes 14 concurrent requests that's super close to 9.8 seconds, but I blame the Go developers I'm calling and their lack of caching vs. being constructive and helping them on said cache 😜.

Yes, ok, read your other paragraph, yeah, we're on the same page there too on re-used instances.

Yeah, you can do with Node.js what they're doing with Go. Slower, sure, but take SQS or AppSync or Dynamo; all 3 support batching. Rather than "1 SQS message is parsed by 1 Lambda", you can say "no no, 200 messages are handled by this Lambda", and you either map each to a Promise to do stuff like copy data to S3, or use Promise.all if they can run concurrently. And it works, we've done it. Now is that Lambda somehow flattening, similar to the browser, those "200 HTTP requests into 1 TCP/IP call stack request, then unpacking on the receiving end to 200 requests to optimize network traffic"? Maybe, I have no idea, but I know writing our 200 DynamoDB messages back to Dynamo with Promise.all in Node takes about 3 seconds, whereas if you do one at a time, it's 20 seconds. So the end result is even if it's fake concurrency, it's working like we expect.
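The batch idea can be sketched like this (hypothetical `writeToDynamo`; a real one would be an AWS SDK call, here it just simulates ~10ms of I/O per message):

```javascript
// Sketch of handling a batch of messages in one Lambda invocation.
// writeToDynamo is a stand-in for a real AWS SDK call.
const writeToDynamo = (message) =>
  new Promise((resolve) => setTimeout(() => resolve(message.id), 10));

// One at a time: total time is roughly the SUM of all writes
// (the "20 seconds" case above).
const writeSequentially = async (messages) => {
  const results = [];
  for (const message of messages) {
    results.push(await writeToDynamo(message));
  }
  return results;
};

// Concurrent: total time is roughly the SLOWEST single write
// (the "3 seconds" case above).
const writeConcurrently = (messages) =>
  Promise.all(messages.map(writeToDynamo));
```

Either way, the handler returns once every message in the batch is done; the "concurrency" is just Node's non-blocking I/O, not threads.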

Jesse Warden

... ok I should play with Google and Azure more... so many toys...

Magne

I just LOVE how fast I can iterate in ReScript. ... This is the main reason I don't want to use TypeScript compared to ReScript; it's just crazy fast.

Is it the speed of the type inference that is slowing you down in TypeScript? Or is it the Webpack bundling, or TS transpiling? Curious what fast refers to: HMR, type inference, or transpiling? Or all?

Have you tried Vite?

"Vite uses esbuild to transpile TypeScript into JavaScript which is about 20~30x faster than vanilla tsc, and HMR updates can reflect in the browser in under 50ms."

vitejs.dev/guide/features.html#typ...

The only thing I don't think Vite addresses is the type inference in the IDE... but how slow are they anyway?

Jesse Warden • Edited

When I run the compiler in watch mode, TypeScript takes seconds, ReScript takes milliseconds. No webpack, no hot module reloading, just writing code so I can immediately go node theFile.js or npm test:unit.

No, I haven't tried Vite, thanks for the link. My issue, really, is... me. 10 years ago, I learned of TypeScript. As a new Flash/Flex refugee, I was looking for something that had strong types and would compile to JavaScript, since I still wanted to do web development, just not go back in time 5 years using JavaScript compared to ActionScript 3. Back then, compilers/transpilers were fringe, JavaScript the language didn't have as many features + browser support as today, and most of the community said "You don't need classes and types". While I didn't agree, it was still hard to implement this stuff for client work if you weren't a sole contractor architecting your own projects for clients. Once Angular RC1 came out, TypeScript was mature and solid enough to write not just UIs but back-end code, CLIs, and libraries. Angular RC1 is also when I stopped being an OOP fan and started learning more Functional Programming. 10 years later today, TypeScript still isn't very friendly to a Functional Programmer. The language is heavily focused on OOP developers, or heavily Object-based code bases that have a lot of internal state and side effects. Despite the herculean efforts of fp-ts, and the massive growth in the job market for TypeScript acceptance... I don't really care, I don't like it.

ReScript gives me the guarantees I want and the speed I want, with zero configuration or having to muck around with compiler settings, bikeshed with a team over what settings we should/should not use, etc. It's friendly to an FP programmer and has many FP features built into the language and standard libraries.

Magne

10 years later today, TypeScript still isn't very friendly to a Functional Programmer. The language is heavily focused on OOP developers, or heavily Object based code bases that have a lot of internal state and side effects. Despite the herculean efforts of fp-ts, and the massive growth in the job market for TypeScript acceptance...

That is a very compelling reason to go for ReScript over TypeScript indeed. TS can too easily slide into something you don't want, and everyone on a team being guided into doing the right thing is good.

PS: check out ts-belt if you have to use typescript, it's inspired by and built with ReScript.

Jesse Warden

Nice, thanks, I'll check it out.

Magne

ReScript still is like "yeah, somewhere your stuff is broke". "Hey, uh... ReScript, how about a... you know... line number to start my investigation, eh, what do you think?" "No, good luck!" Elm's all line numbers, and formatted, and colors, and pretty, and friendly, and hinty... it's just night and day.

When I try the example code at the ReScript playground and insert an error, it gives the line number...?

```
Type Errors
[E] Line 11, column 14:

The value ms can't be found
```

Jesse Warden • Edited

Welcome to the party, pal!

Magne

I meant it as a question, since you said it doesn't give the line number.

Jesse Warden

Oh haha, my bad. Yeah, sometimes it does, other times it's like "somewhere". ReScript compiler is "good", I'm just whiny and expect a lot!

Magne

just read this again, and must say thank you again for a very thorough answer! <3 I would turn it into a blog post called ReScript vs. Elm if I were you!

Magne

Thank you so much for a very candid and thorough answer! :D With a video too! <3