Lessons learned from a 6-month TypeScript/AWS Lambda project

I finished a 6-month project where we delivered a greenfield IT system. Here are my lessons learned, focusing on the technical side.

For context, the system was written in TypeScript, on top of AWS Lambda. Functionally, the system integrated two other systems via their REST APIs. The team consisted of two engineers, one project manager, and one person from the business. I was one of the engineers.

What went well: Tests

Automated testing is such a delicate topic:

  • too many implementation-specific tests and code gets tedious to refactor and to structurally improve;
  • too few tests and one is afraid to refactor, afraid to break code that works.

So how to solve this? In this project, we used two batteries of tests. First, we added component tests. The tests were implementation-agnostic and used HTTP server mocking. Such tests were a great decision:

  • the majority of the code got covered by a few high-level component tests,
  • we gained the confidence to structurally refactor the code,
  • we enjoyed the development experience: tests with HTTP mocks were easy to work with, especially with Nock's automatic test recorder.
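
To sketch the idea: a component test drives the code end-to-end while only the HTTP layer is faked. (Nock intercepts Node's HTTP layer directly, so no injection is needed; this dependency-free sketch passes the mock in instead, and all names and URLs are hypothetical.)

```typescript
// The handler reaches the outside world only through an HTTP client.
type HttpGet = (url: string) => Promise<{ status: number; body: string }>

const makeGreetHandler =
  (httpGet: HttpGet) =>
  async (userId: number): Promise<string> => {
    const res = await httpGet(`https://partner.example.com/users/${userId}`)
    const user = JSON.parse(res.body) as { name: string }
    return `Hello, ${user.name}`
  }

// The component test replaces the network with a canned HTTP response;
// everything else runs exactly as in production.
const mockedGet: HttpGet = async () => ({
  status: 200,
  body: JSON.stringify({ name: "Ada" }),
})
```

Because the test only knows about the HTTP boundary, the internals can be restructured freely without touching a single test.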

The second battery of tests contained unit tests. We unit tested functions with nontrivial calculations, such as complex text parsing. With such tests, we gained confidence that the calculations were correct and all edge cases were covered.
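
As a small illustration of what such a unit-tested function might look like (the invoice format is hypothetical; our actual parsing was more involved):

```typescript
// Extracts an invoice number like "INV-2023-0042" from free-form text,
// or returns null when no invoice number is present (an edge case worth
// pinning down with a unit test).
const parseInvoiceNumber = (text: string): string | null => {
  const match = text.match(/INV-\d{4}-\d{4}/)
  return match ? match[0] : null
}
```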

What went well: Code structure

Speaking of refactoring: the closer we were to completion, the clearer it was how the code should be organized. So towards the end of the project, and thanks to the component tests, I heavily refactored the overall code structure. Complex things got decomplected. Afterward, when unexpected business requests arrived, we could add them as a simple unit instead of bolting them onto a hairball of complexity.

What I take from this: it's OK to start adding functionality without worrying about high-level code structure. The structure will reveal itself. I wish I'd added component tests even sooner, to enable continuous structural refactoring.

What went well: Idempotency

Where possible, I made components idempotent (wiki). What a relief it was to later perform data migration based on such an idempotent execution. We felt in control, even during complex data migration.

Admittedly, some parts were not idempotent. This was due to business constraints. For the next project, I'll try to advocate for idempotency more.
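
A sketch of what idempotency bought us (the payment-record shape and names are hypothetical):

```typescript
// An idempotent "create": the write is keyed by a caller-supplied id,
// so replaying the same request (e.g. during a data migration re-run)
// leaves the system in the same final state.
const store = new Map<string, { id: string; amount: number }>()

const upsertPayment = (payment: { id: string; amount: number }) => {
  store.set(payment.id, payment) // same input, same final state
  return store.get(payment.id)!
}
```

Because re-running is harmless, a failed migration can simply be restarted from the top.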

(By the way, as I'm writing this post, another team is facing an issue in the billing system. They discovered that the issue is related to AWS Lambda services not being idempotent. What a day for idempotency.)

What went well: Logging

I'm deepening my appreciation for verbose logging, especially with AWS Lambdas: function inputs, calls to external systems, the currently executed step. When things went unexpectedly wrong, logging turned out to be half the success.
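
The kind of logging that paid off can be sketched like this (the event shape and step names are hypothetical):

```typescript
// Structured, verbose logging: one JSON line per step, so CloudWatch
// tells the whole story of an invocation.
const log = (step: string, detail: unknown) =>
  console.log(JSON.stringify({ step, detail }))

const orderHandler = async (event: { orderId: string }) => {
  log("input", event) // always log the raw handler input
  log("calling-external-system", { system: "billing", orderId: event.orderId })
  // ... the actual call to the external system would go here ...
  log("done", { orderId: event.orderId })
  return { ok: true }
}
```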

What went well: Validating AWS Lambda input

What a time saver! Handler input validation turned out to be super helpful as we were switching systems that were responsible for invoking the AWS Lambda. From now on, I'm validating Lambda's handler input every time I write an AWS Lambda function. It's a low-hanging fruit that can save hours.
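
A minimal sketch of such validation, assuming a hypothetical event shape (we hand-roll a type guard here; a schema library would work equally well):

```typescript
// The shape we expect the invoking system to send (hypothetical).
interface OrderEvent {
  orderId: string
  quantity: number
}

const isOrderEvent = (input: unknown): input is OrderEvent => {
  const e = input as Partial<OrderEvent>
  return typeof e?.orderId === "string" && typeof e?.quantity === "number"
}

// Called first thing in the handler: fail loudly on malformed input
// instead of crashing halfway through with a confusing error.
const validateOrderEvent = (input: unknown): OrderEvent => {
  if (!isOrderEvent(input)) {
    throw new Error(`Invalid Lambda input: ${JSON.stringify(input)}`)
  }
  return input
}
```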

What went well: Error reporting

We ended up distinguishing between two types of errors: application errors and business warnings.

  • Application errors were unhandled exceptions or AWS Lambda timeouts. The responsibility to fix these is on our side, the engineers. We handled such errors with Sentry, to our great satisfaction.
  • Business warnings were data errors, out of our responsibility. When a data error was detected, we sent a Slack notification to the business.

The combination of application errors and business warnings worked well: we had full and immediate visibility into exceptions, and the business had immediate visibility into data issues. They were even eager to fix them.
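
The split boils down to a routing decision, sketched here with the Sentry and Slack calls stubbed out (the class name is hypothetical):

```typescript
// Business warnings are thrown deliberately when a data error is detected;
// everything else is an application error.
class BusinessWarning extends Error {}

// Route each error to the party responsible for fixing it.
const notify = (error: unknown): "sentry" | "slack" =>
  error instanceof BusinessWarning
    ? "slack"  // data issue → notify the business
    : "sentry" // application error → notify the engineers
```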

All in all, the team was very happy with this immediate and proactive approach.

To improve: Using unfinished open-source library

We used an unfinished open-source library that wrapped the API calls of one of the systems. We wanted to gain speed. It turned out to be a bad investment: the library didn't cover edge cases, lacked functionality, behaved unpredictably, and was missing TypeScript types. We ended up rewriting most of the functionality from scratch to fit our needs.

To improve: Function inputs

For some functions, I supplied more inputs than the function needed:

interface Foo {
  id: number
  name: string
}

// supplying the whole Foo object
const func1 = (foo: Foo) => {
  console.log(foo.name) // but only name is needed
}

// supplying only what's needed, Foo's name
const func2 = (name: Foo["name"]) => {
  console.log(name) // the function doesn't need to know about the whole Foo object
}
During heavier refactoring, I needed the form of func2 instead of func1. In retrospect, I'm not surprised: func1 complects an operation executed on a name with the whole Foo object. This complexity prevented me from having simpler components elsewhere.

Also, testing func2 got easier: we didn't need to supply unnecessary test data.

Open question: Dependency injection in TypeScript

Where possible, I aim to write functions that are honest about their inputs. However, passing all the context around soon becomes cumbersome. I'm wondering about possible solutions in TypeScript that don't feel like magic. One approach I'd like to experiment with is the fp-ts Reader monad.
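
For illustration, here is a hand-rolled version of the Reader idea (fp-ts's Reader&lt;R, A&gt; has this same shape, plus combinators for composing such functions; the Deps environment is hypothetical):

```typescript
// A Reader is just a function from an environment R to a value A:
// the dependency is declared in the type instead of threaded by hand
// through every call site.
type Reader<R, A> = (env: R) => A

interface Deps {
  apiBaseUrl: string
}

// The function stays honest about needing Deps, but callers can defer
// supplying them until the edge of the program.
const buildUserUrl = (id: number): Reader<Deps, string> =>
  (env) => `${env.apiBaseUrl}/users/${id}`
```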

Open question: Code organization

I thought that organizing code by domains or features (feature A, feature B, etc.) is preferred over organizing by layers (db, api, etc.).

# organizing by features (illustrative)
featureA/
  api.ts
  db.ts
featureB/
  api.ts
  db.ts

# organizing by layers (illustrative)
api/
  featureA.ts
  featureB.ts
db/
  featureA.ts
  featureB.ts

My reasoning was that if code related to one feature sits together, it is easier to reason about it. However, I discovered that if I put all code related to one layer together, I spot more opportunities for simplification and decomplecting. So now I face this conundrum: organize code by domain or organize code by layer?

Open question: Duplication for the win?

For many years, I was a strong advocate of the DRY principle. However, I'm now discovering benefits of duplication:

  • duplicated code is easier to modify,
  • duplicated code can be made more domain-specific and use-case-specific,
  • duplication can simplify code, such as functions that contain many if-else branches, where each branch denotes a different business use case.
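
A toy illustration of the trade-off (the pricing rules are hypothetical):

```typescript
// DRY: one function, with a branch per business use case. Every new
// use case grows the conditional.
const dryPrice = (kind: "retail" | "wholesale", base: number): number =>
  kind === "retail" ? base * 1.2 : base * 0.9

// Duplicated: two domain-specific functions that can now evolve
// independently, each free to become more use-case specific.
const retailPrice = (base: number): number => base * 1.2
const wholesalePrice = (base: number): number => base * 0.9
```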

So now I face another delicate conundrum: when to duplicate code and when to make code DRY?

Open question: Risk of missed business outcome

This question is not technical, but to me, it is the most important one. We were asked to deliver a technical solution that didn't immediately bring business value. Only after the project had been delivered did another team start to monetize the system we had built. But what if the monetization doesn't go as planned? All our efforts would be wasted.

The client was determined to go ahead with the implementation, despite the risk of wasted effort. As an engineer, I face a conundrum: build the system knowing it could be a waste, or push for business impact first. There are differing views on the topic. What I'm contemplating is to start with myself, by reflecting on the outcome of everything we release.

Would you like to connect? Subscribe via email or RSS, or follow me on Twitter!