
How to set up your TypeScript project for success: Tutorial

So you have started a new project. The field is green, the air is fresh, there are no weeds, how exciting!

But where do you start? What is the first thing you do?

Surely you should write some code?

Well, no.

You have probably had shiny new projects before, but at some point most of them turned sour: hard to understand, hard to collaborate on, and slow to change.

If you are wondering why, we will explore the common causes, and more importantly, solutions that we can adopt to prevent such things from happening.

These causes range from competing naming conventions and contradictory rules to improperly formatted code and missing tests, all of which add up to an environment where making changes is frightening.

Which rules should you follow?

How should you format your code?

How can you be sure that your changes have not broken anything?

It would be good if we knew all the answers to these questions.

It would be even better if you didn't have to concern yourself with these issues and could focus solely on coding.

It would be best if no one on the project had to worry about them and everyone could just focus on coding.

The key resource in software development is time, and the faster we are able to make changes and move forward, the greater advantage we will have in the market.

There is a saying that preparation is half the battle, and in this blog post we will explore various techniques we can apply to our project to help maintain velocity and quality throughout its lifetime.

Chapter 1: Setup

So you open the project in your favorite IDE and it is basically empty.

You don’t know where to start or what to do first?

Ideally, before writing any code, we should invest time into setup. By investing, we mean paying a certain price now in order to reap greater benefits in the long run. The main intention behind this time investment is to make it easy and fast for us to make changes in the existing codebase, no matter how large it grows. It also means that new members who join the project can understand the system and its conventions with as little effort as possible and be confident in the changes they are making.

But what should we invest in?

If we sum up what makes a software project good, it's very simple:

The code should be easy to understand, test and change.

That may sound too simple, but ultimately, it’s the truth.

As programmers, we want to write super powerful and reusable code, but in practice that results in files and functions that span hundreds, if not thousands, of lines, have tens of parameters, and behave in a myriad of ways depending on how we call them. This makes them very hard to understand and test, which means that it takes a lot of time to change them. And if there is one constant in software, it's that it changes. Setting ourselves up correctly will save a lot of time in the long run and make changes less frightening.

Code repository

Even if you are going to be working alone on a project, it is a very good idea to use a VCS (version control system).

So naturally the first thing, even before opening your IDE, should be to set up the code repository of your choice. This means that you should pick your main branch and protect it. No one, including yourself, should be allowed to push directly to it; instead, all changes should be made through pull requests.

Yes, if you are working alone, you should be reviewing your own PRs. This additional filter will catch many ill-committed lines before they reach production code.

Linting


Linters are tools that analyze source code for potential logical errors, bugs, and generally bad practices. They help us enforce rules, which improves code quality, readability, maintainability, and consistency across a codebase.

There are many linters to choose from:

  1. ESLint
  2. JSLint
  3. JSHint

How they are set up varies widely between providers, but most of them support a common set of rules.

The most popular and recommended option is ESLint. Below are some important rules that every project should have (a combined configuration sketch follows the list):

  • complexity The number one time consumer in understanding and changing code is complexity. Luckily, we can enforce simplicity using this rule, which analyzes and limits the number of independent logical paths through a function:

    function a(x) {
      if (true) {
        return x; // 1st path
      } else if (false) {
        return x + 1; // 2nd path
      } else {
        return 4; // 3rd path
      }
    } // complexity = 3
  • no explicit any The pseudo type any means that our variable or object can have literally any field or value. This is the equivalent of just removing typing. There might be times when we think about reaching for this unholy type, but more often than not we can avoid it by using other typing mechanisms, such as generics. The following example shows how to resist the temptation and use careful thinking to solve a type “problem”:

    // before: any erases all type information
    function doSomethingWithFoo(foo: any): any {
      // do something with foo
      return foo;
    }

    // after: a generic preserves the type of the argument
    function doSomethingWithFoo<T>(foo: T): T {
      // do something with foo
      return foo;
    }

    However, if you don’t have access to a certain type, you can use built-in utility types such as:

    ReturnType<someLibrary['someFunction']> and Parameters<someLibrary['someFunction']>

    Alternatively, you can use unknown instead of any, which is safer because it requires you to narrow or cast the value to a specific type before accessing any of its fields.

  • explicit return types Enforces explicit return types in functions. Although the compiler can infer the return types of functions, it is recommended to be explicit about them so that we know how a function is intended to be used, instead of guessing.

  • no-undef Disallow usage of undeclared variables.

  • no-unused-vars This rule does not allow us to have unused variables, functions, or function parameters.

    We can do this by adding this rule:

    "@typescript-eslint/no-unused-vars": ["error"]

    Unused code is an unnecessary burden: we have to maintain it, and we fear deleting it once it lands in our main branch, so it’s best to prevent it from being merged at all. However, there will be cases, such as method overloading or implementing an interface, where we need to match the signature of a method, including its parameters, even though we are not using all of them.

    Imagine we have an interface:

    interface CanSchedule {
      schedule(startTime: Date, endTime: Date): void;
    }

    Now we want to implement this interface, however, we won’t be using both of the arguments:

    class Scheduler implements CanSchedule {
      // the linter flags an error since endTime is unused!
      schedule(startTime: Date, endTime: Date): void {
        console.log(`scheduling something for ${startTime.toDateString()}`);
      }
    }

    In that case we can add an exception to the rule so that it does not apply to members with a prefix such as _. This can be done in ESLint with the following configuration:

    "@typescript-eslint/no-unused-vars": [
    "error",
    {
    "argsIgnorePattern": "^_",
    "varsIgnorePattern": "^_",
    "caughtErrorsIgnorePattern": "^_"
    }
    ],

    Now we can write something like:

    class Scheduler implements CanSchedule {
      // no longer flagged by the linter
      schedule(startTime: Date, _endTime: Date): void {
        console.log(`scheduling something for ${startTime.toDateString()}`);
      }
    }
  • typedef Requires us to define types for most fields and variables.

    No cutting corners!
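
Taken together, the rules above might land in an ESLint configuration along these lines. This is a minimal sketch: it assumes the @typescript-eslint parser and plugin, and the complexity threshold and typedef options are only examples, not recommendations:

{
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "rules": {
    "complexity": ["error", 5],
    "no-undef": "error",
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/explicit-function-return-type": "error",
    "@typescript-eslint/no-unused-vars": [
      "error",
      { "argsIgnorePattern": "^_", "varsIgnorePattern": "^_", "caughtErrorsIgnorePattern": "^_" }
    ],
    "@typescript-eslint/typedef": ["error", { "variableDeclaration": true, "memberVariableDeclaration": true }]
  }
}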

💡 However, if you find it too time-consuming to set up lint rules manually, you can probably find an already configured linter with the rules that best suit your taste.

Here is a useful list of popular linter configurations for TypeScript:

github.com/dustinspecker/awesome-eslint

Prettier


There is a saying in my native language: A hundred people, a hundred preferences.

Now imagine a project where every developer introduced their preference in coding style. Yeah, it’s terrifying for me too.

Now imagine that you could avoid all of that. The good thing is that we don’t have to imagine: we can just use Prettier. Prettier enforces a consistent code style, which is more important than any one developer’s preference.

It is very simple to setup and use:

# install it
npm install --save-dev --save-exact prettier
# add an empty configuration file
echo {}> .prettierrc.json
# format your code
npx prettier --write .

Configure it however you prefer; no one can tell you which style is good or bad. However, two important JavaScript caveats come to mind (a sample configuration follows the list):

  • Please use semicolons.

    Why?

    JavaScript engines automatically insert semicolons during parsing (ASI, automatic semicolon insertion), and if you leave them out, they will guess where the semicolons should go, which may result in undesired behavior:

    const a = NaN
    const b = 'a'
    const c = 'Batman'
    (a + b).repeat(3) + ' ' + c

    Now you might think this code will result in 'NaNaNaNaNaNa Batman', but it will actually fail with Uncaught TypeError: "Batman" is not a function, because the string literal ends up being called as if it were a function.

    Why is that?

    The JavaScript engine will interpret this as

    const a = NaN;
    const b = 'a';
    const c = 'Batman'(a + b).repeat(3) + ' ' + c;

    due to the lack of explicitness in regards to semicolons.

    Luckily, the semi rule is enabled by default, so please don’t change it;

  • Use trailing commas,

    This is often overlooked and might seem like it makes no difference, but there is one:

    Without a trailing comma, adding a new property means adding a comma to the previous line AND adding the new property, which is not only more work but also shows up as two changed lines in version control (git):

    const person = {
      age: 30,
    - height: 180
    + height: 180,
    + pulse: 60,
    }

    instead of

    const person = {
      age: 30,
      height: 180,
    + pulse: 60,
    }

Ok, now what?

Ok, so you have set up types, linting, and formatting.

But you have to fix lint and Prettier errors all the time, and your productivity is taking a nosedive.

Oh, but wait, there are commands you can run that will fix all linting errors and format your code. That’s really nice, but it would be even nicer if you didn’t have to run these commands manually…

Automated ways of linting and formatting

Now if you’re smart (or lazy like me) you can just configure some tool to do this tedious job for you.

Some of the options are:

  1. Configure your IDE to run this on save
  2. Using onchange
  3. Introduce a pre-commit hook

Ideally, you want to run lint fixes and formatting on every save, but if your IDE or machine does not support this, you can run them automatically before every git commit (a sketch of that follows).
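
For the pre-commit route, one common setup is husky (v8) together with lint-staged; neither tool is prescribed by this post, so treat the following as a sketch:

# install the tools and register a pre-commit hook (husky v8 commands)
npm install --save-dev husky lint-staged
npx husky install
npx husky add .husky/pre-commit "npx lint-staged"

Then a lint-staged section in package.json tells it what to run on the staged TypeScript files:

"lint-staged": {
  "*.ts": [
    "eslint --fix",
    "prettier --write"
  ]
}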


Ok, now you are ready and probably very eager to go write some code, so please do so, but come back for chapter 2, because there are important things to do after writing some code.

Or if you prefer TDD, jump straight to chapter 2.

Chapter 2: Tests

So you have written and committed some nicely linted and formatted code (or you prefer writing tests first).

That is amazing, but is it enough?

Simply put, no.

It might look like a waste of time, and a tedious task, but tests are important, mmkay?


So why is having tests important?

  1. Ensures code quality and correctness: Automated tests serve as a safety net, allowing you to validate the functionality and behavior of your code. By running tests regularly, you can catch bugs, errors, and regressions early in the development process, preferably locally, even before you push them upstream!
  2. Facilitates code maintenance and refactoring: As projects evolve, code often needs to be modified or refactored. Automated tests provide confidence that the existing functionality remains intact even after changes are made. They act as a safeguard, helping you identify any unintended consequences or introduced issues during the refactoring process.
  3. Encourages collaboration and serves as documentation: When multiple developers work on a project, automated tests act as a common language and specification for the expected behavior of the code. They promote collaboration by providing a shared understanding of the system's requirements and functionality. Also, since tests can be named whatever we want, we can use this to our advantage to describe what is expected from some component that might not be that obvious.
  4. Reduces time and effort in the long run: While writing tests requires upfront investment, it ultimately saves time and effort in the long run. Automated tests catch bugs early, reducing the time spent on manual debugging.
  5. Enables continuous integration: Since tests serve as a kind of contract description, we can make changes in functionality while validating that we have not broken the contract with other components. They enable continuous integration by providing a reliable filter for potential bugs and unwanted changes in behavior. Developers can quickly detect any issues introduced by new code changes, allowing for faster iteration and deployment cycles.

Writing code without tests is like walking a tightrope without a safety net. Sure, you may get across, but falling might be catastrophic.

Let’s say that we have some complex and unreadable function but we have a test for it:

function getDisplayName(user: { firstName: string; lastName: string }): string {
  let displayName = '';

  for (let i = 0; i < user.firstName.length; i++) {
    displayName = displayName.concat(user.firstName.charAt(i));
  }

  displayName = displayName.concat(' ');

  for (let i = 0; i < user.lastName.length; i++) {
    displayName = displayName.concat(user.lastName.charAt(i));
  }

  return displayName;
}

describe('getDisplayName', () => {
  // because we can name these tests, we can describe what the code should be doing
  it("should return user's full name", () => {
    const user = { firstName: 'John', lastName: 'Doe' };
    const actual = getDisplayName(user);

    expect(actual).toEqual('John Doe');
  });
});

Now we are able to refactor the function while being confident that we didn’t break anything:

function getDisplayName(user: { firstName: string; lastName: string }): string {
  // the test will fail since we accidentally added a ,
  return `${user.firstName}, ${user.lastName}`;
}
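
Once the test catches the stray comma, the fix is a one-character change and the suite goes green again:

function getDisplayName(user: { firstName: string; lastName: string }): string {
  return `${user.firstName} ${user.lastName}`;
}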

Now you see how tests not only assert the desired behavior, but they can and should be used as documentation.

There is a choice of test types you could introduce to help you safely get across.

If you are unsure which might be the right ones for you, please check out this blog post by my colleague Sean Ferguson.

Ideally, you should be using more than one type of test. It is up to you to weigh and decide which ones fit your needs best, but once you do, it is very important not to cut corners and to invest in keeping coverage high.

This is the most important investment in our codebase. It will pay the highest dividends and it will keep us safe from failing if we do this part well.

The simplest and fastest tests to write are unit tests, but they are often not enough, because they don’t assert that the people using our system experience it as expected.

That is the job of integration or e2e tests. Although they take longer to set up and individual tests take longer to write, they are often the better investment, since we can rely on them to cover our system from the perspective of anyone using it.

You can even use AI tools like ChatGPT to generate unit tests from production code (although they will not be perfect every time).
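
For illustration, an integration-style test might look like the sketch below. It assumes an HTTP app built with Express, exported from ./app, and the supertest package for driving requests; none of these choices is prescribed here:

import request from 'supertest';

import { app } from './app'; // hypothetical Express app under test

describe('GET /users/:id', () => {
  it('returns the user the way a client would see it', async () => {
    const response = await request(app).get('/users/42');

    expect(response.status).toEqual(200);
    expect(response.body).toEqual(expect.objectContaining({ id: 42 }));
  });
});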

Ok, so you are convinced and you add a test suite which you will maintain. You also added a command to run tests, and you do so before committing your code. That is very nice but what if someone in the team doesn’t do the same? What if they commit and merge code without running tests? 😱

If only there were a way to automate this and make it public.

Chapter 3: Tying it all together

Now all these enhancements make sense, and you feel pretty happy about them, but without forcing functions that make running them mandatory, they don’t bring much value, since people will bypass them.

Luckily, most code hosting platforms, such as GitHub, provide automated workflows that make it easy to enforce these checks and to block code from being merged if it doesn’t pass them.

Now we can write a workflow that will check all of this for us!

A GitHub workflow that installs dependencies and runs linting, unit, and e2e tests would look something like this:

name: Linting and Testing

on: [pull_request]

jobs:
  linting-and-testing:
    runs-on: ubuntu-latest
    steps:
      - name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@0.11.0 # or whatever the latest version is
        with:
          access_token: ${{ github.token }}

      - name: Checkout
        uses: actions/checkout@v3 # or whatever is the highest version at the time

      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: '18.12' # or whatever the latest LTS release is
          cache: 'npm'

      - name: Install dependencies
        run: npm i

      - name: Run ESLint check
        run: npm run lint:ci # npx eslint "{src,tests}/**/*.ts"

      - name: Run unit tests
        run: npm run test

      - name: Run e2e tests
        run: npm run test:e2e
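
For this workflow to pass, package.json needs matching scripts. The script names come from the workflow above, but the commands behind them are assumptions (Jest here, purely as an example):

"scripts": {
  "lint:ci": "eslint \"{src,tests}/**/*.ts\"",
  "test": "jest",
  "test:e2e": "jest --config ./test/jest-e2e.json"
}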

Can we code now?

Yes we can!

But as we said, preparation is half the battle.

The other, longer and harder part is yet to come, and it is paramount to stay disciplined and consistent and to keep things simple. The easiest way to do that is to practice being pragmatic, to take pride in and bring maturity to your approach to work, and to have a mindset that helps not only you but also the people you work with grow.

This is best explained by a dear and pragmatic colleague of mine, Stevan Kosijer, in a blog post series starting with Pragmatic Engineer’s Philosophy.

Conclusion

Although we might instinctively think that writing code is the most productive way to initially invest our time in a software development project, without proper setup that is almost never the case. Confidence in our changes through automated tests, enforcement of the rules we find useful, and consistent formatting will greatly improve the velocity and quality of our work.

If your project integrates with an API, which it most likely does, my honest advice would be to use an SDK. And if you want a high-quality SDK that can be built and updated on demand, along with documentation, tests, and easy integration with your CI/CD pipeline, please check out our product and perhaps even schedule a demo at liblab.com.