
So you have started a new project. The field is green, the air is fresh, there are no weeds, how exciting!

But where do you start? What is the first thing you do?

Surely you should write some code?

Well, no.

You have probably had shiny new projects before, but most of them turned sour at some point: they became hard to understand, hard to collaborate on, and very slow to change.

If you are wondering why, we will explore the common causes, and more importantly, the solutions we can adopt to prevent such things from happening.

These causes include competing naming conventions, contradictory rules, improperly formatted code, and missing tests, all of which add up to an environment where making changes is frightening.

Which rules should you follow?

How should you format your code?

How can you be sure that your changes have not broken anything?

It would be good if we knew all the answers to these questions.

It would be even better if you didn't have to concern yourself with these issues and could solely focus on coding.

It would be best if no one on our project had to worry about them and could just focus on coding.

The key resource in software development is time, and the faster we are able to make changes and move forward, the greater advantage we will have in the market.

There is a saying that preparation is half the battle, and in this blog post we will explore various techniques we can apply to our project to help us maintain velocity and quality throughout its lifetime.

Chapter 1: Setup

So you open the project in your favorite IDE and it is basically empty.

You don’t know where to start or what to do first.

Ideally, before writing any code, we should invest time into setup. By investing, we mean paying a certain price now in order to reap greater benefits in the long run. The main intention behind this time investment is to make it easy and fast to make changes in the existing codebase, no matter how large it grows. It also means that new members who join the project can understand the system and its conventions with as little effort as possible and be confident in the changes they are making.

But what should we invest in?

If we can sum up what makes a software project good, it’s very simple:

The code should be easy to understand, test and change.

That may sound too simple, but ultimately, it’s the truth.

As programmers, we want to write super powerful and reusable code, but in practice that results in files and functions that span hundreds, if not thousands, of lines, take tens of parameters, and behave in a myriad of ways depending on how we call them. This makes them very hard to understand and test, which means it takes a lot of time to change them. And if there is one constant in software, it is that it changes. Setting ourselves up correctly will save a lot of time in the long run and make it less frightening to make changes.

Code repository

Even if you are going to be working alone on a project, it is a very good idea to use a VCS (version control system).

So naturally the first thing, even before opening your IDE, should be to set up the code repository of your choice. This means you should pick your main branch and protect it: no one, including yourself, should be allowed to push directly to it; instead, all changes should be made through pull requests.

Yes, if you are working alone, you should be reviewing your own PRs. This additional filter will catch many ill-committed lines before they reach production code.

Linting


Linters are tools that analyze source code for potential logical errors, bugs, and generally bad practices. They help us enforce rules, which improves code quality, readability, maintainability, and consistency across a codebase.

There are many linters to choose from:

  1. ESLint
  2. JSLint
  3. JSHint

How they are set up varies widely by provider, but most of them support a common set of rules.

The most popular and recommended provider is ESLint. Below are some important rules that every project should have:

  • complexity The number one time consumer in understanding and changing code is complexity. Luckily, we can enforce simplicity using this rule, which analyses and limits the number of independent logical paths through a function:

    function a(x) {
      if (true) {
        return x; // 1st path
      } else if (false) {
        return x + 1; // 2nd path
      } else {
        return 4; // 3rd path
      }
    } // complexity = 3
  • no-explicit-any The pseudo-type any means that our variable or object can have literally any field or value. This is the equivalent of just removing typing. There might be times when we think about reaching for this unholy type, but more often than not we can avoid it by using other typing mechanisms, such as generics. The following example shows how to resist the temptation and use careful thinking to solve a type “problem”:

    // Using `any` throws away type information
    function doSomethingWithFoo(foo: any): any {
      // ... do something with foo
      return foo;
    }

    // A generic preserves the type instead
    function doSomethingWithFoo<T>(foo: T): T {
      // ... do something with foo
      return foo;
    }

    However, if you don’t have access to a certain type, you can use the built-in helpers such as:

    ReturnType<someLibrary['someFunction']> and Parameters<someLibrary['someFunction']>

    Alternatively, you can use unknown instead of any, which is safer because it requires you to cast the operand into a type before accessing any of its fields.

  • explicit return types Enforces explicit return types on functions. Although the compiler can infer the return types of functions, it is recommended to be explicit about them so that we know how a function is intended to be used, instead of guessing.

  • no-undef Disallow usage of undeclared variables.

  • no-unused-vars This rule disallows unused variables, functions, and function parameters.

    We can do this by adding this rule:

    "@typescript-eslint/no-unused-vars": ["error"]

    Unused code is an unnecessary burden: we need to maintain it, and we fear deleting it once it reaches our main branch, so it’s best to prevent it from being merged at all. However, there will be cases, such as method overloading or implementing an interface, where we need to match the signature of a method, including its parameters, even though we might not use all of them.

    Imagine we have an interface:

    interface CanSchedule {
      schedule(startTime: Date, endTime: Date);
    }

    Now we want to implement this interface, however, we won’t be using both of the arguments:

    class Scheduler implements CanSchedule {
      // throws an error since endTime is unused!
      schedule(startTime: Date, endTime: Date) {
        console.log(`scheduling something for ${startTime.toDateString()}`);
      }
    }

    In that case, we can add an exception so that the rule does not apply to identifiers with a prefix such as _. This can be done in ESLint with the following configuration:

    "@typescript-eslint/no-unused-vars": [
    "error",
    {
    "argsIgnorePattern": "^_",
    "varsIgnorePattern": "^_",
    "caughtErrorsIgnorePattern": "^_"
    }
    ],

    Now we can write something like:

    class Scheduler implements CanSchedule {
      // No longer throws an error
      schedule(startTime: Date, _endTime: Date) {
        console.log(`scheduling something for ${startTime.toDateString()}`);
      }
    }
  • typedef Requires us to define types for most fields and variables.

    No cutting corners!
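Putting the rules above together, a minimal ESLint configuration sketch could look like the following (assuming the @typescript-eslint parser and plugin are installed; the complexity threshold of 5 is just an illustrative choice):

{
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "rules": {
    "complexity": ["error", 5],
    "no-undef": "error",
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/explicit-function-return-type": "error",
    "@typescript-eslint/typedef": "error",
    "@typescript-eslint/no-unused-vars": [
      "error",
      {
        "argsIgnorePattern": "^_",
        "varsIgnorePattern": "^_",
        "caughtErrorsIgnorePattern": "^_"
      }
    ]
  }
}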

💡 However, if you find it too time-consuming to set up lint rules manually, you can probably find an already configured linter preset with the rules that best suit your taste.

Here is a useful list of popular linter configurations for TypeScript:

github.com/dustinspecker/awesome-eslint

Prettier


There is a saying in my native language: A hundred people, a hundred preferences.

Now imagine a project where every developer introduced their preference in coding style. Yeah, it’s terrifying for me too.

Now imagine that you can avoid all of that. The good thing is that we don’t have to imagine: we can just use Prettier. Prettier enforces a consistent code style, which is more important than any single developer’s preference.

It is very simple to set up and use:

# install it
npm install --save-dev --save-exact prettier
# add an empty configuration file
echo {}> .prettierrc.json
# format your code
npx prettier --write .

Configure it however you prefer; no one can tell you which style is good or bad. However, two important JavaScript caveats come to mind:

  • Please use semicolons.

    Why?

    JavaScript engines automatically insert semicolons during parsing (this is called ASI, automatic semicolon insertion); if you leave semicolons out, the engine has to guess where they should be inserted, which may result in undesired behavior:

    const a = NaN
    const b = 'a'
    const c = 'Batman'
    (a + b).repeat(3) + ' ' + c

    Now you might think this code will result in 'NaNaNaNaNaNa Batman', but it will actually fail with Uncaught TypeError: "Batman" is not a function (unless there is a function named Batman in an upper scope).

    Why is that?

    JavaScript will interpret this as

    const a = NaN;
    const b = 'a';
    const c = 'Batman'(a + b).repeat(3) + ' ' + c;

    due to the lack of explicitness in regards to semicolons.

    Luckily, the semi rule is enabled by default, so please don’t change it;

  • Use trailing commas,

    This is often overlooked, and might seem like it makes no difference, but there is one:

    Without a trailing comma, when you add a new property you need to add a comma AND the property, which is not only more work, but will also show up as two changed lines in version control (git):

    const person = {
      age: 30,
    - height: 180
    + height: 180,
    + pulse: 60,
    }

    instead of

    const person = {
      age: 30,
      height: 180,
    + pulse: 60,
    }
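To lock in both of these choices, a minimal .prettierrc.json could look like the following (semi is true by default; the default for trailingComma has changed across Prettier versions, so it is worth setting explicitly):

{
  "semi": true,
  "trailingComma": "all"
}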

Ok, now what?

Ok so you have setup types, lint and formatting.

But you have to fix lint and prettier errors all the time and your productivity is taking a nose dive.

Oh, but wait, there are commands you can run that will fix all linting errors and prettify your code. That’s really nice, but it would be even nicer if you didn’t have to run these commands manually…

Automated ways of linting and prettifying

Now if you’re smart (or lazy like me) you can just configure some tool to do this tedious job for you.

Some of the options are:

  1. Configure your IDE to run them on save
  2. Use onchange to run them whenever a file changes
  3. Introduce a pre-commit hook (see the sketch below)

Ideally, you want to run lint fixes and formatting on every save, but if your IDE or machine does not support this, you can run them automatically prior to every git commit command.
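As a sketch of the pre-commit option, one common setup (among several) uses husky together with lint-staged so that staged files are linted and formatted before each commit. The exact husky commands vary between versions; the following assumes a recent one:

# install the tools
npm install --save-dev husky lint-staged
# enable git hooks (husky v9+)
npx husky init
# run lint-staged before every commit
echo "npx lint-staged" > .husky/pre-commit

Then tell lint-staged what to run on staged TypeScript files, for example in package.json:

{
  "lint-staged": {
    "*.ts": ["eslint --fix", "prettier --write"]
  }
}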


Ok, now you are ready and probably very eager to go write some code, so please do so, but come back for chapter 2, because there are important things to do after writing some code.

Or if you prefer TDD, jump straight to chapter 2.

Chapter 2: Tests

So you have written and committed some nicely linted and formatted code (or you prefer writing tests first).

That is amazing, but is it enough?

Simply put, no.

It might look like a waste of time, and a tedious task, but tests are important, mmkay?

Mr Mackey from South Park with the caption Test are important Mmkay

So why is having tests important?

  1. Ensures code quality and correctness: Automated tests serve as a safety net, allowing you to validate the functionality and behavior of your code. By running tests regularly, you can catch bugs, errors, and regressions early in the development process, preferably locally, even before you push them upstream!
  2. Facilitates code maintenance and refactoring: As projects evolve, code often needs to be modified or refactored. Automated tests provide confidence that the existing functionality remains intact even after changes are made. They act as a safeguard, helping you identify any unintended consequences or introduced issues during the refactoring process.
  3. Encourages collaboration and serves as documentation: When multiple developers work on a project, automated tests act as a common language and specification for the expected behavior of the code. They promote collaboration by providing a shared understanding of the system's requirements and functionality. Also, since tests can be named whatever we want, we can use this to our advantage to describe what is expected from some component that might not be that obvious.
  4. Reduces time and effort in the long run: While writing tests requires upfront investment, it ultimately saves time and effort in the long run. Automated tests catch bugs early, reducing the time spent on manual debugging.
  5. Enables continuous integration: Since tests serve as a sort of contract description, we can make changes in functionality while asserting and validating whether we have broken the contract towards other components. They enable continuous integration by providing a reliable filter for potential bugs and unwanted changes in behavior. Developers can quickly detect any issues introduced by new code changes, allowing for faster iteration and deployment cycles.

Writing code without tests is like walking a tightrope without a safety net. Sure, you may get across, but failing might be catastrophic.

Let’s say that we have some complex and unreadable function but we have a test for it:

function getDisplayName(user: { firstName: string; lastName: string }): string {
  let displayName = '';

  for (let i = 0; i < user.firstName.length; i++) {
    displayName = displayName.concat(user.firstName.charAt(i));
  }

  displayName = displayName.concat(' ');

  for (let i = 0; i < user.lastName.length; i++) {
    displayName = displayName.concat(user.lastName.charAt(i));
  }

  return displayName;
}
describe('getDisplayName', () => {
  // because we can name these tests, we can describe what the code should be doing
  it('should return user\'s full name', () => {
    const user = { firstName: 'John', lastName: 'Doe' };
    const actual = getDisplayName(user);

    expect(actual).toEqual('John Doe');
  });
});

Now we are able to refactor the function while being confident that we didn’t break anything:

function getDisplayName(user: { firstName: string; lastName: string }): string {
  // test will fail since we accidentally added a ,
  return `${user.firstName}, ${user.lastName}`;
}

Now you see how tests not only assert the desired behavior, but they can and should be used as documentation.

There is a choice of test types you could introduce to help you safely get across.

If you are unsure which might be the right ones for you, please check out this blog post by my colleague Sean Ferguson.

Ideally, you should be using more than one type of test. It is up to you to weigh and decide which fit your needs best, but once you do, it is very important not to cut corners and to invest in keeping coverage high.

This is the most important investment in our codebase. It will pay the highest dividends and it will keep us safe from failing if we do this part well.

The simplest and fastest tests to write are unit tests, but they are often not enough, because they don’t assert that users of our system experience it as expected. That is the job of integration or e2e tests; although they take longer to set up and write, they are often the better investment, since we can rely on them to cover our system from the perspective of anyone using it.

You can even use AI tools like ChatGPT to generate unit tests based on production code (although they will not be perfect every time).

Ok, so you are convinced and you add a test suite which you will maintain. You also added a command to run tests, and you do so before committing your code. That is very nice but what if someone in the team doesn’t do the same? What if they commit and merge code without running tests? 😱

If only there were a way to automate this and make it public.

Chapter 3: Tying it all together

Now all these enhancements make sense, and you feel pretty happy about them, but without forcing functions that make running them mandatory, they don’t bring much value, since people will bypass them.

Luckily, most code hosting platforms, like GitHub, provide automated workflows that make it easy to enforce these checks and prevent code from being merged if it doesn’t pass them.

Now we can write a workflow that will check all of this for us!

A GitHub workflow that installs dependencies and runs linting, unit, and e2e tests would look something like:

name: Linting and Testing

on: [pull_request]

jobs:
  linting-and-testing:
    runs-on: ubuntu-latest
    steps:
      - name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@<version> # pick the latest release
        with:
          access_token: ${{ github.token }}

      - name: Checkout
        uses: actions/checkout@v3 # or whatever is the highest version at the time

      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: '18.12' # or whatever the latest LTS release is
          cache: 'npm'

      - name: Install dependencies
        run: npm i

      - name: Run ESLint check
        run: npm run lint:ci # npx eslint "{src,tests}/**/*.ts"

      - name: Run unit tests
        run: npm run test

      - name: Run e2e tests
        run: npm run test:e2e

Can we code now?

Yes we can!

But as we said, preparation is half the battle.

The other, longer and harder part is yet to come, and it is paramount to stay disciplined, consistent, and to keep it simple. The easiest way to do so is to be pragmatic, to take pride in and bring maturity to your approach to work, and to keep a mindset that helps not only you but also the people you work with grow.

This is best explained by a dear and pragmatic colleague of mine, Stevan Kosijer, in a blog post series starting with Pragmatic Engineer’s Philosophy.

Conclusion

Although we might instinctively think that writing code is the most productive initial investment of our time in a software development project, without proper setup that is almost never the case. Having confidence in your changes through automated tests, enforcing the rules we find useful, and keeping formatting consistent will greatly improve the velocity and quality of our work.

If your project is integrating with an API, which it most likely is, my honest advice would be to use an SDK. However, if you want a high-quality SDK that can be built and updated on demand, along with documentation, tests, and easy integration with your CI/CD pipeline, please check out our product and perhaps even schedule a demo at liblab.com.


TypeScript, a statically typed superset of JavaScript, has become a go-to language for many developers, particularly when building SDKs that interact with web APIs. TypeScript's powerful type system aids in writing cleaner, more reliable code, ultimately making your SDK more maintainable.

In this blog post, we'll provide a focused exploration of how TypeScript's type system can be harnessed to better manage API routes within your SDK. This post is going to stay focused and concise: we'll look solely at routing tips and intentionally eschew other aspects of SDK authoring, such as architecture, data structures, and handling relations. Our SDK will be simple: it is just going to list a user or users. These tips will help your route definitions be less error-prone and easier to read for other engineers.

At the end, we’ll cover the limitations of the tips in this post, what’s missing, and one way in which you can avoid dealing with having to author these types altogether.

Let’s get started.

Alias your types

Type aliasing is important! It can sometimes be overlooked in TypeScript, but aliases are an extremely powerful documentation and code maintenance tool. Type aliases provide additional context as to why something is a string or a number. As an added bonus, if you alias your types and make a different decision (such as shifting from a numeric ID to a GUID) down the road, you can change the underlying type in one place. The compiler will then call out most of the areas in which your code needs to change.

Here are a couple of examples that we’ll build upon later on:

type NoArgs = undefined;
type UserId = number;
type UserName = string;

Note that UserId is a number here. That may not always be the case. If it changes, finding UserId is an easier task than trying to track down which references to number are relevant for your logic change.

Aliasing undefined with NoArgs might seem silly at first, but keep in mind that it’s conveying some extra meaning. It indicates that we specifically do not want arguments when we use it. It’s a way of documenting your code without a comment. Ditto for UserName. It’s unlikely to change types in the future, but using a type alias means that we know what it means, and that’s helpful.

Note: there’s a subtlety here that’s worth calling out. NoArgs is a type here, while undefined is a value. NoArgs is not the value undefined, but is a type whose only acceptable value is undefined. It’s a subtle difference, but it means you can’t do something like const args = NoArgs. Instead, you would have to do something along these lines: const args: NoArgs = undefined.

Statically define your data structures wherever possible

This is similar to the above, and is generally accepted practice. It essentially boils down to avoiding the any keyword and avoiding turning everything into a plain object ({[key: string]: any}). In this simple SDK, this means only the following:

type User = {
  id: UserId;
  name: UserName;
  // other fields could go here
}

When we need a User or an array of Users, our SDK engineers will now have all the context they need at design-time. Types such as UserName can be more complex as well (you can use Template Types, for example), allowing you to further constrain your types and make it more difficult to introduce bugs. The intricacies of typing data structures is a much larger subject, so we’ll stick to simple types here.
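As a quick illustration of that aside, here is one hypothetical way a template literal type could tighten UserName beyond a plain string (shown as a standalone alternative to the alias defined earlier, not part of the original SDK):

type UserName = `${string} ${string}`; // requires at least one space, e.g. "first last"

const ok: UserName = 'Ada Lovelace'; // compiles
// const bad: UserName = 'Ada'; // error: 'Ada' has no space, so it is not assignable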

Make your routes and arguments more resistant to typos

You’ve likely done it before: you meant to call the users endpoint and accidentally typed uesrs. You don’t find out until runtime that the route is wrong, and now you’re tracking it down. Or maybe you can’t remember if you’re supposed to be getting name or userName from the response body and you’re either consulting the spec, curling, or opening Postman to get some real data. Keeping your routes defined in one place means you only need to consult the API spec once (or perhaps not at all if you follow the tip at the end of the post) in order to know what your types are. Your SDK maintainers should only need to go to one place to understand the routes and their arguments:

type Routes = {
  'users': NoArgs;
  'users/:userId:': UserId;
};

Note that the pattern :argument: was used here, but you can use whatever is best for the libraries/helper methods that you already have. In addition, this API currently only has GET endpoints with no query parameters, so we’re keeping the types on the simple side. Feel free to declare some intermediate types that clearly separate out route, query, and body parameters, as sketched below. Then your function(s) that actually call API endpoints will know what to do with said parameters when it comes time to actually call an endpoint.
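For illustration, here is one hypothetical shape such intermediate types could take (the EndpointArgs name and structure are our own sketch, not part of the post’s SDK):

type EndpointArgs<RouteParams, QueryParams, BodyParams> = {
  routeParams: RouteParams;
  queryParams: QueryParams;
  bodyParams: BodyParams;
};

type RoutesWithKinds = {
  'users': EndpointArgs<NoArgs, NoArgs, NoArgs>;
  'users/:userId:': EndpointArgs<UserId, NoArgs, NoArgs>;
};

This is a good segue into the next point: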

Use generics to make code reuse easy

It’s hard to overstate how powerful generics can be when it comes to maintaining type safety while still allowing code reuse. It’s easy to slap an any on a return value and just cast your data in your calling function, but that’s quite risky, as it prevents TypeScript from verifying that the function call is safe. It also makes code harder to understand, as there’s missing context. Let’s take a look at a couple of types that can help out for our example.

type RouteArgs<T extends keyof Routes> = {
  route: T;
  params: Routes[T];
};

const callEndpoint = <Route extends keyof Routes, ExpectedReturn>(args: RouteArgs<Route>): ExpectedReturn => {
  // your client code goes here (axios, fetch, etc.). Include any error handling.

  // Don't do this, use a type guard to verify that the data is correct!
  return [{id: 1, name: "user1"}, {id: 2, name: "user2"}] as unknown as ExpectedReturn
}

Note the T extends keyof Routes in our generic parameter for the type RouteArgs. This builds upon the Routes type that we used earlier, making it impossible to use any string that is not already defined as a route when you’re writing a function that includes a parameter of this type. This also enables you to use Routes[T], meaning that you don’t have to know the specific type at design-time. You get type safety for all of your calling functions.

Note that we also do not assign a type alias to the type of callEndpoint. This type is intended to only be used once in this code base. If you are defining multiple different callEndpoint functions (for example, if you want to separate out logic for each HTTP verb), aliasing your types to make sure that no new errors are being introduced would be highly recommended.

Note that type guards are mentioned in the comment. This code lives at the edge of type safety: you can never be 100% sure that the data that comes back from your API endpoint has the structure you expect. That’s where type guards come in. Make sure that you’re running type guards against these return types. Type guards are outside the scope of this post, but guarding for concrete types in a generic function can be complex and/or tedious. Depending on your needs, you may choose to use an unsafe type cast similar to the example and put the responsibility of calling the type guard on the calling function. We won’t cover strategies for ensuring these types are correct in this post, but this is an area you should study carefully.
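That said, here is a minimal sketch of what such a guard could look like for User (the isUser helper is our own hypothetical addition, not from the post):

// Narrow an unknown value to User by checking its fields at runtime
const isUser = (value: unknown): value is User =>
  typeof value === 'object' &&
  value !== null &&
  typeof (value as User).id === 'number' &&
  typeof (value as User).name === 'string';

// A caller can then validate data at the edge of type safety:
const data = callEndpoint<'users', unknown[]>({ route: 'users', params: undefined });
const users: User[] = data.filter(isUser); // keeps only elements that pass the guard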

Tying it all together

What do we get for our work? Let’s take a look at the code that an SDK maintainer might write to use the types that we’ve defined:

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'users', params: undefined})

  return users
}

Hopefully it’s clear that we’ve gotten some value out of this. This call is entirely type safe (shown below), and is quite concise and easy to read.

Note that we also don’t have to specify the generic types here. TypeScript is inferring the types for us. If we make a mistake, the code won’t compile! Here are a couple of examples of bad calls and their corresponding errors:

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'user', params: undefined})
  //Type '"user"' is not assignable to type 'keyof Routes'. Did you mean '"users"'?
  return users
}

Look at that helpful error message! Not only does it tell us we’re wrong, it suggests what might be right.

What if we try to pass an argument to this route? If you remember, we defined it to explicitly accept no arguments.

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'users', params: 'someUserName'})
  //Type 'string' is not assignable to type 'undefined'.(2322)
  //{file and line number go here}: The expected type comes from property 'params' which is declared here on type 'RouteArgs<"users">'
  return users
}

This is also helpful, though there are some limitations. TypeScript will not pass through the alias that we defined (NoArgs), unfortunately. However, it does tell us exactly where the source of the error is, allowing an engineer to trace exactly why a string won’t work. The engineer will then see the NoArgs type and have a clear understanding of what went wrong.

What’s missing/limitations?

The examples here could still be improved upon. Note that ExpectedReturn is part of callEndpoint. This means that an SDK maintainer would need to have some knowledge of which type to pick (if not the specific structure). Why not include this information in our Routes type? That may make a good exercise for the reader.

As previously mentioned, type aliases do not get passed through to compiler errors. There are some workarounds, however.

Depending on how you’re handling various verbs, your type guards/generic functions can get quite complex. This won’t have an impact on those maintaining your SDK, but there can be an up-front cost to defining these types. It’s up to you to decide whether to pay that cost.

What was that about avoiding all this?

Hopefully with the tips in this article, you feel more confident about making maintainable SDKs. However, wouldn’t it be nice if you just didn’t have to develop an SDK at all? After all, you have an API spec; and that should be enough to generate the code, right? Fortunately, the answer is yes, and liblab offers a solution to do just that. If you don’t want to think about challenges like error handling and maintainability for your SDK, liblab’s SDK generation tools may be able to help you.


Publishing packages to NPM is not a particularly difficult challenge by itself. However, configuring your TypeScript project for success might be. Will your package work on most projects? Will the users have type-hinting and autocompletion? Will it work with ES Modules (ESM) and CommonJS (CJS) style imports?

After reading this post, you will understand how to make your TypeScript package more accessible and usable in any (or most) JavaScript and TypeScript projects, including browser support!

Creating a TypeScript Project

Chances are that if you are reading this, you already have a TypeScript project set up. If you do, you might want to skip to the next steps or stay around to check for discrepancies.

Let's start by creating our base Node.js project and adding TypeScript as a development dependency:

npm init -y
npm install typescript --save-dev

You likely want to structure your code inside a src folder. So let's create your package's entry point inside of it:

mkdir src
touch src/index.ts

Now, Node.js and browsers don't understand TypeScript, so we need to set up tsc (TypeScript compiler) to compile our TypeScript code to JavaScript. Let's add a tsconfig.json file to our project by running:

npx tsc --init

If we run npx tsc now, it will scan our folder and create .js files in the same directories as our .ts files (which is not desirable). Let's add better configuration before we run that and make a mess.

Add the following lines to tsconfig.json:

{
  "compilerOptions": {
    // ... Other options
    "rootDir": "./src", // Where to look for our code
    "outDir": "./dist" // Where to place the compiled JavaScript
  }
}

Let's also add a "build" script to our package.json:

{
  "scripts": {
    "build": "tsc"
  }
}

If we run npm run build now, a new dist folder will appear with the compiled JavaScript. If you're using Git, make sure to add the dist folder to your .gitignore.
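The corresponding .gitignore entry can be as simple as:

# compiled JavaScript output
dist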

Setting up tsc for Optimal Developer Experience

We can already compile our TypeScript to JavaScript. However, if you publish it to npm as is, you'll only be able to use it seamlessly in other JavaScript projects. Also, the default target configuration is "es2016," while targeting "es2015" gives you broader browser compatibility. So let's fix that!

First, let's change our target to es2015 (or es6 since they're the same). esModuleInterop is true by default. Let's leave it as is since it increases compatibility by allowing ESM-style imports.

We are all using TypeScript for a reason: types! But if you build and ship your package right now, no types will be shipped with it. Let's fix that by setting declaration to true. This will generate declaration files (.d.ts) alongside our .js files. With that alone, your package will be usable in TypeScript projects from the get-go and provide type hints even in JavaScript projects.

The declaration files already go a long way in improving support and developer experience. However, we can go further by adding declarationMap. With that, sourcemaps (.d.ts.map) will be generated to map our declaration files (.d.ts) to our original TypeScript source code (.ts). This means that code editors can go to the original TypeScript code when using "Go to definition," instead of the compiled JavaScript files.

While we're at it, sourceMap will add sourcemap files (.js.map) that allow debuggers and other tools to display the original TypeScript source code when actually working with the emitted JavaScript files.

Using declarationMap and/or sourceMap means we also need to publish our source code with the package to npm.

With all that, here is our final tsconfig.json file:

{
  "compilerOptions": {
    "target": "es2015",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "rootDir": "./src",
    "outDir": "./dist",
    "sourceMap": true,
    "declaration": true,
    "declarationMap": true
  }
}

package.json

Things are much simpler around here. We need to specify the entry point of our package when users import it. So let's set main to dist/index.js.

Other than the entry point, we also need to specify the main types declaration file. In this case, that would be dist/index.d.ts.

We also need to specify which files to ship with the package. Of course, we need to ship our built JavaScript files, but since we are using sourceMap and declarationMap, we also need to ship src.

Here's a reference package.json with all of that:

{
  "name": "the-greatest-sdk", // Your package name
  "version": "1.0.3", // Your package version
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "scripts": {
    "build": "tsc"
  },
  "keywords": [], // Add related keywords
  "author": "liblab", // Add yourself here
  "license": "ISC",
  "files": [
    "dist",
    "src"
  ],
  "devDependencies": {
    "ts-node": "^10.9.1",
    "typescript": "^5.0.4"
  }
}

Publishing to NPM

Publishing to NPM is not difficult. I strongly recommend taking a look at the official instructions, but here are the general steps:

  1. Make sure your package.json is set up appropriately.
  2. Build the project (with npm run build if you followed the guide).
  3. If you haven't already, authenticate to npm with npm login (you'll need an npm account).
  4. Run npm publish.

Keep in mind that if you update your package, you'll need to increase the version field in your package.json before publishing again.
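npm can handle the bump for you; for example, cutting a patch release might look like this (in a git repository, npm version also creates a commit and a tag):

# 1.0.3 -> 1.0.4
npm version patch
npm run build
npm publish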

There are more sophisticated (and recommended) ways to go about publishing, like using GitHub actions and releases, especially for open-source packages, but that’s out of scope for this post.

Conclusion

By following the discussed approach, your TypeScript npm packages will provide better type hinting and auto-completion, and support both ES Modules (ESM) and CommonJS (CJS) style imports, making them more accessible and usable by a wider audience.

Here at liblab, we know that preparing your project for NPM can be annoying. That's why our TypeScript SDKs come prepared with all the necessary adjustments for proper publishing to NPM. We'll even help you set up your CI/CD for seamless publishing. Contact us here to learn more about how we can help automate your API’s SDK creation.


This post will take you through the steps to write files to GitHub with Octokit and TypeScript.

Install Octokit

To get started we are going to install Octokit.

npm install @octokit/rest

Create the code

Then we can create our TypeScript entry point, in this case src/index.ts:

import { Octokit } from '@octokit/rest';

const client = new Octokit({
  auth: '<AUTH TOKEN>'
});

We instantiate the Octokit constructor and create a new client. We will need to replace <AUTH TOKEN> with a personal access token from GitHub. Check out the guide to getting yourself a personal access token from GitHub.

Now that we have our client set up, we are going to look at how we can create files and commit them to a repository. In this tutorial I am going to be writing to an existing repo. This will allow you to write to any repo, public or private, that you have write access to.

Just like using git or the GitHub desktop application we need to do a couple of things to add a file to a repository.

  1. Generate a tree
  2. Commit files to the tree
  3. Push the files

Generate a tree

To create a tree, we first need to get the latest commits. We will use the repos.listCommits method and pass owner and repo arguments: owner is the username or the name of the organization the repository belongs to, and repo is the name of the repository.

const commits = await client.repos.listCommits({
  owner: "<USER OR ORGANIZATION NAME>",
  repo: "<REPOSITORY NAME>",
});

We now want to take that list of commits, get its first item, and retrieve that commit's SHA hash. This will be used to tell the tree where in the history our commits should go. Let's store the hash in a variable (we will reuse latestCommitSHA in the snippets below):

const latestCommitSHA = commits.data[0].sha;

Add files to the tree

Now that we have our latest commit hash, we can begin constructing our tree. We are going to pass the files we want to update or create to the tree construction method, representing them as an Array of Objects. In my example I will be adding 2 files: test.md, which will hold the string Hello World, and time.txt, which will store the latest timestamp.

const files = [
  {
    name: "test.md",
    contents: "Hello World"
  },
  {
    name: "time.txt",
    contents: new Date().toString()
  }
];

Octokit will want the files in a specific format:

interface File {
  path: string;
  mode: '100644' | '100755' | '040000' | '160000' | '120000';
  type: 'commit' | 'tree' | 'blob';
  sha?: string | null;
  content: string;
}

There are a couple of properties in this interface.

  • path - Where in the repository the file should be stored.
  • mode - This is a code that represents what kind of file we are adding. Here is a quick run down:
    • File = '100644'
    • ExecutableFile = '100755'
    • Directory = '040000'
    • Submodule = '160000'
    • Symlink = '120000'
  • type - The type of object we are adding to the tree. A regular file's contents are stored as a 'blob' ('tree' is for directories and 'commit' is for submodules).
  • sha - The last known hash of the file if you plan on overwriting it. (This is not needed otherwise)
  • content - Whatever should be in the file

We can use map to transform our file array into the proper format:

const commitableFiles: File[] = files.map(({name, contents}) => {
  return {
    path: name,
    mode: '100644',
    type: 'blob', // a regular file's contents are stored as a blob
    content: contents
  }
})

Now that we have an array of all the files we want to commit we will pass them to the createTree() method. You can think of this as adding your files in git.

const {
  data: { sha: currentTreeSHA },
} = await client.git.createTree({
  owner: "<USER OR ORGANIZATION NAME>",
  repo: "<REPOSITORY NAME>",
  tree: commitableFiles,
  base_tree: latestCommitSHA,
});

Afterwards, we have the variable currentTreeSHA. We will need this when we actually commit the files.

Next, we actually make a commit on the tree:

const {
  data: { sha: newCommitSHA },
} = await client.git.createCommit({
  owner: "<USER OR ORGANIZATION NAME>",
  repo: "<REPOSITORY NAME>",
  tree: currentTreeSHA,
  message: `Updated programmatically with Octokit`,
  parents: [latestCommitSHA],
});

Push the commit

Then we push the commit:

await client.git.updateRef({
  owner: "<USER OR ORGANIZATION NAME>",
  repo: "<REPOSITORY NAME>",
  sha: newCommitSHA,
  ref: "heads/main", // Whatever branch you want to push to
});

That is all you need to do to push files to a GitHub repository. We have found this functionality to be really useful when we need to push files that are automatically generated or often change.
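To recap, here is the whole flow assembled into a single hypothetical helper (the pushFiles name and signature are ours, and it assumes the authenticated client from the start of the post):

// Push a set of text files to a branch in one call
async function pushFiles(
  owner: string,
  repo: string,
  branch: string,
  files: { name: string; contents: string }[]
): Promise<void> {
  // 1. Find the latest commit on the repository
  const commits = await client.repos.listCommits({ owner, repo });
  const latestCommitSHA = commits.data[0].sha;

  // 2. Create a tree containing our files, based on that commit
  const tree = await client.git.createTree({
    owner,
    repo,
    base_tree: latestCommitSHA,
    tree: files.map(({ name, contents }) => ({
      path: name,
      mode: '100644' as const, // regular file
      type: 'blob' as const,   // file contents are stored as blobs
      content: contents,
    })),
  });

  // 3. Commit the tree
  const commit = await client.git.createCommit({
    owner,
    repo,
    tree: tree.data.sha,
    message: 'Updated programmatically with Octokit',
    parents: [latestCommitSHA],
  });

  // 4. "Push" by pointing the branch ref at the new commit
  await client.git.updateRef({
    owner,
    repo,
    sha: commit.data.sha,
    ref: `heads/${branch}`,
  });
}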

If you find yourself needing to manage SDKs in multiple languages from an API, check out liblab. Our tools make generating SDKs dead simple, with the ability to connect to the CI/CD tools you are probably already using.
