
TL;DR - liblab can generate dev containers for your SDKs so you can start using them instantly. See our docs for details.

As software developers, our productivity is constantly increasing - with better tools, and even AI-powered coding, we can deliver faster and focus on building better solutions to more complex problems. The downside is the initial setup - getting our tools configured and our environments ready can be a big time sink. Anyone who has ever joined a new company with poor onboarding documentation can attest to this: days or even weeks spent trying to find the right versions of tools and getting them configured correctly. It's almost a rite of passage for a new developer to rewrite the onboarding document to add all the new tools and setup needed to get started.

This is something we see at liblab - our customers generate SDKs to increase their productivity, or the productivity of their customers, but validating each SDK means setting up environments to run TypeScript, Java, Python, C#, Go and more. Although this is a one-time cost, we decided to reduce this setup time so you can test out your generated SDKs in seconds - using dev containers!

What is a dev container?

A development container, or dev container, is a pre-configured, isolated development environment that you can run locally or in the cloud, inside a container. Containers come with all the tools you need pre-installed, your code, and any configuration you need to get started, such as building or installing your code. They were designed to solve the setup problem as well as provide reliable deployments - set up your container once for a particular project, and everyone can share that setup. If the setup changes, the dev container changes, and everyone gets the new setup. It's also the closest thing we have to a solution for the "works on my machine" problem - if it works in the dev container, it will work for everyone. It's literally like shipping your machine to your customers!

Running dev containers

Dev containers can be run locally through your favorite IDE, such as Visual Studio Code or IntelliJ, as long as you have container tooling installed, such as Docker. You can also run containers in the cloud, using services like GitHub Codespaces.

For example, you can create a dev container for Python development for one of your projects that is based on a defined version of Linux, has Python 3.11 installed, is configured to use the Python Visual Studio Code extension, and will install the required pip packages from your project's requirements.txt file when the container is created. As soon as you open this in Visual Studio Code, it will spin up the container, make sure the Python extension is installed, and install your pip packages - you are instantly ready to start coding.

Dev container isolation

Because dev containers are just that - containers, they run isolated from your local machine, so you don't need to worry about conflicting tool versions, or other conflicts. What is installed in the container just lives in the container, and what you have locally is not available inside that container. This means you can have multiple dev containers for different projects, and they won't conflict with each other. For example - you could have Python 3.11 installed locally, have one project in a dev container using Python 3.10 and another for legacy work using Python 2! Each will be isolated and have no idea about the other.

How are dev containers set up?

A dev container is defined by having a folder called .devcontainer in the root of your project. This folder contains a devcontainer.json file, which defines the container image to use, and any configuration you need to run your code. This devcontainer.json file can reference a pre-defined environment, such as Python, or NodeJS with TypeScript, or you can define your own environment using a Dockerfile.

.
└── .devcontainer/
    ├── devcontainer.json
    └── Dockerfile

In the devcontainer.json file, you can also define any extensions you want to install in your IDE. By having the configuration in one or more files, these dev containers are instantly reproducible - check them into source code control and when someone else clones your repo and opens the folder, they will instantly get exactly the same environment.
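For example, a minimal devcontainer.json for a Python project like the one described above might look something like this sketch - the image, extension, and post-create command are illustrative, and yours will depend on your project:

{
  "name": "Python project",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postCreateCommand": "pip install -r requirements.txt"
}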

You can read more on configuring dev containers in the dev container documentation.

How can I use dev containers with my generated SDK?

liblab can be configured to create the dev container configuration files for your SDK. This is done by an option in your configuration file:

{
  ...
  "customizations": {
    "devContainer": true
  },
  ...
}

When you run the SDK generation process using liblab build, the generated SDK folder will come with the .devcontainer folder all ready to go. The container will be configured to not only install the relevant language tooling, but your SDK will also be installed ready to be used.

Use your dev container to test your SDK

To test your SDK in a dev container, make sure you have Docker Desktop (or another Docker-compliant container tool) running, and open the SDK folder for your language of choice in Visual Studio Code. You will be prompted to open the folder in a container - select Reopen in Container and your dev container will be created. This may take a few minutes the first time, but subsequent launches will be much faster.

The VS Code reopen in container dialog

Let's use the dev container created with a Python SDK as an example. In this example, I'm going to use the Python SDK for our llama store API sample. Open this SDK in VS Code, or even open it in a codespace in GitHub.

Once the container is opened, a script is run to install the SDK dependencies, build the Python SDK as a wheel, then install this locally. There's no need to use a virtual environment, as the container is isolated from your local machine, so the installed wheel is only available in the container.

The output from the terminal showing the llama store SDK installed

As well as installing the SDK, the dev container will also install PyLance, the Python extension for Visual Studio Code, so VS Code will be fully configured for Python development. You can see this in the extensions panel:

The VS Code extensions panel with PyLance and the Python language server installed

The SDK is now available to any Python code you want to run inside the container. Every SDK comes with a simple example - liblab takes the default GET method and creates a small sample using it, which you can find in the examples/sample.py file. If you open this file, you will see that VS Code knows about the SDK, with IntelliSense, documentation and code completion available:

The sample.py file with both code completion and documentation showing
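The exact contents of sample.py depend on your API, but a generated sample for the llama store might look roughly like this hypothetical sketch - the package, client, and method names here are illustrative, not the real generated names:

from llama_store import LlamaStore  # hypothetical package and client names

# The client wraps the REST calls to the llama store API
client = LlamaStore()

# Call the default GET endpoint that the sample is built around
llamas = client.llama.get_llamas()

for llama in llamas:
    print(llama.name, llama.rating)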

It's now much easier to play with the SDK and build out code samples or test projects to validate your SDK before you publish it.

Conclusion

Dev containers are a great way to test out a new SDK. liblab can generate the dev container configuration for your SDK, so you can get started instantly. To find out more, check out our documentation.

← Back to the liblab blog

At liblab we generate software development kits, or SDKs, for your APIs. But what do we mean by 'SDK generation', and how does it work? This post explains everything you need to know about SDK generation, and how it can help you make your APIs more accessible.

What is SDK Generation?

Put simply, SDK generation is the process of automatically generating SDKs from an API specification. You have an API exposed using something like REST, and you want to make it easier for developers to access that REST API.

You could just let them access the API directly, but this relies on your users not only being experts in their own domains, but also knowing how to make REST calls, and to a certain degree the best practices for using your API. By creating an SDK, you are building a layer of abstraction over that API, embedding those best practices into the internals of the SDK code, and providing a nicer interface to your API in the programming languages that the developer is using.

In my experience, every team of developers that accesses APIs will always build some kind of layer of abstraction themselves. This will contain things like wrapper objects for the requests and responses to avoid using JSON, and service classes that wrap the REST requests in methods. These layers of abstraction may also include things like authentication, refreshing access tokens, retries, and error handling. This takes a lot of work, and is often not shared between teams in an enterprise who are all using the same API.

By auto generating an SDK, you can provide this layer of abstraction for your API, and ensure that all developers are using the same best practices. This cuts down on the boilerplate code being written, and allows developers to focus on solving real problems instead of wrapping JSON and REST. You don't write swathes of code; you use a tool that does all the hard work for you, taking in your API and spitting out an SDK.

A machine that converts APIs to SDKs

This auto generation also handles updates - add a new endpoint to your API? Regenerate the SDK, and the new endpoint will be available to your users. This means that your users will always have access to the latest version of your API, and you don't need to worry about them using an old version of your SDK.

Read more on a comparison between APIs and SDKs.

How Does SDK Generation Work?

SDK generation is computers writing code for you. A tool takes an API specification, and generates code that can be used to access that API. The SDK code is generated in the programming languages of your choice.

REST API Validation

Every generated SDK starts from an API specification. These use standards like OpenAPI to define the API - including the endpoints that can be called, and the expected data that will be sent as the request body, or the response of the call.

For example, your spec might have an endpoint called llama/{llama_id} that takes a llama Id in the URL, and returns a JSON object containing the details of that llama. The spec will define the URL, the HTTP method (GET), and the expected response body.

/llama/{llama_id}:
  get:
    tags:
      - llama
    summary: Get Llama
    description: Get a llama by ID.
    operationId: get_llama_by_id
    parameters:
      - name: llama_id
        in: path
        required: true
        schema:
          type: integer
          description: The llama's ID
          title: Llama Id
        description: The llama's ID
    responses:
      '200':
        description: Llamas
        content:
          application/json:
            schema:
              type: array
              items:
                $ref: '#/components/schemas/Llama'
              title: Response 200 Get Llama By Id

The endpoints use schemas to define the objects that are sent or returned. In the example above, the response is defined as an array of Llama objects. The schema for a Llama object might look like this:

Llama:
  properties:
    name:
      type: string
      maxLength: 100
      title: Name
      description: The name of the llama. This must be unique across all llamas.
    age:
      type: integer
      title: Age
      description: The age of the llama in years.
    color:
      allOf:
        - $ref: '#/components/schemas/LlamaColor'
      description: The color of the llama.
    rating:
      type: integer
      title: Rating
      description: The rating of the llama from 1 to 5.
    id:
      type: integer
      title: Id
      description: The ID of the llama.
  type: object
  required:
    - name
    - age
    - color
    - rating
    - id
  title: Llama
  description: A llama, with details of its name, age, color, and rating from 1 to 5.

Before the SDK can be generated, the API needs to be validated. For example, the llama/{llama_id} endpoint returns a llama object defined using the #/components/schemas/Llama schema - if this schema doesn't exist, then the SDK cannot be successfully generated. The validation also looks for other things - for example, does the endpoint have an operationId defined for each verb, which is used to generate the method name in the SDK? Are there descriptions for each endpoint, which can be used to generate the documentation for the SDK?

liblab can validate your API spec, and will give you a list of issues to resolve before you generate your SDK. The better your spec, the better your SDK will be.

SDK Generation

Once your API has been validated, the SDK can be generated. This is done by taking the validated API specification, and adding a sprinkle of liblab magic to generate the SDK code. This 'magic' is smart logic to build out model objects that match the requests and responses, as well as generating services that wrap the REST calls. The models handle missing fields in the responses, or patterns that are defined (such as ratings needing to be between 1 and 5, or an email address field needing to be a valid email). The services implement required logic such as retrying failed requests, handling authentication, refreshing access tokens, and handling errors.

SDK generation also understands the programming language that you want to generate the SDK in. This means that the generated code will be idiomatic for that language. For example naming will be idiomatic to the language, so for the above llama example, the service will be called Llama in Python and C#, but the method to get a llama by Id will be get_llama_by_id in Python, and GetLlamaByIdAsync in C# - using snake case for Python and Pascal case for C#. The generated code will also use the idiomatic way of handling asynchronous calls - for example, in C# the generated code will use Task and async/await to handle asynchronous calls, naming them with the Async suffix.

A lot of the features generated in the SDK can be customized. For example, you can customize how retries are handled, including how many attempts and the time between retries. You can even hook into the API lifecycle and add custom logic to handle requests before they are sent, or responses before they are returned to the caller.

Documentation Generation

As well as generating SDKs, liblab also generates documentation for the SDK from the validated API. This documentation not only shows how to call the API directly using tools like curl, but also code samples showing how to use the SDK. This way developers can literally copy and paste code examples from your documentation to get started quickly using your SDK.

This documentation is built using the descriptions and examples from the API spec, so the better the documentation in the spec, the better the documentation for the SDK. Your code samples will include these examples.

Packaging

It's all very well to create an SDK, but you need to store the code of your SDK somewhere and distribute it to your users, typically via a package manager like PyPI, npm or NuGet. liblab generates all the relevant package manifest files, and can raise a pull request against your repository to add the generated SDK code. You can then review the PR, merge it, and publish the package to your package manager of choice - either a public package manager, or an internal one.

Best Practices For SDK Generation

Here are some best practices for SDK generation, to help you get the most out of your generated SDKs.

Understand Your Users’ Needs

The most important part of any software development process is:

Know thy user

Understand what your users need. This might be their requirements for your SDK, or it might be your knowledge of how the SDK should work to give your users a seamless experience.

Generate for your users' preferred programming languages

What programming languages are your users likely to use? For example, if you are developing for a large enterprise, chances are a C# and Java SDK might have a larger audience than a Go SDK. Smaller companies might be more likely to use Python or TypeScript.

Build for your users - and the big advantage of using SDK generation tools like liblab is that growing the languages you support can be as simple as adding a new language to your config file, then packaging up the SDK for distribution to your users.

Simplify the Authentication Process

Authentication is hard, and everyone hates writing code to authenticate against an API. Make it easy for your users by handling authentication in the SDK. This might be as simple as providing a method to set the access token, or it might be more complex, such as handling the OAuth flow for your users by hooking into the API lifecycle.

SDKs can also handle refresh tokens, so if you want a persistent connection to your API, you can handle refreshing the access token in the SDK, and your users don't need to worry about it. This is very useful when writing server code that will run for months or even years on end, rather than desktop apps where the user logs in each day.
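As a rough illustration of what that hand-holding looks like inside an SDK, here is a minimal sketch of a client that refreshes its access token transparently. The token endpoint, field names, and class are assumptions for illustration, not liblab's generated code:

import time
import requests

class ApiClient:
    def __init__(self, refresh_token: str):
        self._refresh_token = refresh_token
        self._access_token = None
        self._expires_at = 0.0

    def _token(self) -> str:
        # Refresh the access token if it is missing or about to expire
        if self._access_token is None or time.time() > self._expires_at - 60:
            response = requests.post(
                "https://api.example.com/token",  # hypothetical token endpoint
                json={"refresh_token": self._refresh_token},
            )
            response.raise_for_status()
            body = response.json()
            self._access_token = body["access_token"]
            self._expires_at = time.time() + body["expires_in"]
        return self._access_token

    def get(self, path: str):
        # Every call picks up a valid token - the caller never has to think about it
        headers = {"Authorization": f"Bearer {self._token()}"}
        response = requests.get(f"https://api.example.com{path}", headers=headers)
        response.raise_for_status()
        return response.json()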

Have a good API spec

The old adage of 'garbage in, garbage out' applies here. If your API spec is not well defined, then the generated SDK will not be well defined.

Make sure you have tags and operation Ids defined, so the SDK services can be well named and grouped correctly - SDK generation uses the tag property to group endpoints into services, and the operationId to name the methods in the service (adjusting for different programming languages of course). For example, with this spec:

/llama:
  get:
    tags:
      - llama
    operationId: get_llamas
/llama_picture:
  get:
    tags:
      - llama_picture
    operationId: get_llama_pictures

This will give:

| Service name  | Method name (Python) | Method name (C#)      |
| ------------- | -------------------- | --------------------- |
| llama         | get_llamas           | GetLlamasAsync        |
| llama_picture | get_llama_pictures   | GetLlamaPicturesAsync |

Without these, the endpoints will be grouped into one service, and the names will be generated from the method and the URL, which might not give what you want.

For help on improving your OpenAPI spec, check out our Why Your OpenAPI Spec Sucks blog post.

Have Examples and descriptions

When coding, examples always help, giving developers something to start from when using a new SDK or library (hence the old joke about most code being copied and pasted from Stack Overflow). The same applies to SDKs - if you have examples in your API spec, then these will be used to generate examples in the SDK documentation, and your users can copy and paste these examples to get started quickly.

Examples also help your users understand what data they should send to your API, and what they will get back. This is especially important for complex objects.

Descriptions are converted into code documentation, both in the SDK and in the docs that accompany it. These make it easier for your users to understand what the SDK is doing, and how to use it. In the example below, the documentation comes from the description in the API spec. The spec is:

APITokenRequest:
  properties:
    email:
      type: string
      title: Email
      description: The email address of the user. This must be unique across all users.
    password:
      type: string
      title: Password
      description: The password of the user. This must be at least 8 characters long, and contain
        at least one letter, one number, and one special character.

This gives the following documentation in your Python SDK:

A documentation popup for an APITokenRequest showing the descriptions of the email and password properties

Conclusion

SDKs make APIs more accessible for your users, and automatically generating SDKs makes it easier for you to provide SDKs for your APIs. liblab can help you generate SDKs for your APIs, and we can help you with the process of generating SDKs, and the best practices for doing so. Get in touch to find out more.

← Back to the liblab blog

This is a guest post by Emmanuel Sibanda, a Full Stack Engineer with expertise in React/NextJS, Django, Flask who has been using liblab for one of his hobby projects.

Boxing data is very hard to come by; there is no single source of truth. One could argue that BoxRec is the 'single source of truth'. However, on BoxRec you will only find stats on a boxer's record and a breakdown of the fights they have had. If you want more nuanced data to better understand each boxer, you would need to go to CompuBox to get data on punch stats recorded per fight. This doesn't include all fights, as they presumably only include fights that are high profile enough for CompuBox to show up and manually record the number and type of punches thrown.

Some time back I built a project automating retrieving data from BoxRec and enriching this with data from CompuBox. With this combination of data, I can analyze:

  • a boxer's record (eg. what is the calibre of the opponents they have faced, based on their opposition's track record)
  • a boxer's defense (eg. how many punches do their opponents attempt to throw at them in each recorded fight and, on average, how many of these punches actually land). I could theoretically break down how well the boxer defends jabs and power shots
  • a boxer's accuracy using similar logic to above
  • how age has affected both a boxer's accuracy and defense based on the above two points
  • a comparison of whether being more defensive or accurate has a correlation to winning a fight (eg. when a fight goes the full length, do judges often have a bias towards accuracy, aggression, or defense)

These are all useful questions. Whether you want to leverage machine learning to predict the outcome of a fight, build a realistic boxing game, or something else entirely, answering them could help you construct additional parameters to use in your prediction model.

Task: Create an easily accessible API to access the data I have collected

Caveat: I retrieved this data around November 2019 - a lot has happened since then, and I intend to fetch new data on the 19th of November 2023.

When I initially built this project out - a simple frontend enabling people to predict the outcome of a boxing match based on a machine learning model I built using this data - I got quite a few emails from people asking me how I got the data to build this model.

To make this data easily accessible, I developed a FastAPI app with an exposed endpoint for data retrieval. The implementation adheres to OpenAPI standards. I integrated Swagger UI to enable accessibility directly from the API documentation. You send the name of a boxer and receive stats pertaining to their record.

Creating an SDK to enable seamless integration using liblab

I intend to continue iteratively adding more data and ensuring it is up to date. To make this more easily accessible, I decided to create a Software Development Kit. In simple terms, think of this as a wrapper around the API that comes with pre-defined methods you can use, reducing how much code you need to write to interact with the API.

In creating these SDKs, I ran into a tool: liblab, an SDK-as-a-service platform that enables you to instantly generate SDKs in multiple languages. The documentation was very detailed and easy to understand. The process of creating the SDK was even simpler. I especially liked that when I ran the command to build my SDKs, I got warnings with links to OpenAPI documentation to ensure that my API correctly conformed to OpenAPI standards, as failing to do so could result in a subpar SDK.

Here's a link to version 1 of the BoxingData API.

Feel free to reach out regarding any questions you have, data you want me to include and if you want me to send you the SDKs (Python and TypeScript for now). You can find me on LinkedIn and Twitter.

← Back to the liblab blog

SDK and API are two terms bandied around a lot when developers think about accessing services or other functionality. But what are they, and what are the differences between them? This post will teach you everything you need to know and how SDKs can benefit your software development process!

Key Differences: SDK vs API

API (Application Programming Interface) is a set of rules that allow different software applications or services to communicate with each other. It defines how they can exchange data and perform functions. APIs are often used for integrating third-party services or accessing data from a platform, and they are language-agnostic.

SDK (Software Development Kit) is a package of tools, code libraries, and resources designed to simplify software development for a specific platform, framework, or device. SDKs are platform-specific and provide pre-built functions to help developers build applications tailored to that platform. They are typically language-specific and make it easier to access and use the underlying APIs of the platform they are designed for.

As we compare SDK vs API, here are some key differences:

| API                                                                                              | SDK                                          |
| ------------------------------------------------------------------------------------------------ | -------------------------------------------- |
| Pass data as JSON                                                                                | Pass data as strongly typed objects          |
| Call endpoints defined using strings                                                             | Call methods or functions                    |
| No compiler or linter checking                                                                   | Compiler and linter checking                 |
| No automatic retries                                                                             | Automatic retries can be defined in the SDK  |
| You have to read the docs to discover services or learn what data to pass or receive            | Intellisense and documentation               |
| Can be called from any programming language, as well as tools like Postman or low/no-code tools | Can only be called from compatible languages |

What is an Application Programming Interface (API)?

An API, or application programming interface, is an interface to a system that application programmers can write code against. This reads like I'm just juggling the words around, so let's break down this statement.

There are many systems and services out there that you might want to integrate into your application. For example, you might want to use Stripe as a payment provider. As you program your application, you need to talk to these services, and these services define an interface you can use to talk to them - this interface lists all the things you can do with the service, and how to do them, such as what data you need to send or what you will get back from each call.

Application Programming Interfaces in the modern world

In the modern software development world we think of APIs as a way of making calls to a service over networks or the internet using standard protocols. Many services have a REST API - a set of web addresses, or URLs, you can call to do things. For example, a payment provider API will have endpoints you can call to create a payment or authorize a credit card. These endpoints will be called using HTTP - the same technology you use when browsing the internet, and can take or return data either in the URL, or attached to the call in what is called the body. These are called using verbs - well defined named actions, such as GET to get data, or POST to create data.

An API with 2 methods exposed, a GET and a POST on the /user endpoint

Calling APIs

Calling an API endpoint is referred to as making a request. The data that is returned is referred to as a response.

APIs can be called from any programming language, or from tools like Postman.

There are many protocols APIs can use, including REST, gRPC, GraphQL and SOAP. REST is the most common, and the one I'll be referencing in this article.

What is a Software Development Kit (SDK)?

A software development kit is a code library that implements some kind of functionality that you might want to use in your application.

What can SDKs be used for?

SDKs can implement a huge range of functionality via different software components - they can include visual components for desktop or mobile apps, they can interact with sensors for embedded apps, or provide frameworks to speed up application development.

SDKs for your APIs

Another use case for SDKs is to provide a wrapper around an API to make it easier to call from your application. These SDKs make APIs easier to use by converting the calls you would make into methods, and wrap the data you send and receive in objects.

An SDK with 2 methods exposed, a getUser method that wraps the GET on /user and a createUser that wraps the POST on /user

These SDKs can also manage things like authentication or retries for you. For example, if the call fails because the service is busy, the SDK can automatically retry after a defined delay.
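As a rough sketch of that retry behavior (not any particular SDK's implementation, and the endpoint URL is hypothetical), the logic an SDK hides from you might look something like this:

import time
import requests

def get_with_retries(url: str, attempts: int = 3, delay_seconds: float = 1.0):
    # Retry on "busy" style responses, backing off a little more each time
    for attempt in range(attempts):
        response = requests.get(url)
        if response.status_code not in (429, 503):
            response.raise_for_status()
            return response.json()
        time.sleep(delay_seconds * (attempt + 1))
    raise RuntimeError(f"{url} still busy after {attempts} attempts")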

For the rest of this article, when I refer to SDKs I'll be talking about SDKs that wrap APIs.

How Do APIs Work?

APIs work by exposing a set of endpoints that you can call to do things.

API Endpoints

These endpoints are called using verbs that roughly align to CRUD (create, read, update, delete) operations.

For example, you might have a user endpoint that handles the following verbs:

| Verb   | Description   |
| ------ | ------------- |
| GET    | Read a user   |
| POST   | Create a user |
| PUT    | Update a user |
| DELETE | Delete a user |

API Data

Data is typically sent and returned as JSON - JavaScript object notation. For example, a user might be returned as:

{
  "id": 42,
  "firstname": "Jim",
  "lastname": "Bennett",
  "email": "jimbobbennett@example.com"
}

How Do SDKs Work?

Software Development Kits work by wrapping the API calls in code that is easier for developers to use.

Creating a user with an API

For example, if I wanted to create a user using an API, then my code would need to do the following:

  1. Create an HTTP client - this would be code from a library that can make HTTP calls.
  2. Create a JSON object to represent my user.
  3. If my API requires authentication, I would need to get an access token and add it to the request.
  4. Send the JSON object to the API endpoint using the HTTP client.
  5. Get a response and see if it was successful or not.
  6. If it was successful, parse the response body from JSON to get the user Id.

This is a number of steps, and each one is error prone as there is no compiler or linter to help you. For example, if you sent the first name in a field in the JSON object called "first-name" and the API expected it to be "firstname" then the API call would fail at run time. If you forgot to add the access token, the API call would fail at run time. If you forgot to check the response, your code would continue to run and would fail at some point later on.
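To make this concrete, here is a rough sketch of those steps in Python using the requests library. The endpoint URL, field names, and the get_access_token helper are assumptions for illustration:

import requests

access_token = get_access_token()  # hypothetical helper - you manage auth yourself

response = requests.post(
    "https://api.example.com/user",  # you have to get the endpoint and verb right
    json={"firstname": "Jim", "lastname": "Bennett", "email": "jim@example.com"},
    headers={"Authorization": f"Bearer {access_token}"},
)

# You also have to remember to check the response and parse the JSON yourself
if response.ok:
    user_id = response.json()["id"]
else:
    raise RuntimeError(f"Failed to create user: {response.status_code}")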

Creating a user with an SDK

An SDK on the other hand would implement most of this for you. It would have a class or other strongly typed definition for the user object, and would handle things like authentication and error checking for you. To use an SDK you would:

  1. Create an instance of the SDK class.
  2. Set the authentication token once on the SDK so it can be used for all subsequent calls.
  3. Create an instance of the user object, and set the properties.
  4. Call the SDK method to create the user, passing in the user object.
  5. If this fails, an exception would be thrown, otherwise the user Id would be returned.
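In code, using a hypothetical generated SDK, that might look roughly like this - the package, class, and method names are illustrative only:

from example_sdk import Client, User  # hypothetical generated SDK

client = Client(access_token="...")  # authentication is set once for all calls

# A typo in a property name here is caught by the compiler or linter, not at run time
new_user = User(firstname="Jim", lastname="Bennett", email="jim@example.com")

# Failures raise an exception; on success the created user's Id is returned
user_id = client.users.create(new_user)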

Benefits of Application Programming Interfaces

APIs are the perfect way to expose a service to your internal infrastructure or the world via the internet. For SaaS (Software-as-a-Service) companies like Auth0 and Stripe, their APIs provide the services that their customers use to integrate with their applications. Internally organizations can build microservices or other internal services that different teams can use to build applications. For example, a company might have a user service that manages users, and a product service that manages products. These services would expose APIs that other teams can use to build applications.

By using a standard protocol such as REST you are providing the most widely used interface - development tools, programming languages, and many low/no-code technologies can call REST APIs. This means that your service can be used by any application, regardless of the technology it is written in.

Pretty much every service should have an API if it needs to be called from an application.

Benefits of Software Development Kits

SDKs on the other hand are software components that provide wrappers over APIs. They make it easier to call APIs without making mistakes by providing things like type safety and error handling. If you use the wrong name for a field on an object, your compiler or linter will tell you (probably as soon as you type it, with a red squiggly in your IDE). If you forget to add an access token, the SDK will throw an exception, but once set it can be used for all calls without needing to be set every time. If the API call fails, the SDK can retry for you.

The benefit of an SDK is this hand-holding - it's a wrapper around the API that makes your life easier. An SDK takes nothing away from your API, developers can still call it directly if they are so inclined, but the SDK makes it substantially easier to use.

How and When to Choose Between SDKs or APIs?

As a provider of a service, there is no choice as such - you have to provide an API so software developers can call your service. Should you provide an SDK as well, as part of your development process? Well, yes - it improves the experience for your users, and makes it easier for them to use your service. If you don't provide an SDK, then your users will have to write their own, and that's a lot of work for them. Keep your customers and users happy, right?

Conclusion

In this post we've looked at the differences between SDKs and APIs, and when you might use one over the other. We've seen that APIs are the interface to a service, and SDKs are wrappers around APIs that make them easier to use. We've also seen that APIs are the best way to expose a service to the world, and SDKs are the best way to make it easier to use an API.

Can I automate SDK generation?

The obvious question now is how do I create an SDK for my API. You could write one yourself, but why do that when liblab can automate SDK generation for you as part of your software development process? Check out liblab.com for more information and to sign up!

← Back to the liblab blog

In the ever-evolving software development landscape, selecting the right tools can make or break your project's success. With a plethora of options available, it can be overwhelming to choose the best one. In this blog post, we will discuss why liblab stands out as a superior choice over its competitors in various aspects, including user-friendliness, customization, support, security, reliability, cost, number of supported languages, and documentation.

User-Friendliness: Human Readability and IDE Compatibility

liblab prides itself on its user-friendly nature. The code generated by liblab looks like it was written by a human rather than a machine, making it easier to read and understand. Additionally, liblab's code is easily picked up by Integrated Development Environments (IDEs), providing users with helpful type hinting for a seamless development experience.

Customization

liblab offers unique customizations tailored to your business’ needs, with over 147 hours of investment put into refining these features. Regardless of your needs, liblab can be customized to provide a solution that meets your unique requirements and ensures the best possible development experience.

Support: A Comprehensive Solution

Unlike many competitors, liblab is more than just a product; it is a complete solution that includes both product and service. With a dedicated Technical Account Manager (TAM), liblab ensures that you meet your developer experience goals via SDKs and documentation.

Security: SOC2 Compliance and Best Practices

Security is paramount in today's digital world. liblab is SOC2 compliant and continuously incorporates best practices to ensure that your data and developers are protected at all times.

Reliability: On Call Support and Code Reliability

liblab offers on-call support with Service Level Agreements (SLAs) that guarantee a response to your requests within 12 hours. Furthermore, liblab generates tests for all its SDKs, ensuring code reliability and reducing the likelihood of unexpected issues.

Cost: Upfront Savings and Minimized Backend Costs

By choosing liblab, you can significantly reduce costs associated with building and maintaining your development infrastructure. liblab's upfront cost eliminates the need to hire a team and develop subject matter expertise over time, allowing your engineers to focus on higher ROI, mission-critical work.

Number of Supported Languages: Idiomatic and Quality-driven

By the end of the year, liblab plans to support six languages, with a focus on idiomatic patterns. This ensures that each language is of high quality and useful for developers. While competitors may offer more partially-maintained languages, liblab emphasizes quality first, with quantity following soon after.

Documentation: SDK Embedded Docs

liblab auto-generates powerful documentation that includes code examples from your SDKs, making it easier for developers to understand and use the software.

In conclusion, liblab outshines its competition in multiple aspects, making it the ideal choice for your development needs. With its user-friendly code, extensive customization, comprehensive support, strong security, impressive reliability, cost-effective pricing, commitment to quality-driven language support, and robust documentation, liblab is the clear winner in the race for the best development solution.

← Back to the liblab blog

liblab is excited to be sponsoring APIWorld 2023 where you can join thousands of global technical leaders, engineers, software architects, and executives at the world’s largest and longest-running API & microservices event – in its 11th year!

The API world logo

This conference is running in-person at the Santa Clara Convention Center, and online a week later. We will be there, both in person and online, so come and meet us and learn about how liblab can generate better SDKs for your APIs!

Get a free SDK

liblab is currently in beta, but if you want to skip the queue and get early access, talk to us at APIWorld. We'll be granting early access to everyone at the event.

We'll also be on hand to review API specs, and generate a high quality, human readable SDK for your API. You can then see how your developer experience is improved by a good SDK. Just leave us your email address and a link to your API spec at our booth, and we'll send you a copy of your SDK.

Learn from our experts

On our booth you will find some of the world's finest (in our opinion) OpenAPI experts, who will be able to discuss your API and help you produce the best API spec possible, allowing you to quickly generate high quality SDKs. We can talk you through some common problems, as well as best practices for API design. If you want a sneak preview of our expertise, check out our why your OpenAPI spec sucks post from Sharon Pikovski.

We also want to learn from you! We'll give you the chance to vote on your favorite SDK languages, and share your stories of the best and worst SDKs you've used. If you're up for it we'd love to share your tales on our YouTube channel.

I'll also be giving a demo-heavy session called From APIs to SDKs: Elevating your Developer Experience with automated SDK generation where I will talk through why SDKs are better for your customers than accessing APIs directly (yup, they really are). I'll also show how you can automate the process of generating SDKs by integrating liblab into your CI/CD pipelines. There will be plenty of code on show and resources to help you re-create the demos yourself. I'll be speaking on Thursday 26th October at 2:30pm in the Expo Hall.

A picture of Jim on a stage at a conference standing next to a podium with a laptop on it. Jim is wearing a black t-shirt and is working on the laptop

More details on the APIWorld session page.

Meet the team

We are busy preparing for our presence in the expo hall at the event. Literally - half of my home office is full of booth bits 😁. We'll be there to talk APIs and SDKs, with a load of demos showing how you can use liblab to quickly generate SDKs, and implement this SDK generation into your CI/CD pipelines.

As expected, we'll have some swag to give away - in our case, edible swag! I'm personally not a fan of a lot of conference swag; we've all had our fill of cheap pens, USB cables that break, notebooks, and terrible mugs, and a lot of this ends up in landfill. To be more sustainable, we'd rather give you something that leaves a more pleasant taste in your mouth, literally. Come and see us to find out more!

We'll also have some stickers, after all, who doesn't love stickers?

2 liblab stickers on a wooden table. One has the liblab logo, a simple line drawn llama face with curly braces for the side of the head, and liblab.com, the other has a version of the liblab llama logo with hearts for eyes and the caption love your SDK

We'll be available on a virtual booth during the online event, so if you can't make it to Santa Clara, you can still come and meet us online. No stickers or edible swag at the virtual event though, sorry!

Meet the llama

You may also get the chance to meet a real life* llama 🦙. Snap a pic with our llama and tweet it with the hashtag #liblabLlama and tag @liblaber to get a special sticker!

A sticker with a llama mascot and the text I met the liblab llama

* This is not actually true, it's Sean in a llama costume.

See you there - on us!

We have a number of open tickets to give away for the in-person and virtual events with access to the expo hall and some of the sessions. If you want to come meet us, then sign up for our beta and we'll send you a ticket. We'll be giving away tickets on a first come, first served basis, so don't delay!

Otherwise, head to apiworld.co to get your tickets now. We can't wait to meet you!

← Back to the liblab blog

Our mission is to empower developers with cutting-edge tools and resources, and at the core of this mission is the assurance that their data is secure. The significance of data security cannot be overstated, and this is why few milestones are as transformative as achieving System and Organization Controls (SOC) compliance.

liblab has successfully completed a comprehensive SOC 2 Type II audit, conducted by Sensiba LLP, a leader in audit services. We are thrilled to share the significance of this accomplishment and why it is crucial not only for our organization but also for our customers. In the short read ahead we’ll discuss the importance of attaining SOC 2 certification, how it impacts our operations, and most importantly, how it benefits our valued customers.

SOC 2 compliance logo

The Road to SOC 2 Compliance

SOC 2 is a rigorous set of standards developed by the American Institute of Certified Public Accountants (AICPA) to assess the security, availability, processing integrity, confidentiality, and privacy of customer data within service organizations. It is a comprehensive framework that demands the highest level of commitment to data security and privacy. Achieving SOC 2 compliance was not a straightforward task for liblab. Here are some of the challenges we encountered along the way:

Complex Documentation and Policies

The foundation of SOC 2 compliance lies in meticulous documentation and well-defined policies and procedures. Developing comprehensive documentation, including data security policies, incident response plans, and access control procedures, can be a time-consuming and complex process. We had to ensure that our documentation was not only thorough but also aligned with the stringent requirements of SOC 2.

Resource Allocation

Achieving SOC 2 compliance requires a substantial allocation of resources, both in terms of time and personnel. We had to designate a dedicated team to work on compliance-related tasks, diverting their efforts from other critical projects. This reallocation of resources was necessary to ensure the successful completion of the SOC 2 audit process.

Continuous Monitoring

SOC 2 compliance is not a one-time achievement but an ongoing commitment. Continuous monitoring and assessment of controls and processes are required to maintain compliance. This means that we needed to establish a system for ongoing monitoring and assessment, which added to the complexity of compliance efforts.

Vendor Compliance

As part of our operations, we engage with third-party vendors and service providers. Ensuring that these vendors also adhere to the rigorous standards of SOC 2 was a challenge. We had to assess their security practices, contractual agreements, and data handling processes to ensure alignment with our commitment to data security.

The Importance of SOC 2 Certification for liblab

Now that we have discussed some of the difficulties we faced in achieving SOC 2 compliance, let's delve into why this certification is a pivotal milestone for liblab and how it profoundly impacts both our operations and our customers.

Elevating Customer Trust

At liblab, our customers rely on our SDK generation service to build secure and reliable software solutions. Achieving SOC 2 compliance serves as a badge of trust for our customers, assuring them that we have robust controls and processes in place to protect their sensitive data. In an era where data breaches and cyber threats are all too common, this trust factor is invaluable.

Regulatory Compliance

Our SDK generation service often involves handling customer data, which may be subject to various data protection laws and regulations, such as GDPR (General Data Protection Regulation) in Europe or CCPA (California Consumer Privacy Act) in the United States. SOC 2 compliance aligns with many of these regulations, ensuring that we are in compliance with the law. This not only mitigates legal risks but also avoids potential fines and reputational damage stemming from non-compliance.

Competitive Advantage

In a competitive marketplace, where organizations are increasingly concerned about data security, achieving SOC 2 compliance provides us with a distinct competitive advantage. It positions liblab as a trusted and secure partner, setting us apart from competitors who may not have undergone such rigorous audits. This certification becomes a compelling factor when potential customers are evaluating their options.

Strengthening Internal Processes

The process of achieving SOC 2 compliance necessitates the establishment of robust internal processes and controls. We had to identify vulnerabilities, implement security measures, and develop an incident response plan. Going through this process not only prepared us for the certification audit but also enhanced our overall security posture. Continuous monitoring and improvement of these processes further strengthen the protection of customer data and reduce the risk of data breaches.

Why SOC 2 Compliance Matters to Our Customers

For our customers, who rely on our SDK generation products to build secure software applications, data security is of paramount importance. It reassures them that their data is handled with the highest level of security.

Enhanced Data Security

The most direct benefit of SOC 2 certification for our customers is enhanced data security. By achieving this certification, we are demonstrating our dedication to safeguarding their data from potential threats and breaches. Customers can trust that their data is protected when they use our developer products.

Data Privacy Assurance

In addition to security, SOC 2 compliance addresses data privacy concerns. It requires us to have clear privacy policies and practices to protect customer data and ensure compliance with data protection regulations. Customers can be confident that their privacy rights are respected and upheld when they entrust us with their data.

Reduced Risk Exposure

Attaining SOC 2 compliance reduces the risk of data breaches and security incidents. Our customers benefit from our proactive approach to data security, knowing that we have robust controls and processes in place to prevent, detect, and respond to security threats. This reduces the likelihood of data breaches that could lead to data loss or exposure.

Business Continuity

Having a well-defined incident response plan as part of our SOC 2 compliance ensures that we are prepared to handle security incidents effectively. This not only protects our customers' data but also helps maintain business continuity. Customers can rely on our SDK generation products without disruption, even in the face of security challenges.

Vendor Trust

Our customers often rely on a network of vendors and partners to build their software solutions. SOC 2 compliance extends to vendor management, requiring us to ensure that our vendors meet the same stringent security standards we do. This provides an additional layer of assurance to our customers, knowing that the entire ecosystem they engage with maintains high data security standards.

Conclusion

Achieving SOC 2 compliance has been a challenging journey for liblab, but it is one that we embrace wholeheartedly. It serves as a testament to our commitment to data security and privacy. For our customers, it signifies a seal of trust, enhanced data security, privacy assurance, reduced risk exposure, and the assurance of business continuity. Maintaining our SOC 2 certification remains a cornerstone of our promise to secure the future for our customers and our developer tools startup. As we continue to innovate and provide cutting-edge SDK generation solutions, information security compliance remains at the core of our promise to safeguard data for liblab and our valued customers.

← Back to the liblab blog

TypeScript, a statically typed superset of JavaScript, has become a go-to language for many developers, particularly when building SDKs that interact with web APIs. TypeScript's powerful type system aids in writing cleaner, more reliable code, ultimately making your SDK more maintainable.

In this blog post, we'll provide a focused exploration of how TypeScript's type system can be harnessed to better manage API routes within your SDK. This post is going to stay focused and concise - we'll be looking solely at routing tips and intentionally eschewing other aspects of SDK authoring, such as architecture, data structures, and handling relations. Our SDK will be simple: it is just going to list a user or users. These tips will help your route definitions be less error prone and easier to read for other engineers.

At the end, we’ll cover the limitations of the tips in this post, what’s missing, and one way in which you can avoid dealing with having to author these types altogether.

Let’s get started.

Alias your types

Type aliasing is important! It can sometimes be overlooked in TypeScript, but aliases are an extremely powerful documentation and code maintenance tool. Type aliases provide additional context as to why something is a string or a number. As an added bonus, if you alias your types and make a different decision (such as shifting from a numeric ID to a GUID) down the road, you can change the underlying type in one place. The compiler will then call out most of the areas in which your code needs to change.

Here are a couple of examples that we’ll build upon later on:

type NoArgs = undefined;
type UserId = number;
type UserName = string;

Note that UserId is a number here. That may not always be the case. If it changes, finding UserId is an easier task than trying to track down which references to number are relevant for your logic change.

Aliasing undefined with NoArgs might seem silly at first, but keep in mind that it’s conveying some extra meaning. It indicates that we specifically do not want arguments when we use it. It’s a way of documenting your code without a comment. Ditto for UserName. It’s unlikely to change types in the future, but using a type alias means that we know what it means, and that’s helpful.

Note: there’s a subtlety here that’s worth calling out. NoArgs is a type here, while undefined is a value. NoArgs is not the value undefined, but is a type whose only acceptable value is undefined. It’s a subtle difference, but it means you can’t do something like const args = NoArgs. Instead, you would have to do something along these lines: const args: NoArgs = undefined.

Statically define your data structures wherever possible

This is similar to the above, and is generally accepted practice. It essentially boils down to avoiding the any keyword and avoiding turning everything into a plain object ({[key: string]: any}). In this simple SDK, this means only the following:

type User = {
  id: UserId;
  name: UserName;
  // other fields could go here
}

When we need a User or an array of Users, our SDK engineers will now have all the context they need at design-time. Types such as UserName can be more complex as well (you can use Template Types, for example), allowing you to further constrain your types and make it more difficult to introduce bugs. The intricacies of typing data structures are a much larger subject, so we'll stick to simple types here.

Make your routes and arguments more resistant to typos

You’ve likely done it before: you meant to call the users endpoint and accidentally typed uesrs. You don’t find out until runtime that the route is wrong, and now you’re tracking it down. Or maybe you can’t remember if you’re supposed to be getting name or userName from the response body and you’re either consulting the spec, curling, or opening Postman to get some real data. Keeping your routes defined in one place means you only need to consult the API spec once (or perhaps not at all if you follow the tip at the end of the post) in order to know what your types are. Your SDK maintainers should only need to go to one place to understand the routes and their arguments:

type Routes = {
  'users': NoArgs;
  'users/:userId:': UserId;
};

Note that the pattern :argument: was used here, but you can use whatever is best for the libraries/helper methods that you already have. In addition, this API currently only has GET endpoints with no query parameters, so we’re keeping the types on the simple side. Feel free to declare some intermediate types that clearly separate out route, query, and body parameters. Then your function(s) that actually call API endpoints will know what to do with said parameters when it comes time to actually call an endpoint. This is a good segue into the next point:

Use generics to make code reuse easy

It’s hard to overstate how powerful generics can be when it comes to maintaining type safety while still allowing code reuse. It’s easy to slap an any on a return value and just cast your data in your calling function, but that’s quite risky, as it prevents TypeScript from verifying that the function call is safe. It also makes code harder to understand, as there’s missing context. Let’s take a look at a couple of types that can help out for our example.

type RouteArgs<T extends keyof Routes> = {
  route: T;
  params: Routes[T];
};

const callEndpoint = <Route extends keyof Routes, ExpectedReturn>(args: RouteArgs<Route>): ExpectedReturn => {
  // your client code goes here (axios, fetch, etc.) Include any error handling.

  // Don't do this, use a type guard to verify that the data is correct!
  return [{id: 1, name: "user1"}, {id: 2, name: "user2"}] as unknown as ExpectedReturn
}

Note the T extends keyof Routes in our generic parameter for the type RouteArgs. This builds upon the Routes type that we used earlier, making it impossible to use any string that is not already defined as a route when you’re writing a function that includes a parameter of this type. This also enables you to use Routes[T], meaning that you don’t have to know the specific type at design-time. You get type safety for all of your calling functions.

Note that we also do not assign a type alias to the type of callEndpoint. This type is intended to only be used once in this code base. If you are defining multiple different callEndpoint functions (for example, if you want to separate out logic for each HTTP verb), aliasing your types to make sure that no new errors are being introduced would be highly recommended.

Note that type guards are mentioned in the comment. This code lives at the edge of type safety. You will never be 100% sure that the data that comes back from your API endpoint is the structure you expect. That’s where type guards come in. Make sure that you’re running type guards against these return types. Type guards are outside of the scope of this post, but guarding for concrete types in a generic function can be complex and/or tedious. Depending on your needs, you may choose to use an unsafe type cast similar to the example and put the responsibility of calling the type guard on the calling function. We won’t cover strategies for ensuring these types are correct in this post, but this is an area you should study carefully.

Tying it all together

What do we get for our work? Let’s take a look at the code that an SDK maintainer might write to use the types that we’ve defined:

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'users', params: undefined})

  return users
}

Hopefully it’s clear that we’ve gotten some value out of this. This call is entirely type safe (shown below), and is quite concise and easy to read.

Note that we also don’t have to specify the generic types here. TypeScript is inferring the types for us. If we make a mistake, the code won’t compile! Here are a couple of examples of bad calls and their corresponding errors:

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'user', params: undefined})
  // Type '"user"' is not assignable to type 'keyof Routes'. Did you mean '"users"'?
  return users
}

Look at that helpful error message! Not only does it tell us we’re wrong, it suggests what might be right.

What if we try to pass an argument to this route? If you remember, we defined it to explicitly accept no arguments.

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'users', params: 'someUserName'})
  // Type 'string' is not assignable to type 'undefined'.(2322)
  // {file and line number go here}: The expected type comes from property 'params' which is declared here on type 'RouteArgs<"users">'
  return users
}

This is also helpful, though there are some limitations. Unfortunately, TypeScript will not pass through the alias that we defined (NoArgs). However, it does tell us exactly where the source of the error is, allowing an engineer to trace exactly why a string won't work. The engineer will then see the NoArgs type and have a clear understanding of what went wrong.

What’s missing/limitations?

The examples here could still be improved upon. Note that ExpectedReturn is part of callEndpoint. This means that an SDK maintainer would need to have some knowledge of which type to pick (if not the specific structure). Why not include this information in our Routes type? That may make a good exercise for the reader.
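One possible direction for that exercise (a sketch only, not a complete solution): let each entry in Routes carry both its parameters and its expected return type, so that callEndpoint can infer the return type instead of taking it as a separate generic parameter. The RouteDefinition name here is made up for illustration.

// A sketch of a Routes type that also carries the expected return type.
type RouteDefinition<TParams, TReturn> = {
  params: TParams;
  returns: TReturn;
};

type RoutesWithReturns = {
  'users': RouteDefinition<NoArgs, User[]>;
  'users/:userId:': RouteDefinition<UserId, User>;
};

// callEndpoint could then be typed so the return type follows from the route:
// const callEndpoint = <Route extends keyof RoutesWithReturns>(
//   args: { route: Route; params: RoutesWithReturns[Route]['params'] }
// ): RoutesWithReturns[Route]['returns'] => { /* ... */ };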

As previously mentioned, type aliases do not get passed through to compiler errors. There are some workarounds, however.

Depending on how you’re handling various verbs, your type guards/generic functions can get quite complex. This won’t have an impact on those maintaining your SDK, but there can be an up-front cost to defining these types. It’s up to you to decide whether to pay that cost.

What was that about avoiding all this?

Hopefully with the tips in this article, you feel more confident about making maintainable SDKs. However, wouldn’t it be nice if you just didn’t have to develop an SDK at all? After all, you have an API spec; and that should be enough to generate the code, right? Fortunately, the answer is yes, and liblab offers a solution to do just that. If you don’t want to think about challenges like error handling and maintainability for your SDK, liblab’s SDK generation tools may be able to help you.

← Back to the liblab blog

Introduction

When working on applications and systems, we usually rely on APIs to enable integrations between services that make up our system.

The purpose of this article is to provide an understanding of some important metrics used to measure an API's performance. For each of those metrics, I will also touch on some factors affecting them and ways to improve them, which will in turn enhance your API's performance.

Overview of the key API metrics

To cover API performance metrics in a comprehensive manner, this article walks through six key metrics: Response Time, Throughput, Latency, Request Rate, CPU Utilization, and Memory Utilization.

1. Response Time

Response time is the time it takes for an API to respond to a request from a client application. It gives us a measure of our application's responsiveness, which in turn has an impact on the user experience.

Factors Affecting Response Time

  • Network latency is the delay in the network connection between client applications and your API. Congestion and increased physical distance between servers are examples of factors that impact network latency.
  • If you make use of external or third-party services, the overall response time of your API will also depend on the response times of those services.
  • Slow or poorly written database queries can also affect your API's response time.

Monitoring Your API's Response Time

Monitoring and analyzing response time can help identify bottlenecks, optimize API performance, and ensure service level agreements (SLAs) are met.

There are lots of tools out there that can be used to monitor your API's response time. Here are some popular ones:

  • Apache JMeter
  • Pingdom
  • Datadog
  • New Relic

Improving Your API's Response Time

There are several approaches you can take to improve the response time of your API:

  • Making use of a load balancer
  • Optimizing your code to reduce unnecessary computations, database queries, or network requests
  • Implementing caching mechanisms (see the sketch after this list)
  • Making use of content delivery networks (CDNs)
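To make the caching point concrete, here is a minimal sketch of an in-memory cache with a time-to-live. The fetchUserFromDb function is hypothetical, and a production system would more likely reach for a dedicated cache such as Redis.

// A minimal in-memory cache with a time-to-live (TTL).
type CacheEntry<T> = { value: T; expiresAt: number };

const cache = new Map<string, CacheEntry<unknown>>();
const TTL_MS = 60_000; // keep entries for one minute

const cached = async <T>(key: string, loader: () => Promise<T>): Promise<T> => {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T; // served from memory, no database round trip
  }
  const value = await loader();
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
};

// Usage: the first call hits the database, later calls within the TTL do not.
// const user = await cached(`user:${userId}`, () => fetchUserFromDb(userId));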

2. Throughput

Throughput is simply the number of requests an API can handle within a given time period. It is an important metric for measuring an API's performance, and is usually measured in requests per second (RPS).

An API with higher throughput simply means the system can handle a larger volume of requests, which ensures optimal performance even during peak API usage periods.

Monitoring Throughput

Monitoring throughput in the context of API performance involves analyzing and optimizing various factors, such as:

  • Server capacity
  • Network bandwidth
  • Request processing time

Improving Your API's Throughput

By employing techniques such as horizontal scaling, load balancing, and asynchronous processing, you can ensure a higher throughput which will significantly improve your API's performance.

3. Latency

Latency is another key performance metric for analyzing the performance of an API. It's a measure of the time taken for a client to send a request and get back a response from an API server.

Factors affecting API Latency

Some known factors that affect latency include:

  • API responses with large data sets
  • Network congestion
  • Inefficient or poorly written code

How To Minimize Latency

It is very important to reduce latency, as higher latency leads to sluggish user experiences, increased waiting times, and reduced overall performance. Some ways to reduce latency include:

  • Employing caching mechanisms
  • Applying data compression techniques (see the sketch after this list)
  • Returning data in chunks
  • Optimizing network protocols
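As an illustration of the data compression point, here is a minimal sketch that gzips a JSON payload using Node's built-in http and zlib modules. The payload and port are made up for the example, and a real service would typically use streaming compression middleware instead.

import { createServer } from 'http';
import { gzipSync } from 'zlib';

// A hypothetical large payload; in practice this comes from your data layer.
const payload = JSON.stringify({
  users: Array.from({ length: 10_000 }, (_, i) => ({ id: i, name: `user${i}` })),
});

createServer((req, res) => {
  // Only compress if the client advertises gzip support.
  if (req.headers['accept-encoding']?.includes('gzip')) {
    const compressed = gzipSync(payload);
    res.writeHead(200, { 'Content-Type': 'application/json', 'Content-Encoding': 'gzip' });
    res.end(compressed); // far fewer bytes on the wire, lower transfer latency
  } else {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(payload);
  }
}).listen(3000);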

4. Request Rate

Request Rate is an API performance metric that measures the rate or frequency at which requests are being made to an API within a specific time frame.

It provides insights into the load or demand placed on the API and helps gauge its capacity to handle incoming requests.

By monitoring request rate, API providers can identify usage patterns, peak periods, and potential scalability challenges, which helps them anticipate traffic spikes and plan resource allocation accordingly.

Monitoring API Request Rate

Request rate is typically measured over specific time intervals, such as:

  • Requests per second (RPS)
  • Requests per minute (RPM)
  • Requests per hour (RPH)

The measurement interval determines the granularity of the metric and allows you to analyze request patterns over different time periods.

There are several tools available to measure and analyze request rates for your API. Here are some popular options:

  • AWS CloudWatch
  • Google Cloud Monitoring
  • Grafana
  • Datadog
  • Prometheus

Optimizing For Higher Request Rates

To handle increasing request rates during peak periods, or as a result of heavy usage of particular business features, you can consider implementing the following techniques:

  • Horizontal scaling: Distribute the load across multiple servers or instances. By adding more servers, or by utilizing cloud-based solutions that provide on-demand scaling of resources, you can handle a higher volume of requests by leveraging the collective resources of multiple machines.
  • Asynchronous processing: Identify time-consuming or resource-intensive operations that can be performed asynchronously. Offloading such operations to background tasks or queues prevents blocking and frees up resources to handle a higher request rate (see the sketch after this list).
  • Caching: Caching can significantly improve response times and reduce the load on your API, especially for static or infrequently changing data. Utilizing caching techniques like in-memory caches or CDNs can help your API handle higher request rates efficiently.
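Here is a minimal sketch of the asynchronous processing idea: the request handler enqueues work and acknowledges immediately instead of doing the heavy lifting inline. The processReport function and in-memory queue are hypothetical stand-ins for a real job queue or message broker.

// Request handlers enqueue work and return immediately.
type Job = { reportId: string };

const queue: Job[] = [];

// Hypothetical resource-intensive task, processed outside the request path.
const processReport = async (job: Job): Promise<void> => {
  // ...expensive work happens here...
};

// A background worker drains the queue on an interval.
setInterval(async () => {
  const job = queue.shift();
  if (job) await processReport(job);
}, 1_000);

// The handler acknowledges with 202 Accepted, freeing the API for more requests.
const handleGenerateReport = (reportId: string) => {
  queue.push({ reportId });
  return { status: 202, body: { message: 'Report generation queued', reportId } };
};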

5. CPU Utilization

CPU utilization is another important metric that measures the percentage of CPU resources used during the processing of an API request. It provides insights into the efficiency of resource allocation and can be a key indicator of API performance.

Factors that can impact CPU usage during API processing include inefficient code implementation, computationally intensive operations, and the presence of resource-intensive tasks or algorithms.

Monitoring CPU Utilization

To effectively monitor CPU utilization, developers can employ various tools to gain insights into CPU usage. Some examples are New Relic, Datadog, or Prometheus.

Ways To Improve CPU Utilization

Below are some ways to reduce CPU usage within your API:

  • Efficient algorithm design: Analyze your API code for computational bottlenecks and optimize them by using efficient algorithms and data structures. This helps reduce CPU usage for operations that would otherwise be CPU intensive.
  • Throttling and rate limiting: Implement throttling mechanisms or rate limiters to control request rates and cap the number of API calls that can be made within a specific time window, which in turn prevents overloading the CPU (see the sketch after this list).
  • Load balancing: By making use of a load balancer, you can distribute incoming requests across multiple servers, effectively distributing the CPU load.
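For illustration, here is a minimal sketch of a fixed-window rate limiter keyed by client identifier. The limit and window size are arbitrary example values, and production systems would usually rely on an API gateway or a shared store rather than in-process state.

// A minimal fixed-window rate limiter, keyed by client identifier.
const WINDOW_MS = 60_000;  // one-minute window
const MAX_REQUESTS = 100;  // allow 100 requests per client per window

const counters = new Map<string, { count: number; windowStart: number }>();

const isAllowed = (clientId: string): boolean => {
  const now = Date.now();
  const entry = counters.get(clientId);

  // Start a new window if none exists or the current one has expired.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(clientId, { count: 1, windowStart: now });
    return true;
  }

  if (entry.count < MAX_REQUESTS) {
    entry.count += 1;
    return true;
  }
  return false; // over the limit: reject, e.g. with HTTP 429
};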

6. Memory Utilization

Memory utilization refers to the amount of system memory (RAM) used by the API during its operation. Efficient memory management is crucial for optimal performance. Excessive memory usage can lead to increased response times, resource contention, and even system instability.

Ways To Improve Memory Utilization

Here are some key points to consider to improve memory usage within your API:

  • Caching: Employ in-memory caching mechanisms to store frequently accessed data or computations. This reduces the need for repeated processing and improves response times by serving precomputed results from memory.
  • Data pagination: When dealing with large datasets, consider implementing pagination rather than loading the entire dataset into memory; fetch and process data in smaller chunks, or stream it to the client as it becomes available. This approach reduces memory pressure and enables efficient processing of large datasets (see the sketch after this list).
  • Memory profiling tools: Utilize memory profiling tools to identify memory bottlenecks and areas of high memory consumption within your API. These tools can help you pinpoint specific code segments or data structures that contribute to excessive memory usage.
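As a concrete illustration of the pagination point, here is a minimal sketch of offset-based pagination. The fetchUsersPage function is a hypothetical data-access call that only ever loads one page into memory at a time.

// Offset-based pagination: the API never materializes the full dataset.
type User = { id: number; name: string };
type Page<T> = { items: T[]; offset: number; limit: number; total: number };

const PAGE_SIZE = 50;

// Hypothetical data-access function that fetches a single page from storage.
declare function fetchUsersPage(offset: number, limit: number): Promise<{ rows: User[]; total: number }>;

const getUsersPage = async (offset = 0, limit = PAGE_SIZE): Promise<Page<User>> => {
  const { rows, total } = await fetchUsersPage(offset, limit);
  return { items: rows, offset, limit, total };
};

// Usage: a client requests /users?offset=100&limit=50 and only that slice is
// loaded and returned, keeping memory usage roughly constant per request.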

Conclusion

In this article, we discussed the importance of measuring API performance and some key metrics that tell us how well our API is performing.

Improving API performance, as well as building SDKs for an API, are some of the many problems that most API developers face. Here at liblab, we offer a seamless approach to building robust SDKs from scratch by carefully examining your API specifications.

By leveraging services like liblab, API providers can generate SDKs for their APIs, further enhancing their developer experience and accelerating the integration process with their APIs.

← Back to the liblab blog

Introduction to REST API

We all understand the significance of APIs in software development, as they facilitate data sharing and communication across various software systems. Ensuring their proper functioning is paramount. Implementing proven conventions in your API can greatly enhance its scalability and maintainability. This post delves into versioning techniques and how leveraging existing tools can simplify the process.

Versioning is a key concept that enables your applications to maintain backward compatibility as your API evolves. Without proper versioning, any modifications made to your API could cause unexpected errors and disruptions in current client applications. REST API versioning allows you to introduce updates while ensuring earlier versions continue to function correctly.

Common Versioning Techniques

To implement versioning in your API, here are three popular methods:

  1. URL-Based Versioning: In this method, the version number is incorporated into the URL path. For instance, Version 1 of the API is represented by https://api.example.com/v1/resource.
  2. Query Parameter Versioning: This technique involves appending the version number as a query parameter in the API endpoint. For example, https://api.example.com/resource?version=1.
  3. Header-Based Versioning: With this approach, the version number is specified in a unique header field, such as Accept-Version or X-API-Version.
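For illustration, here is how the same request might look to a client under each approach. The host, resource, and header name are the examples from above; header-based APIs vary in which header they expect.

// The same logical request under each versioning style.

// 1. URL-based: the version lives in the path.
await fetch('https://api.example.com/v1/resource');

// 2. Query parameter: the version is appended to the query string.
await fetch('https://api.example.com/resource?version=1');

// 3. Header-based: the version travels in a custom header.
await fetch('https://api.example.com/resource', {
  headers: { 'X-API-Version': '1' },
});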

There is no unanimous consensus on the best approach, as each has its advantages. When choosing, consider the following:

URL-based

Pros:
  • Easy to shut down obsolete versions
  • Facilitates separation of authentication concerns for different versions
  • Compatible with most frameworks
  • Version is always clear and obvious

Cons:
  • Requires adoption from the start; otherwise, it necessitates code refactoring
  • Difficulty in adding patch versions

Query parameter

Pros:
  • Easy to implement in existing APIs
  • Allows for the addition of patch versions
  • Provides control over the default version provided to clients

Cons:
  • Version might be optional
  • Challenging to separate authentication concerns
  • Harder to retire or deactivate obsolete versions
  • Potential confusion distinguishing between data version and API version

Header-based

Pros:
  • Easy to implement in existing APIs
  • Allows for the addition of patch versions
  • Provides control over the default version provided to clients

Cons:
  • Version might be optional
  • Challenging to separate authentication concerns
  • Harder to retire or deactivate obsolete versions

Now that you've selected a versioning technique, do you need to update all client applications every time a new version is deployed?

Ideally, keeping client applications up to date ensures optimal API utilization and minimizes issues. However, this doesn't have to be a complicated process if you employ the right tools: SDKs.

How SDKs Assist Client Applications in Adapting to Available Versions

SDKs (Software Development Kits) are libraries that handle API integration, including versioning, on behalf of developers. They offer the following benefits:

  1. Version Management and Compatibility: SDKs allow you to select the API version you want to use, simplifying the process of switching between versions.
  2. Handling Different API Versions: SDKs provide a unified interface for client developers, abstracting the differences between API versions. Regardless of the underlying version, developers can interact with the SDK using standardized techniques and models.
  3. Error Handling: Some versions might also handle errors differently, and SDKs will cover the required changes out of the box
  4. Compile-time errors: SDKs will also present you with compile-time errors when a major change has occurred between the versions, allowing you to save time on testing each change manually.
  5. Automatic updates: And last, but not least, if you are using an SDK provider, you don’t even have to worry about updating the SDK yourself, as all updates will be covered automatically.
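As a sketch of what this can look like from the client application's side, here is a hypothetical SDK client that takes the API version as a constructor option; the class, option, and endpoint names are made up, not a specific liblab-generated SDK.

// Hypothetical SDK client: the consumer picks a version once, and the SDK
// maps that choice onto whichever versioning scheme the API actually uses.
type ApiVersion = 'v1' | 'v2';

class ExampleClient {
  constructor(private readonly options: { apiKey: string; apiVersion?: ApiVersion }) {}

  async getResource(id: string): Promise<unknown> {
    const version = this.options.apiVersion ?? 'v2'; // default handled by the SDK
    const response = await fetch(`https://api.example.com/${version}/resource/${id}`, {
      headers: { Authorization: `Bearer ${this.options.apiKey}` },
    });
    return response.json();
  }
}

// Switching versions is a one-line change for the client application:
const client = new ExampleClient({ apiKey: 'my-key', apiVersion: 'v1' });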

To learn more about SDKs, check out this article on how SDKs benefit API management.

"You might wonder if building and maintaining an SDK is more challenging than adapting to newer API versions. After all, you would need to update the SDK to accommodate changes as well."

This is where liblab comes in. We offer an impressive suite of tools to effortlessly build robust and comprehensive SDKs from scratch. By analyzing your API spec, liblab can generate SDKs tailored to your API's needs. These SDKs are flexible and include all the necessary components out of the box.

If you love liblab, but your company hesitates to invest in new tools, check out this article on how to convince management to invest in the tools you need.

Conclusion

Properly versioning your REST API is crucial for its evolution and long-term stability. By utilizing versioning techniques such as URL-based, query parameter-based, or header-based approaches, you can manage changes while ensuring backward compatibility. Additionally, SDKs can assist client applications by abstracting API complexities, managing different versions, and providing consistent interfaces. By following best practices in REST API versioning, you can facilitate smoother transitions, enhance developer experience, and maintain strong relationships with your API consumers.