← Back to the liblab blog

A mix of anticipation and dread washes over me as I open a new inbound email with an attached specification file. With a heavy sigh, I begin scrolling through its contents, only to be greeted by disappointment yet again.

The API request bodies in this specification file suffer from a lack of essential details, specifically the absence of the actual properties of the HTTP call. This makes it difficult to determine the expectations and behavior of the API. Not only will API consumers have a hard time understanding the API, but the lack of properties also hinders the use of external libraries for validation, analysis, or auto-generation of output (e.g., API mocking, testing, or liblab's auto SDK generation).

After encountering hundreds of specification files (referred to as specs) in my role at liblab, I’ve come to the conclusion that most spec files are incomplete to varying degrees. Some completely disregard the community standard and omit crucial information, while others could use some tweaking and refinement. This has inspired me to write this blog post with the goal of enhancing the quality of your spec files. It just so happens that this goal also aligns with making my job easier.

In the upcoming sections, we'll go over three common issues that make your OpenAPI spec fall short and examine possible solutions for them. By the end of this post you’ll be able to elevate your OpenAPI spec, making it more user-friendly for API consumers, including developers, QA engineers, and other stakeholders.

Three Reasons Why Your OpenAPI Spec Sucks

You’re Still Using Swagger

Look, I get it. A lot of us still get confused about the difference between Swagger and OpenAPI. To keep things simple, you can think of Swagger as the former name of OpenAPI. Many tools still use the word "Swagger" in their names, primarily because of the strong association and recognition the term gained within the developer community.

If your “Swagger” spec is actually an OpenAPI spec (indicated by the presence of "openapi: 3.x.x" at the beginning), all you need to do is update your terminology.

If you’re actually using a Swagger spec (a file that begins with "swagger: 2.0"), it's time to consider an upgrade. Swagger has certain limitations compared to OpenAPI 3, and as newer versions of OpenAPI are released, transitioning will become increasingly challenging.

Notable differences:

  • OpenAPI 3 has support for oneOf and anyOf that Swagger does not provide. Let us look at this example:
openapi: 3.0.0
info:
  title: Payment API
  version: 1.0.0
paths:
  /payments:
    post:
      summary: Create a payment
      requestBody:
        required: true
        content:
          application/json:
            schema:
              oneOf:
                - $ref: "#/components/schemas/CreditCardPayment"
                - $ref: "#/components/schemas/OnlinePayment"
                - $ref: "#/components/schemas/CryptoPayment"
      responses:
        "201":
          description: Created
        "400":
          description: Bad Request

In OpenAPI 3, you can explicitly define that the requestBody for a /payments POST call can be one of three options: CreditCardPayment, OnlinePayment, or CryptoPayment. However, in Swagger you would need to create a workaround by adding an object with optional fields for each payment type:

swagger: "2.0"
info:
title: Payment API
version: 1.0.0
paths:
/payments:
post:
summary: Create a payment
consumes:
- application/json
produces:
- application/json
parameters:
- name: body
in: body
required: true
schema:
$ref: "#/definitions/Payment"
responses:
"201":
description: Created
"400":
description: Bad Request

definitions:
Payment:
type: object
properties:
creditCardPayment:
$ref: "#/definitions/CreditCardPayment"
onlinePayment:
$ref: "#/definitions/OnlinePayment"
cryptoPayment:
$ref: "#/definitions/CryptoPayment"
# Make the properties optional
required: []

CreditCardPayment:
type: object
# Properties specific to CreditCardPayment

OnlinePayment:
type: object
# Properties specific to OnlinePayment

CryptoPayment:
type: object
# Properties specific to CryptoPayment

This example does not fully match the OpenAPI 3 implementation: the API consumer has to specify the payment type they are sending through a property field, and they might also send more than one of the fields since they are all marked optional. This approach lacks the explicit validation and semantics provided by the oneOf keyword in OpenAPI 3.

  • In OpenAPI you can describe multiple server URLs while in Swagger you’re bound to only one:
{
  "swagger": "2.0",
  "info": {
    "title": "Sample API",
    "version": "1.0.0"
  },
  "host": "api.example.com",
  "basePath": "/v1",
  ...
}

openapi: 3.0.0
info:
  title: Sample API
  version: 1.0.0
servers:
  - url: http://api.example.com/v1
    description: Production Server
  - url: https://sandbox.api.example.com/v1
    description: Sandbox Server
...

You’re Not Using Components

One way of making an OpenAPI spec more readable is by removing any unnecessary duplication — the same way as a programmer would with their code. If you find that your OpenAPI spec is too messy and hard to read you might be under-utilizing the components section. Components provide a powerful mechanism for defining reusable schemas, parameters, responses, and other elements within your specification.

Let's take a look at the following example that does not utilize components:

openapi: 3.0.0
info:
  title: Nested Query Example
  version: 1.0.0
paths:
  /users:
    get:
      summary: Get users with nested query parameters
      parameters:
        - name: filter
          in: query
          schema:
            type: object
            properties:
              name:
                type: string
              age:
                type: number
              address:
                type: object
                properties:
                  city:
                    type: string
                  state:
                    type: string
                  country:
                    type: string
                  zipcode:
                    type: string
      ...
  /user/{id}/friend:
    get:
      summary: Get a user's friend
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
        - name: filter
          in: query
          schema:
            type: object
            properties:
              name:
                type: string
              age:
                type: number
              address:
                type: object
                properties:
                  city:
                    type: string
                  state:
                    type: string
                  country:
                    type: string
                  zipcode:
                    type: string
      ...

The filter parameter in this example is heavily nested and can be challenging to follow. It is also repeated in full by two different endpoints. We can consolidate this by leveraging component schemas:

openapi: 3.0.0
info:
  title: Nested Query Example with Schema References
  version: 1.0.0
paths:
  /users:
    get:
      summary: Get users with nested query parameters
      parameters:
        - name: filter
          in: query
          schema:
            $ref: "#/components/schemas/UserFilter"
      ...
  /user/{id}/friend:
    get:
      summary: Get a user's friend
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
        - name: filter
          in: query
          schema:
            $ref: "#/components/schemas/UserFilter"
      ...
components:
  schemas:
    UserFilter:
      type: object
      properties:
        name:
          type: string
        age:
          type: number
        address:
          $ref: "#/components/schemas/AddressFilter"

    AddressFilter:
      type: object
      properties:
        city:
          type: string
        state:
          type: string
        country:
          type: string
        zipcode:
          type: string

The second example is clean and readable. By creating UserFilter and AddressFilter we can reuse those schemas throughout the spec file, and if they ever change we will only have to update them in one place.

You’re Not Using Descriptions, Examples, Formats, or Patterns

You finally finished porting all your endpoints and models into your OpenAPI spec. It took you a while, but now you can finally share it with development teams, QA teams, and even customers. Shortly after you share your spec with the world, the questions start arriving: “What does this endpoint do? What’s the purpose of this parameter? When should the parameter be used?”

Let's take a look at this example:

openapi: 3.0.0
info:
  title: Sample API
  version: 1.0.0
paths:
  /data:
    post:
      summary: Upload user data
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                name:
                  type: string
                age:
                  type: integer
                email:
                  type: string
      responses:
        "200":
          description: Successful response

We can deduce from it that data needs to be uploaded, but questions remain: What specific data should be uploaded? Is it the data pertaining to the current user? Whose name, age, and email do these attributes correspond to?

openapi: 3.0.0
info:
  title: Sample API
  version: 1.0.0
paths:
  /data:
    post:
      summary: Upload user data
      description: >
        Endpoint for uploading new user data to the system.
        This data will be used for personalized recommendations and analysis.
        Ensure the data is in a valid JSON format.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                name:
                  type: string
                  description: The name of a new user.
                age:
                  type: integer
                  description: The age of a new user.
                email:
                  type: string
                  description: The email address of a new user.
      responses:
        "200":
          description: Successful response

You can’t always control how your API is structured, but you can control the descriptions you give it. Reduce the number of questions you receive by adding useful descriptions wherever possible.

Even after incorporating descriptions, you still might be asked about various aspects of your OpenAPI spec. At this point, you might be thinking, "Sharon, you deceived me! I added all those descriptions yet the questions keep on coming.”

Before you give up, have you thought about adding examples?

Let's take a look at this parameter:

parameters:
  - name: id
    in: path
    required: true
    schema:
      type: string
    description: The user id.

Based on the example, we understand that "id" is a string and serves as the user's identifier. However, despite your QA team relying on your OpenAPI spec for their tests, they are encountering issues. They inform you that they are passing a string, yet the API call fails. “That’s because you’re not passing valid ids”, you tell them. You rush to add an example to your OpenAPI spec:

parameters:
  - name: id
    in: path
    required: true
    schema:
      type: string
      example: e4bb1afb-4a4f-4dd6-8be0-e615d233185b
    description: The user id.

After you update your spec, a follow-up question arrives: would "d0656a1f-1lac-4n7b-89de-3e8ic292b2e1" be a good example as well? The answer is no, since characters such as 'l' and 'n' in that example are not valid hexadecimal characters, making them illegal in the UUID format:

parameters:
  - name: id
    in: path
    required: true
    schema:
      type: string
      format: uuid
      example: e4bb1afb-4a4f-4dd6-8be0-e615d233185b
    description: The user id.

Finally your QA team has all the information they need to interact with the endpoints that use this parameter.

But what if a parameter is not of a common format? That’s when regex patterns come in:

parameters:
  - name: id
    in: path
    required: true
    schema:
      type: string
      pattern: "[a-f0-9]{32}"
      example: 2675b703b9d4451f8d4861a3eee54449
    description: A 32-character unique user ID.

By using the pattern field, you can define custom validation rules for string properties, enabling more precise constraints on the data accepted by your API.

You can read more about formats, examples, and patterns here.

Conclusion

This list of shortcomings is certainly not exhaustive, but the most common and easily fixable ones presented in this post include upgrading from Swagger, utilizing components effectively, and providing comprehensive documentation. By making these improvements, you are laying the foundation for successful API documentation. When working on your spec, put yourself in the shoes of a new API consumer, since this is their initial interaction with the API. Ensure that it is well-documented and easy to comprehend, and set the stage for a positive developer experience.

You can read more OpenAPI tips in some of our other blog posts:

← Back to the liblab blog

Our mission is to empower developers with cutting-edge tools and resources, and at the core of this mission is the assurance that their data is secure. The significance of data security cannot be overstated, and this is why few milestones are as transformative as achieving System and Organization Controls (SOC) compliance.

liblab has successfully completed a comprehensive SOC 2 Type II audit, conducted by Sensiba LLP, a leader in audit services. We are thrilled to share the significance of this accomplishment and why it is crucial not only for our organization but also for our customers. In the short read ahead we’ll discuss the importance of attaining SOC 2 certification, how it impacts our operations, and most importantly, how it benefits our valued customers.

SOC 2 compliance logo

The Road to SOC 2 Compliance

SOC 2 is a rigorous set of standards developed by the American Institute of Certified Public Accountants (AICPA) to assess the security, availability, processing integrity, confidentiality, and privacy of customer data within service organizations. It is a comprehensive framework that demands the highest level of commitment to data security and privacy. Achieving SOC 2 compliance was not a straightforward task for liblab. Here are some of the challenges we encountered along the way:

Complex Documentation and Policies

The foundation of SOC 2 compliance lies in meticulous documentation and well-defined policies and procedures. Developing comprehensive documentation, including data security policies, incident response plans, and access control procedures, can be a time-consuming and complex process. We had to ensure that our documentation was not only thorough but also aligned with the stringent requirements of SOC 2.

Resource Allocation

Achieving SOC 2 compliance requires a substantial allocation of resources, both in terms of time and personnel. We had to designate a dedicated team to work on compliance-related tasks, diverting their efforts from other critical projects. This reallocation of resources was necessary to ensure the successful completion of the SOC 2 audit process.

Continuous Monitoring

SOC 2 compliance is not a one-time achievement but an ongoing commitment. Continuous monitoring and assessment of controls and processes are required to maintain compliance. This means that we needed to establish a system for ongoing monitoring and assessment, which added to the complexity of compliance efforts.

Vendor Compliance

As part of our operations, we engage with third-party vendors and service providers. Ensuring that these vendors also adhere to the rigorous standards of SOC 2 was a challenge. We had to assess their security practices, contractual agreements, and data handling processes to ensure alignment with our commitment to data security.

The Importance of SOC 2 Certification for liblab

Now that we have discussed some of the difficulties we faced in achieving SOC 2 compliance, let's delve into why this certification is a pivotal milestone for liblab and how it profoundly impacts both our operations and our customers.

Elevating Customer Trust

At liblab, our customers rely on our SDK generation service to build secure and reliable software solutions. Achieving SOC 2 compliance serves as a badge of trust for our customers, assuring them that we have robust controls and processes in place to protect their sensitive data. In an era where data breaches and cyber threats are all too common, this trust factor is invaluable.

Regulatory Compliance

Our SDK generation service often involves handling customer data, which may be subject to various data protection laws and regulations, such as GDPR (General Data Protection Regulation) in Europe or CCPA (California Consumer Privacy Act) in the United States. SOC 2 compliance aligns with many of these regulations, ensuring that we are in compliance with the law. This not only mitigates legal risks but also avoids potential fines and reputational damage stemming from non-compliance.

Competitive Advantage

In a competitive marketplace, where organizations are increasingly concerned about data security, achieving SOC 2 compliance provides us with a distinct competitive advantage. It positions liblab as a trusted and secure partner, setting us apart from competitors who may not have undergone such rigorous audits. This certification becomes a compelling factor when potential customers are evaluating their options.

Strengthening Internal Processes

The process of achieving SOC 2 compliance necessitates the establishment of robust internal processes and controls. We had to identify vulnerabilities, implement security measures, and develop an incident response plan. Going through this process not only prepared us for the certification audit but also enhanced our overall security posture. Continuous monitoring and improvement of these processes further strengthen the protection of customer data and reduce the risk of data breaches.

Why SOC 2 Compliance Matters to Our Customers

For our customers, who rely on our SDK generation products to build secure software applications, data security is of paramount importance. SOC 2 compliance reassures them that their data is handled with the highest level of security.

Enhanced Data Security

The most direct benefit of SOC 2 certification for our customers is enhanced data security. By achieving this certification, we are demonstrating our dedication to safeguarding their data from potential threats and breaches. Customers can trust that their data is protected when they use our developer products.

Data Privacy Assurance

In addition to security, SOC 2 compliance addresses data privacy concerns. It requires us to have clear privacy policies and practices to protect customer data and ensure compliance with data protection regulations. Customers can be confident that their privacy rights are respected and upheld when they entrust us with their data.

Reduced Risk Exposure

Attaining SOC 2 compliance reduces the risk of data breaches and security incidents. Our customers benefit from our proactive approach to data security, knowing that we have robust controls and processes in place to prevent, detect, and respond to security threats. This reduces the likelihood of data breaches that could lead to data loss or exposure.

Business Continuity

Having a well-defined incident response plan as part of our SOC 2 compliance ensures that we are prepared to handle security incidents effectively. This not only protects our customers' data but also helps maintain business continuity. Customers can rely on our SDK generation products without disruption, even in the face of security challenges.

Vendor Trust

Our customers often rely on a network of vendors and partners to build their software solutions. SOC 2 compliance extends to vendor management, requiring us to ensure that our vendors meet the same stringent security standards we do. This provides an additional layer of assurance to our customers, knowing that the entire ecosystem they engage with maintains high data security standards.

Conclusion

Achieving SOC 2 compliance has been a challenging journey for liblab, but it is one that we embrace wholeheartedly. It serves as a testament to our commitment to data security and privacy. For our customers, it signifies a seal of trust, enhanced data security, privacy assurance, reduced risk exposure, and the assurance of business continuity. Maintaining our SOC 2 certification remains a cornerstone of our promise to secure the future for our customers and our developer tools startup. As we continue to innovate and provide cutting-edge SDK generation solutions, information security compliance remains at the core of our promise to safeguard data for liblab and our valued customers.

← Back to the liblab blog

TypeScript, a statically typed superset of JavaScript, has become a go-to language for many developers, particularly when building SDKs that interact with web APIs. TypeScript's powerful type system aids in writing cleaner, more reliable code, ultimately making your SDK more maintainable.

In this blog post, we'll provide a focused exploration of how TypeScript's type system can be harnessed to better manage API routes within your SDK. This post is going to stay focused and concise: we'll look solely at routing tips and intentionally set aside other aspects of SDK authoring, such as architecture, data structures, and handling relations. Our SDK will be simple: it is just going to list a user or users. These tips will help make your route definitions less error prone and easier for other engineers to read.

At the end, we’ll cover the limitations of the tips in this post, what’s missing, and one way in which you can avoid dealing with having to author these types altogether.

Let’s get started.

Alias your types

Type aliasing is important! It can sometimes be overlooked in TypeScript, but aliases are an extremely powerful documentation and code maintenance tool. Type aliases provide additional context as to why something is a string or a number. As an added bonus, if you alias your types and make a different decision (such as shifting from a numeric ID to a GUID) down the road, you can change the underlying type in one place. The compiler will then call out most of the areas in which your code needs to change.

Here are a couple of examples that we’ll build upon later on:

type NoArgs = undefined;
type UserId = number;
type UserName = string;

Note that UserId is a number here. That may not always be the case. If it changes, finding UserId is an easier task than trying to track down which references to number are relevant for your logic change.

Aliasing undefined with NoArgs might seem silly at first, but keep in mind that it’s conveying some extra meaning. It indicates that we specifically do not want arguments when we use it. It’s a way of documenting your code without a comment. Ditto for UserName. It’s unlikely to change types in the future, but using a type alias means that we know what it means, and that’s helpful.

Note: there’s a subtlety here that’s worth calling out. NoArgs is a type here, while undefined is a value. NoArgs is not the value undefined, but is a type whose only acceptable value is undefined. It’s a subtle difference, but it means you can’t do something like const args = NoArgs. Instead, you would have to do something along these lines: const args: NoArgs = undefined.

Statically define your data structures wherever possible

This is similar to the above and is generally accepted practice. It essentially boils down to avoiding the any keyword and not turning everything into a plain object ({[key: string]: any}). In this simple SDK, that means only the following:

type User = {
  id: UserId;
  name: UserName;
  // other fields could go here
}

When we need a User or an array of Users, our SDK engineers will now have all the context they need at design-time. Types such as UserName can be more complex as well (you can use Template Literal Types, for example), allowing you to further constrain your types and make it more difficult to introduce bugs. The intricacies of typing data structures are a much larger subject, so we’ll stick to simple types here.

Make your routes and arguments more resistant to typos

You’ve likely done it before: you meant to call the users endpoint and accidentally typed uesrs. You don’t find out until runtime that the route is wrong, and now you’re tracking it down. Or maybe you can’t remember if you’re supposed to be getting name or userName from the response body and you’re either consulting the spec, curling, or opening Postman to get some real data. Keeping your routes defined in one place means you only need to consult the API spec once (or perhaps not at all if you follow the tip at the end of the post) in order to know what your types are. Your SDK maintainers should only need to go to one place to understand the routes and their arguments:

type Routes = {
  'users': NoArgs;
  'users/:userId:': UserId;
};

Note that the pattern :argument: was used here, but you can use whatever is best for the libraries/helper methods that you already have. In addition, this API currently only has GET endpoints with no query parameters, so we’re keeping the types on the simple side. Feel free to declare some intermediate types that clearly separate out route, query, and body parameters. Then your function(s) that actually call API endpoints will know what to do with said parameters when it comes time to actually call an endpoint. This is a good segue into the next point:

Use generics to make code reuse easy

It’s hard to overstate how powerful generics can be when it comes to maintaining type safety while still allowing code reuse. It’s easy to slap an any on a return value and just cast your data in your calling function, but that’s quite risky, as it prevents TypeScript from verifying that the function call is safe. It also makes code harder to understand, as there’s missing context. Let’s take a look at a couple of types that can help out for our example.

type RouteArgs<T extends keyof Routes> = {
  route: T;
  params: Routes[T];
};

const callEndpoint = <Route extends keyof Routes, ExpectedReturn>(args: RouteArgs<Route>): ExpectedReturn => {
  // your client code goes here (axios, fetch, etc.). Include any error handling.

  // Don't do this, use a type guard to verify that the data is correct!
  return [{id: 1, name: "user1"}, {id: 2, name: "user2"}] as unknown as ExpectedReturn
}

Note the T extends keyof Routes in our generic parameter for the type RouteArgs. This builds upon the Routes type that we used earlier, making it impossible to use any string that is not already defined as a route when you’re writing a function that includes a parameter of this type. This also enables you to use Routes[T], meaning that you don’t have to know the specific type at design-time. You get type safety for all of your calling functions.

Note that we also do not assign a type alias to the type of callEndpoint. This type is intended to only be used once in this code base. If you are defining multiple different callEndpoint functions (for example, if you want to separate out logic for each HTTP verb), aliasing your types to make sure that no new errors are being introduced would be highly recommended.

Note that type guards are mentioned in the comment. This code lives at the edge of type safety. You will never be 100% sure that the data that comes back from your API endpoint is the structure you expect. That’s where type guards come in. Make sure that you’re running type guards against these return types. Type guards are outside of the scope of this post, but guarding for concrete types in a generic function can be complex and/or tedious. Depending on your needs, you may choose to use an unsafe type cast similar to the example and put the responsibility of calling the type guard on the calling function. We won’t cover strategies for ensuring these types are correct in this post, but this is an area you should study carefully.
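As a rough illustration of the idea (not a full validation strategy), here is what a guard for the User type defined earlier could look like. The isUser and isUserArray names are just illustrative helpers, not part of any library:

const isUser = (value: unknown): value is User => {
  if (typeof value !== 'object' || value === null) return false;
  const candidate = value as Record<string, unknown>;
  // UserId is a number and UserName is a string in our aliases above
  return typeof candidate.id === 'number' && typeof candidate.name === 'string';
};

const isUserArray = (value: unknown): value is User[] =>
  Array.isArray(value) && value.every(isUser);

A calling function could run isUserArray over the raw response before treating it as User[], narrowing the type safely instead of relying on an unchecked cast.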

Tying it all together

What do we get for our work? Let’s take a look at the code that an SDK maintainer might write to use the types that we’ve defined:

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'users', params: undefined})

  return users
}

Hopefully it’s clear that we’ve gotten some value out of this. This call is entirely type safe (shown below), and is quite concise and easy to read.

Note that we also don’t have to specify the generic types here. TypeScript is inferring the types for us. If we make a mistake, the code won’t compile! Here are a couple of examples of bad calls and their corresponding errors:

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'user', params: undefined})
  // Type '"user"' is not assignable to type 'keyof Routes'. Did you mean '"users"'?
  return users
}

Look at that helpful error message! Not only does it tell us we’re wrong, it suggests what might be right.

What if we try to pass an argument to this route? If you remember, we defined it to explicitly accept no arguments.

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'users', params: 'someUserName'})
  // Type 'string' is not assignable to type 'undefined'.(2322)
  // {file and line number go here}: The expected type comes from property 'params' which is declared here on type 'RouteArgs<"users">'
  return users
}

This is also helpful, though there are some limitations. TypeScript will not surface the alias that we defined (NoArgs) in the error, unfortunately. However, it does tell us exactly where the source of the error is, allowing an engineer to trace exactly why a string won’t work. The engineer will then see the NoArgs type and have a clear understanding of what went wrong.

What’s missing/limitations?

The examples here could still be improved upon. Note that ExpectedReturn is part of callEndpoint. This means that an SDK maintainer would need to have some knowledge of which type to pick (if not the specific structure). Why not include this information in our Routes type? That may make a good exercise for the reader.

As previously mentioned, type aliases do not get passed through to compiler errors. There are some workarounds, however.

Depending on how you’re handling various verbs, your type guards/generic functions can get quite complex. This won’t have an impact on those maintaining your SDK, but there can be an up-front cost to defining these types. It’s up to you to decide whether to pay that cost.

What was that about avoiding all this?

Hopefully with the tips in this article, you feel more confident about making maintainable SDKs. However, wouldn’t it be nice if you just didn’t have to develop an SDK at all? After all, you have an API spec; and that should be enough to generate the code, right? Fortunately, the answer is yes, and liblab offers a solution to do just that. If you don’t want to think about challenges like error handling and maintainability for your SDK, liblab’s SDK generation tools may be able to help you.

← Back to the liblab blog

Introduction

When working on applications and systems, we usually rely on APIs to enable integrations between services that make up our system.

The purpose of this article is to provide an understanding of some important metrics used to measure an API's performance. For each of those metrics, I will also touch on some factors affecting them and ways to improve them, which will in turn enhance your API’s performance.

Overview of the key API metrics

To cover the API performance metrics in a comprehensive manner, I divided this article into two parts. In this first part, I will talk about three key metrics: Response Time, Throughput, and Latency.

1. Response Time

Response time is the time it takes for an API to respond to a request from a client application. It gives us a measure of our application's responsiveness, which in turn has an impact on the user's experience.

Factors Affecting Response Time

  • Network latency is simply the delay in the network connection between client applications and your API. Congestion and increased physical distance between servers in a network are examples of situations that impact network latency.
  • If you make use of external or third-party services, then the overall response times of your API will also depend on the response times of those services.
  • The response times of your API can also be affected by slow or poorly written database queries.

Monitoring Your API's Response Time

Monitoring and analyzing response time can help identify bottlenecks, optimize API performance, and ensure service level agreements (SLAs) are met.

There are lots of tools out there that can be used to monitor your API's response time. Here are some popular ones:

  • Apache JMeter
  • Pingdom
  • Datadog
  • New Relic
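Beyond dedicated monitoring tools, a quick ad-hoc measurement from a client can already be revealing. The sketch below is only an illustration: it assumes a runtime where fetch and performance.now() are available (modern browsers or Node 18+), and the URL is a placeholder.

const measureResponseTime = async (url: string): Promise<number> => {
  const start = performance.now();    // high-resolution timestamp before the request
  const response = await fetch(url);  // issue the HTTP call
  await response.arrayBuffer();       // wait until the full body has been received
  return performance.now() - start;   // elapsed time in milliseconds
};

// Example usage with a placeholder URL:
measureResponseTime('https://api.example.com/v1/users')
  .then((ms) => console.log(`Response time: ${ms.toFixed(1)} ms`));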

Improving Your API's Response Time

There are several approaches you can take to improve the response time of your API:

  • Making use of a load balancer
  • Optimizing your code to reduce unnecessary computations, database queries, or network requests
  • Implementing caching mechanisms
  • Making use of content delivery networks (CDNs)

2. Throughput

Throughput is simply the number of requests an API can handle within a given time period. It is an important metric for measuring an API's performance and is usually measured in requests per second (RPS).

An API with higher throughput simply means the system can handle a larger volume of requests, which ensures optimal performance even during peak API usage periods.

Monitoring Throughput

Monitoring throughput in the context of API performance involves analyzing and optimizing various factors such as:

  • Server capacity
  • Network bandwidth
  • Request processing time

Improving Your API's Throughput

By employing techniques such as horizontal scaling, load balancing, and asynchronous processing, you can ensure a higher throughput which will significantly improve your API's performance.

3. Latency

Latency is another key performance metric for analyzing the performance of an API. It's a measure of the time taken for a client to send a request and get back a response from an API server.

Factors affecting API Latency

Some known factors that affect latency include:

  • API responses with large data sets
  • Network congestion
  • Inefficient or poorly written code

How To Minimize Latency

It is very important to reduce latency, as higher latency can lead to sluggish user experiences, increased waiting times, and reduced overall performance. Some ways to reduce latency include:

  • Employing caching mechanisms (a minimal sketch follows this list)
  • Applying data compression techniques
  • Returning data in chunks
  • Optimizing network protocols
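To make the caching idea concrete, here is a small in-memory cache with a time-to-live, written in TypeScript. It is only a sketch under simple assumptions: real deployments often use a dedicated store such as Redis, and the fetchUser function below is a placeholder for whatever expensive lookup you are protecting.

type CacheEntry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // The entry is stale, drop it and report a miss.
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// Placeholder for an expensive lookup (database query, downstream API call, etc.)
declare function fetchUser(id: string): Promise<{ id: string; name: string }>;

const userCache = new TtlCache<{ id: string; name: string }>(60_000); // 60-second TTL

const getUser = async (id: string) => {
  const cached = userCache.get(id);
  if (cached) return cached;          // cache hit: skip the expensive call entirely
  const user = await fetchUser(id);
  userCache.set(id, user);
  return user;
};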

4. Request Rate

Request Rate is an API performance metric that measures the rate or frequency at which requests are being made to an API within a specific time frame.

It provides insights into the load or demand placed on the API and helps gauge its capacity to handle incoming requests.

By monitoring request rate, API providers can identify usage patterns, peak periods, and potential scalability challenges which will help to anticipate traffic spikes, and plan resource allocation accordingly.

Monitoring API Request Rate

Request rate is typically measured over specific time intervals, such as:

  • Requests per second (RPS)
  • Requests per minute (RPM)
  • Requests per hour (RPH)

The different measurement intervals determine the granularity of the metric and allow you to analyze request patterns over different time periods.

There are several tools available to measure and analyze request rates for your API. Here are some popular options:

  • AWS CloudWatch
  • Google Cloud Monitoring
  • Grafana
  • Datadog
  • Prometheus

Optimizing For Higher Request Rates

To be able to handle increasing request rates during peak periods or as a result of high usage of some particular business features, you can consider implementing the following techniques:

  • Horizontal scaling: Use horizontal scaling techniques, such as distributing the load across multiple servers or instances. By adding more servers or utilizing cloud-based solutions that provide on-demand scaling of resources, you can handle a higher volume of requests by leveraging the collective resources of multiple machines.
  • Asynchronous processing: By identifying time-consuming or resource-intensive operations that can be performed asynchronously, you can free up resources to handle more incoming requests. This prevents blocking and allows your API to handle a higher request rate when such operations are offloaded to background tasks or queues.
  • Caching: Caching can significantly improve response times and reduce the load on your API, especially for static or infrequently changing data. Utilizing caching techniques like in-memory caches or CDNs can help your API efficiently handle higher request rates.

5. CPU Utilization

CPU utilization is another important metric that measures the percentage of CPU resources used during the processing of an API request. It provides insights into the efficiency of resource allocation and can be a key indicator of API performance.

Factors that can impact CPU usage during API processing include inefficient code implementation, computationally intensive operations, and the presence of resource-intensive tasks or algorithms.

Monitoring CPU Utilization

To effectively monitor CPU utilization, developers can employ various tools to gain insights into CPU usage. Some examples are New Relic, Datadog, or Prometheus.

Ways To Improve CPU Utilization

Below are some ways to reduce CPU usage within your API:

  • Efficient algorithm design: Analyze your API code for computational bottlenecks and optimize them by using efficient algorithms and data structures. This helps reduce CPU usage for operations that would otherwise be more CPU intensive.
  • Throttling and rate limiting: Implement throttling mechanisms or rate limiters to control request rates and the maximum number of API calls that can be made within a specific time window (a minimal sketch follows this list). This in turn prevents overload on the CPU.
  • Load balancing: By making use of a load balancer, you can distribute incoming requests across multiple servers, effectively distributing the CPU load.
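As a rough illustration of the throttling idea, here is a small fixed-window rate limiter in TypeScript. It is an in-memory sketch keyed by client ID, not a production implementation; distributed APIs usually enforce limits in a shared store or at the API gateway.

class FixedWindowRateLimiter {
  private counters = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if the client should get HTTP 429.
  allow(clientId: string): boolean {
    const now = Date.now();
    const counter = this.counters.get(clientId);

    if (!counter || now - counter.windowStart >= this.windowMs) {
      // Start a fresh window for this client.
      this.counters.set(clientId, { windowStart: now, count: 1 });
      return true;
    }

    if (counter.count >= this.limit) return false; // over the limit for this window
    counter.count += 1;
    return true;
  }
}

// Example: allow at most 100 requests per minute per client.
const limiter = new FixedWindowRateLimiter(100, 60_000);
// if (!limiter.allow(clientId)) { /* respond with 429 Too Many Requests */ }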

6. Memory Utilization

Memory utilization refers to the amount of system memory (RAM) used by the API during its operation. Efficient memory management is crucial for optimal performance. Excessive memory usage can lead to increased response times, resource contention, and even system instability.

Ways To Improve Memory Utilization

Here are some key points to consider to improve memory usage within your API:

  • Caching: Employ in-memory caching mechanisms to store frequently accessed data or computations. This reduces the need for repeated processing and improves response times by serving precomputed results from memory.
  • Data pagination: When dealing with large datasets, consider implementing pagination rather than loading the entire dataset into memory; fetch and process data in smaller chunks or stream it to the client as it becomes available (see the sketch after this list). This approach reduces memory pressure and enables efficient processing of large datasets.
  • Memory profiling tools: Utilize memory profiling tools to identify memory bottlenecks and areas of high memory consumption within your API. These tools can help you pinpoint specific code segments or data structures that contribute to excessive memory usage.
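To illustrate the pagination point, here is a small generic helper that slices a dataset into pages. It is a simplified sketch that assumes the data is already in memory; with a real database you would push the offset and limit into the query itself.

type Page<T> = {
  items: T[];
  page: number;      // 1-based page number
  pageSize: number;
  totalItems: number;
  totalPages: number;
};

const paginate = <T>(items: T[], page: number, pageSize: number): Page<T> => {
  const totalItems = items.length;
  const totalPages = Math.max(1, Math.ceil(totalItems / pageSize));
  const currentPage = Math.min(Math.max(1, page), totalPages); // clamp to a valid page
  const start = (currentPage - 1) * pageSize;

  return {
    items: items.slice(start, start + pageSize),
    page: currentPage,
    pageSize,
    totalItems,
    totalPages,
  };
};

// Example: return the second page of 25 users instead of the full dataset.
// const response = paginate(allUsers, 2, 25);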

Conclusion

In this article, we discussed the importance of measuring API performance and some key metrics that tell us how well our API is performing.

Improving API performance and building SDKs for an API are two of the many challenges that API developers face. Here at liblab, we offer a seamless approach to building robust SDKs from scratch by carefully examining your API specifications.

By leveraging services like liblab, API providers can generate SDKs for their APIs, further enhancing their developer experience and accelerating the integration process with their APIs.

← Back to the liblab blog

In the realm of software engineering, there exists a unique breed of engineers known as pragmatists. These individuals possess a distinct approach to their craft, blending technical expertise with a grounded mindset that sets them apart from their peers. But what truly sets a pragmatic engineer apart? What is it about their approach that makes them so effective in navigating the complexities of software development?

To answer this we will dive into the provoking thoughts and ideas inspired by the books The Clean Coder: A Code of Conduct for Professional Programmers by Robert C. Martin and The Pragmatic Programmer: Your Journey To Mastery, 20th Anniversary Edition (2nd Edition) by David Thomas and Andrew Hunt.

At the core of pragmatism lies a set of behaviours that define the pragmatic engineer. One such behaviour is being an early/fast adopter. Pragmatists eagerly embrace new technologies, swiftly grasping their intricacies and staying ahead of the curve.

Curiosity is another characteristic that fuels the pragmatic mindset. Pragmatists are inquisitive souls, always ready to question and seek understanding when faced with unfamiliar concepts or challenges.

Critical thinking is a cornerstone of pragmatism. Rather than accepting solutions at face value, pragmatic engineers apply their analytical minds to evaluate and challenge proposed solutions, always on the lookout for more elegant or efficient alternatives.

Pragmatists also possess a keen sense of realism. They strive to understand the underlying nature of the problems they encounter, grounding their solutions in practicality and addressing the true essence of the challenges they face.

Embracing a broad spectrum of knowledge is another defining trait of the pragmatic engineer. Rather than limiting themselves to a single technology or domain, they actively seek to expand their skill set, becoming a polymath who can adapt to a wide range of contexts.

By understanding these foundational behaviours, we gain some insight into the pragmatic philosophy. It is a mindset that values adaptability, practicality, and a continuous thirst for knowledge. Now let’s explore the intricacies of the pragmatic engineer’s thinking, unraveling the secrets that make them such effective and versatile engineers in the ever-evolving world of software development.

In this first part of the blog series we will delve into the first three key aspects (Cat ate my source code, Just a broken window and Make a stone soup), providing sufficient time for contemplation and assimilation. Subsequently, in the next part, we will examine the remaining three aspects (Good-enough software, Knowledge portfolio and Talk the Talk) and conclude with some final thoughts.

Enjoy the enlightening journey ahead.

Cat ate my source code

A cat licking its lips

Can you imagine a cat eating the source code? How does that statement sound to you? Do you find it silly, funny or maybe even stupid? Well, that is the same way your colleagues will feel if you try to make excuses for the mistakes you made. Never make excuses; forget that the word excuse even exists. As a pragmatic engineer you are in complete control of your career and you take full responsibility for the work you do. There’s a saying that if you don’t make mistakes it means that you’re either playing it too safe or you’re not playing at all. Mistakes are inevitable. They are an integral part of doing the work and growing as an engineer. Your team will forgive you for the mistake you made, but they won't forgive or forget if you don't take responsibility for it. It’s how you respond to a mistake that makes all the difference. Accepting that you made a mistake and taking responsibility publicly is not the most pleasant experience, but you should be truthful, direct and never shy away from it.

Here are some ways that pragmatic engineers deal with mistakes and responsibility:

  • Choosing Options over Excuses:
    • Pragmatic engineers prioritise finding options rather than making excuses.
    • They avoid labelling something as impossible and instead approach challenges with a curious and can-do mindset.
  • Asking Questions and Preparing Options:
    • When faced with a difficult situation, pragmatic engineers ask questions to gain a deeper understanding.
    • They prepare options and alternatives to tackle the problem effectively.
  • Refactoring or Removing Project Parts:
    • Pragmatic engineers consider the possibility of refactoring or removing certain project components if necessary.
    • They investigate, create a plan, assess risks, estimate impacts, and present their findings to the team.
  • Assessing Needs and Implementing Improvements:
    • Pragmatic engineers identify the needs for proof of concept, enhanced testing, automation, or cleaning the code base.
    • They proactively address these requirements to prevent errors and optimize processes.
  • Seeking Resources and Collaboration:
    • Pragmatic engineers are not hesitant to request additional resources or seek help when needed.
    • They aren't afraid to admit when they lack the skills to solve a problem and will ask for help instead.
    • They understand the importance of putting in effort to explore options, ask for support, and leverage available resources.

💡 Just for thought: How do you react when someone gives you a lame excuse? Do you start losing trust?

Just a broken window

Windows in an old building, one of the windows has a broken pane

Why are some projects falling apart and inevitably increasing their technical debt? Even when they have talented people and enough time, what is the problem? Have you heard about the broken windows theory? Research in social criminology states that if a building has just one broken window left un-repaired for a long period of time, it creates an environment that encourages further window breaking. This is as true in nice neighbourhoods as in rundown ones. Multiple broken windows create a sense of negligence that then results in more civil disorder, such as trash left out or graffiti on the building walls. In a relatively short period, the apartment owners get a sense of hopelessness, which results in negativity spreading and a contagious sense of depression. As a consequence, that part of the neighbourhood becomes associated with being unsafe and, ultimately, the building gets abandoned.

Here is a reflection of the theory on the technology world:

  • The broken windows theory applies to software engineering projects as well:
    • Un-repaired broken windows such as bad design, wrong decisions, or unclean code contribute to technical debt and system entropy.
    • Technical debt decreases efficiency and throughput of the team and it increases dissatisfaction and frustration, potentially leading to more technical debt and an unsuccessful project outcome.
  • Psychology and culture play a vital role:
    • In projects filled with technical debt, it's easy to pass blame and follow poor examples (easy to say “Well it’s not my fault, I work with what I have, I will just follow the example”).
    • In well-designed projects, a natural inclination arises to maintain cleanliness and tidiness
  • Pragmatic engineers resist causing further damage:
    • They don't allow deadlines, high-priority issues, or crises to push them into increasing the project's collateral damage.
    • They adhere to The Boy Scout Rule from the book Clean Code by Robert C. Martin.
    • They diligently track broken windows and create concrete plans to fix them.
    • They don't think “I'll improve this later”, and trust that it will be done. They create tasks for things to be tracked and tackled in the future.
    • They demonstrate care, effort, and a resolve to address known issues.

💡 Just for thought: What did you do the last time you saw a broken window in your project? Did you take some action to repair it, or did you look the other way and think “it’s not my fault that it’s broken”?

Make a stone soup

A bowl of yellow soup with a sprig of rosemary and a stone in it

In a well-known European folk story, hungry strangers visit a village seeking food. The villagers refuse to assist, so the strangers, having only an empty cooking pot, light a fire. They place the pot on the fire, filling it with water and a large stone. As the strangers gather around the boiling pot, the curious villagers slowly start approaching and asking questions. The strangers tell the villagers that they are preparing a “stone soup” and, even though it’s a complete meal on its own, they encourage them to bring some vegetables to make it even better. The first villager, who anticipates enjoying a share of the soup, brings a few carrots from his home. Soon after, the second and third villagers follow, bringing their share of onions and potatoes. Convinced by the actions of the first three, more and more villagers start bringing tomatoes, sweetcorn, beef, salt and pepper. Finally, making sure that the soup is ready, the strangers remove the stone and serve a meal to everyone present.

What is the moral of this story?

Would you say that the villagers got tricked into sharing their food?

Why is it important in the context of software engineering?

The tale teaches a moral lesson about the power of collaboration and generosity. However, in the context of software engineering, the story could be used as an analogy to emphasise the importance of teamwork and resourcefulness in problem-solving. Just as the strangers in the story creatively used their limited resources to provide a solution, software engineers often need to think outside the box and work together to overcome challenges and deliver successful projects.

Here’s how pragmatic engineers make the stone soup:

  • Challenging Others and Communicating Vision:
    • Pragmatic engineers face the challenge of rallying others to contribute and align with their vision.
    • They draw inspiration from the tale of stone soup, emphasising the importance of proactivity and resourcefulness.
  • Acting as Catalysts for Change:
    • Pragmatic engineers take the initiative to create an initial proof of concept and lift ideas off the ground.
    • They actively work towards gaining buy-in from their colleagues.
  • Inspiring Others and Transforming Vision into Reality:
    • By presenting a glimpse of a promising future, pragmatic engineers inspire others to join their cause.
    • They collectively work to transform the shared vision into reality.
  • Creating a Win-Win Situation:
    • Through their efforts, pragmatic engineers create a win-win situation that benefits everyone involved.

💡 Just for thought: Have you ever made a stone soup for your team? How did they react?

End of part one

Are you filled with anticipation to discover the remaining aspects? If so, consider yourself fortunate as the second part of the blog series is just around the corner. Take this time to reflect on the insightful aspects discussed in this blog. Challenge yourself to apply at least one of the ideas you found most intriguing. Remember to hold yourself accountable and engage in self-reflection after a few weeks to assess your progress. Even a slight enhancement can lead to significant growth. Allow these concepts to simmer in your mind, ready to inspire your actions.

“Start by doing what is necessary, then do what is possible, and suddenly you are doing the impossible.”

Saint Francis of Assisi

In the upcoming blog post we will dive deeper into the topics of Good-enough software, Knowledge portfolio and Talk the talk.

Until next time, stay sharp!

← Back to the liblab blog

Introduction to REST API Versioning

We all understand the significance of APIs in software development, as they facilitate data sharing and communication across various software systems. Ensuring their proper functioning is paramount. Implementing proven conventions in your API can greatly enhance its scalability and maintainability. This post delves into versioning techniques and how leveraging existing tools can simplify the process.

Versioning is a key concept that enables your applications to maintain backward compatibility as your API evolves. Without proper versioning, any modifications made to your API could cause unexpected errors and disruptions in current client applications. REST API versioning allows you to introduce updates while ensuring earlier versions continue to function correctly.

Common Versioning Techniques

To implement versioning in your API, here are three popular methods:

  1. URL-Based Versioning: In this method, the version number is incorporated into the URL path. For instance, Version 1 of the API is represented by https://api.example.com/v1/resource.
  2. Query Parameter Versioning: This technique involves appending the version number as a query parameter in the API endpoint. For example, https://api.example.com/resource?version=1.
  3. Header-Based Versioning: With this approach, the version number is specified in a unique header field, such as Accept-Version or X-API-Version.

There is no unanimous consensus on the best approach, as each has its advantages. When choosing, consider the following:

  • URL-based
    • Pros: Easy to shut down obsolete versions; facilitates separation of authentication concerns for different versions; compatible with most frameworks; the version is always clear and obvious.
    • Cons: Requires adoption from the start, otherwise it necessitates code refactoring; difficulty in adding patch versions.
  • Query parameter
    • Pros: Easy to implement in existing APIs; allows for the addition of patch versions; provides control over the default version provided to clients.
    • Cons: The version might be optional; challenging to separate authentication concerns; harder to retire or deactivate obsolete versions; potential confusion distinguishing between the data version and the API version.
  • Header-based
    • Pros: Easy to implement in existing APIs; allows for the addition of patch versions; provides control over the default version provided to clients.
    • Cons: The version might be optional; challenging to separate authentication concerns; harder to retire or deactivate obsolete versions.
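To make these options concrete, here is a small TypeScript sketch showing what each technique looks like from a client's point of view. The URLs, endpoint, and header name are placeholders drawn from the examples above, and the snippet assumes a runtime where fetch is available.

// URL-based: the version lives in the path.
const getUserUrlVersioned = (id: string) =>
  fetch(`https://api.example.com/v1/users/${id}`);

// Query-parameter-based: the version is appended as a query string value.
const getUserQueryVersioned = (id: string) =>
  fetch(`https://api.example.com/users/${id}?version=1`);

// Header-based: the version travels in a dedicated header field.
const getUserHeaderVersioned = (id: string) =>
  fetch(`https://api.example.com/users/${id}`, {
    headers: { 'X-API-Version': '1' },
  });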

Now that you've selected a versioning technique, do you need to update all client applications every time a new version is deployed?

Ideally, keeping client applications up to date ensures optimal API utilization and minimizes issues. However, this doesn't have to be a complicated process if you employ the right tools: SDKs.

How SDKs Assist Client Applications in Adapting to Available Versions

SDKs (Software Development Kits) are libraries that handle API integration, including versioning, on behalf of developers. They offer the following benefits:

  1. Version Management and Compatibility: SDKs allow you to select the API version you want to use, simplifying the process of switching between versions.
  2. Handling Different API Versions: SDKs provide a unified interface for client developers, abstracting the differences between API versions. Regardless of the underlying version, developers can interact with the SDK using standardized techniques and models.
  3. Error Handling: Some versions might also handle errors differently, and SDKs will cover the required changes out of the box
  4. Compile-time errors: SDKs will also present you with compile-time errors when a major change has occurred between the versions, allowing you to save time on testing each change manually.
  5. Automatic updates: And last, but not least, if you are using an SDK provider, you don’t even have to worry about updating the SDK yourself, as all updates will be covered automatically.

To learn more about SDKs, check out this article on how SDKs benefit API management.

"You might wonder if building and maintaining an SDK is more challenging than adapting to newer API versions. After all, you would need to update the SDK to accommodate changes as well."

This is where liblab comes in. We offer an impressive suite of tools to effortlessly build robust and comprehensive SDKs from scratch. By analyzing your API spec, liblab can generate SDKs tailored to your API's needs. These SDKs are flexible and include all the necessary components out of the box.

If you love liblab, but your company hesitates to invest in new tools, check out this article on how to convince management to invest in the tools you need.

Conclusion

Properly versioning your REST API is crucial for its evolution and long-term stability. By utilizing versioning techniques such as URL-based, query parameter-based, or header-based approaches, you can manage changes while ensuring backward compatibility. Additionally, SDKs can assist client applications by abstracting API complexities, managing different versions, and providing consistent interfaces. By following best practices in REST API versioning, you can facilitate smoother transitions, enhance developer experience, and maintain strong relationships with your API consumers.

← Back to the liblab blog

The rise of remote work has transformed the way we work. With the ability to work from anywhere in the world, companies can now recruit the best talent from all over the globe. However, recruiting a winning team for a remote-first, globally distributed SaaS company comes with a particular set of challenges. In this article, we'll discuss how liblab recruits the best talent from all over the world to join our remote-first company, and the strategies we use to build and retain a winning remote team.

The Challenges

Challenge #1: Lack of Physical Presence and Face-to-Face Interaction

One of the biggest challenges we encounter while recruiting is the lack of physical presence and face-to-face interaction. In a traditional office setting, recruiters and hiring managers can meet with the candidates in person, get a sense of their personality and work style, and assess their fit with the company culture. However, in a remote setting, these interactions are limited to virtual meetings and online communication.

To overcome this challenge, we’re very intentional about creating opportunities for virtual face-to-face interaction. This can include video interviews, virtual team-building activities, and online assessments that allow candidates to showcase their skills and work style. We leverage all types of video conferencing technology.

Additionally, it's important for us to have a strong company culture that is communicated effectively to candidates. By clearly defining our company's values and mission, and emphasizing the importance of collaboration and communication in a remote setting, we’re more likely to attract candidates who are a good fit for our work-from-anywhere culture.

Challenge #2: Ensuring Cultural Fit in a Distributed Team

Cultural fit is a critical factor in the success of any team, but it can be particularly challenging to assess in a remote setting. Without the benefit of in-person interactions and observations, our recruiters and hiring managers must rely on other indicators to assess cultural fit, such as a candidate's communication style, work habits, and attitude.

To ensure cultural fit in our distributed team, we're careful about defining our company culture and values, and about communicating them effectively to candidates. Additionally, we include as many of our team members as possible in the hiring process, allowing them to assess the candidate's fit within their respective team. This includes virtual team interviews and technical assessments, where team members can ask questions and provide feedback on the candidate's technical and cultural fit.

Challenge #3: Evaluating Technical Skills and Expertise Without In-Person Assessments

Another challenge we face is evaluating technical skills and expertise without in-person assessments. In a traditional office setting, candidates can be asked to complete technical assessments or work samples in person, which can provide valuable insights into their skills and abilities, and allow observation of how they may relate to teammates. However, in a remote setting, these assessments must be conducted online, which can be more challenging to manage and evaluate because it reduces the number of personality cues we can pick up.

To evaluate technical skills and expertise in a remote setting, we use online assessment tools and platforms that allow candidates to complete assessments and submit coding samples remotely. Additionally, we conduct virtual coding challenges and pair programming exercises, where candidates can work with our team members to solve real-world problems and demonstrate their skills.

Challenge #4: Addressing Time Zone Differences and Collaboration Challenges

Finally, we must address time zone differences and collaboration challenges when recruiting and hiring. In a distributed team, team members may be located in different time zones, which can make communication and collaboration more difficult. Additionally, remote teams may face other collaboration challenges, such as language barriers or differences in work styles.

To address time zone differences and collaboration challenges, we obsess over clear communication and collaboration practices. This means a heavy reliance on video conferencing and chat tools, and setting clear expectations for response times and availability. We also leverage productivity tools that allow us to share and collaborate on documentation.

Additionally, being distributed across different time zones means we need to be more efficient and proficient with asynchronous communication. We heavily leverage our instant chat app. Whether it's by voice, video, or text, we're constantly messaging, emailing, and leaving comments. Our gif and meme games are top-notch.

The Strategies

As can be seen with the challenges mentioned, recruiting a winning team for a globally dispersed remote company is no easy task. However, with the right strategies in place, it’s very possible to attract and retain the right talent for a distributed team. Next we’ll explore some of the key strategies liblab uses to recruit talented and passionate candidates from all over the world.

Strategy #1: Define Your Company Culture and Values

In a remote setting, it can be more challenging to establish a strong company culture. But defining company culture and values is essential for attracting and retaining top talent. By clearly defining company culture and values, we can communicate our vision and mission to potential candidates and attract those who share our values.

To define our company culture and values, we start by identifying what is most important to us. This includes things like ownership, agility, and customer satisfaction. With our core values identified, we communicate them to our team and potential candidates through job postings, our website, and other marketing materials (such as this blog!).

Ultimately, we value getting things done. When recruiting, we look for candidates who want to own the stack end-to-end.

Strategy #2: Recruit from Diverse Talent Pools

In a remote setting, we have access to a wider range of talent than we would in a traditional office setting. Because we're not restricted to a local talent pool, we're able to find talented candidates who live far outside our immediate area. By recruiting from diverse talent pools, we can bring in fresh perspectives and ideas that can help us grow and innovate.

We embrace everyone’s unique backgrounds and make every effort to ensure each and every person feels included, heard, valued. And we believe that such principles should extend beyond our team into our relations with customers, partners, and of course, our candidates.

To recruit from diverse talent pools, it’s highly beneficial to partner with organizations that focus on diversity and inclusion. Additionally, consider posting job openings on sites that cater to diverse candidates. Finally, be sure to devote attention and resources to internal diversity initiatives and inclusion efforts.

Strategy #3: Conduct Thorough Evaluations

We carefully evaluate every potential candidate. Without the opportunity to meet with candidates in person, we can only use other evaluation methods, such as video interviews, online assessments, and work samples.

To conduct thorough evaluations, we start by identifying the key skills and qualities we’re looking for in a candidate. Then, we use a combination of evaluation methods to assess these skills and qualities. For example, we might conduct a video interview to assess communication skills, a pair programming session to evaluate live problem solving skills, and a coding challenge to understand technical proficiency.

Obviously, the more thorough the evaluation methods, the more strain they put on our resources. To reduce the time our developers spend reviewing candidate coding samples, we leverage external partners to augment the remote technical assessment portion of our recruiting. If you're wondering how to get buy-in to pursue these partnerships, here's our blog post on How to Convince Management to Invest in the Tools You Need.

Ultimately, we've designed a recruiting workflow that allows us to efficiently evaluate each candidate while providing a positive experience for them.

Strategy #4: Offer Competitive Compensation Packages

Offering competitive compensation packages is another important strategy for recruiting a winning team. In a remote setting, we’re competing with companies from all over the world for top talent, so it is important to offer compensation packages that are competitive with other companies in our industry.

To offer competitive compensation packages, we research industry standards and benchmark salaries for similar positions. Additionally, we offer benefits that are attractive to remote workers, such as flexible work-from-anywhere arrangements, professional development opportunities, and health and wellness benefits.

Strategy #5: Foster a Positive Company Culture

Finally, fostering a positive company culture is essential. In a remote setting, it can be more challenging to establish a strong company culture, but it’s critical for building a strong and cohesive team.

To foster a positive company culture, we start by codifying our core values. We then communicate our mission, we align in our vision, and we help each other achieve goals. As a mantra, we say “be smart and kind”. It’s not enough to be brilliant on our team; you must also be able to help your teammates win.

Additionally, we provide opportunities for team members to connect and collaborate. We sponsor teammate travel to each other's locations to work together in person. Finally, we bring people together at least once a year to get to know each other and bond in person: "liblab week" is an annual offsite where we all meet, get to know our peers, and have fun together.

Conclusion

Recruiting the best talent for a remote SaaS company doesn't require reinventing the wheel. But it does require an understanding of the challenges and a dedication to the strategies that make it possible to find the best available talent anywhere.

When it comes to growing the team, we focus on implementing a robust recruitment process and establishing a strong company culture. Also, we value adaptability, self-motivation, and autonomy, and we only look for difference makers who embody these ideals.

At liblab we’re proving that with a winning remote team, any SDK is possible.

← Back to the liblab blog

Publishing packages to NPM is not a particularly difficult challenge by itself. However, configuring your TypeScript project for success might be. Will your package work on most projects? Will the users have type-hinting and autocompletion? Will it work with ES Modules (ESM) and CommonJS (CJS) style imports?

After reading this post, you will understand how to make your TypeScript package more accessible and usable in any (or most) JavaScript and TypeScript projects, including browser support!

Creating a TypeScript Project

Chances are that if you are reading this, you already have a TypeScript project set up. If you do, you might want to skip to the next steps or stay around to check for discrepancies.

Let's start by creating our base Node.js project and adding TypeScript as a development dependency:

npm init -y
npm install typescript --save-dev

You likely want to structure your code inside a src folder. So let's create your package's entry point inside of it:

mkdir src
touch src/index.ts

Now, Node.js and browsers don't understand TypeScript, so we need to set up tsc (TypeScript compiler) to compile our TypeScript code to JavaScript. Let's add a tsconfig.json file to our project by running:

npx tsc --init

If we run npx tsc now, it will scan our folder and create .js files in the same directories as our .ts files (which is not desirable). Let's add better configuration before we run that and make a mess.

Add the following lines to tsconfig.json:

{
  "compilerOptions": {
    // ... Other options
    "rootDir": "./src", // Where to look for our code
    "outDir": "./dist" // Where to place the compiled JavaScript
  }
}

Let's also add a "build" script to our package.json:

{
  "scripts": {
    "build": "tsc"
  }
}

If we run npm run build now, a new dist folder will appear with the compiled JavaScript. If you're using Git, make sure to add the dist folder to your .gitignore.

Setting up tsc for Optimal Developer Experience

We can already compile our TypeScript to JavaScript. However, if you publish it to npm as is, you'll only be able to use it seamlessly in other JavaScript projects. Also, the generated config targets "es2016," while "es2015" is a safer target for broad browser compatibility. So let's fix that!

First, let's change our target to es2015 (or es6; they're the same thing). esModuleInterop is already enabled in the generated config. Let's leave it as is, since it improves compatibility by allowing ESM-style default imports of CommonJS modules.

We are all using TypeScript for a reason: types! But if you build and ship your package right now, no types will be shipped with it. Let's fix that by setting declaration to true. This will generate declaration files (.d.ts) alongside our .js files. With that alone, your package will be usable in TypeScript projects from the get-go and provide type hints even in JavaScript projects.

The declaration files already go a long way in improving support and developer experience. However, we can go further by adding declarationMap. With that, sourcemaps (.d.ts.map) will be generated to map our declaration files (.d.ts) to our original TypeScript source code (.ts). This means that code editors can go to the original TypeScript code when using "Go to definition," instead of the compiled JavaScript files.

While we're at it, sourceMap will add sourcemap files (.js.map) that allow debuggers and other tools to display the original TypeScript source code when actually working with the emitted JavaScript files.

Using declarationMap and/or sourceMap means we also need to publish our source code with the package to npm.
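
For reference, with declaration, declarationMap, and sourceMap all enabled, compiling the src/index.ts entry point from earlier should produce something like the following inside dist:

dist/
  index.js        (compiled JavaScript)
  index.js.map    (sourcemap: emitted JS back to the original TS)
  index.d.ts      (type declarations)
  index.d.ts.map  (declaration map: .d.ts back to the original TS)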

With all that, here is our final tsconfig.json file:

{
  "compilerOptions": {
    "target": "es2015",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "rootDir": "./src",
    "outDir": "./dist",
    "sourceMap": true,
    "declaration": true,
    "declarationMap": true
  }
}

package.json

Things are much simpler around here. We need to specify the entry point of our package when users import it. So let's set main to dist/index.js.

Other than the entry point, we also need to specify the main types declaration file. In this case, that would be dist/index.d.ts.

We also need to specify which files to ship with the package. Of course, we need to ship our built JavaScript files, but since we are using sourceMap and declarationMap, we also need to ship src.

Here's a reference package.json with all of that:

{
  "name": "the-greatest-sdk", // Your package name
  "version": "1.0.3", // Your package version
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "scripts": {
    "build": "tsc"
  },
  "keywords": [], // Add related keywords
  "author": "liblab", // Add yourself here
  "license": "ISC",
  "files": [
    "dist",
    "src"
  ],
  "devDependencies": {
    "ts-node": "^10.9.1",
    "typescript": "^5.0.4"
  }
}

Publishing to NPM

Publishing to NPM is not difficult. I strongly recommend taking a look at the official instructions, but here are the general steps:

  1. Make sure your package.json is set up appropriately.
  2. Build the project (with npm run build if you followed the guide).
  3. If you haven't already, authenticate to npm with npm login (you'll need an npm account).
  4. Run npm publish.

Keep in mind that if you update your package, you'll need to bump the version field in your package.json before publishing again.
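
For example, npm can bump the version for you. A typical patch-release flow looks like this (npm version also creates a git commit and tag when run inside a git repository):

npm version patch   # e.g. 1.0.3 -> 1.0.4
npm run build
npm publish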

There are more sophisticated (and recommended) ways to go about publishing, like using GitHub Actions and releases, especially for open-source packages, but that's out of scope for this post.

Conclusion

By following the approach discussed here, your TypeScript npm packages will provide better type hinting and autocompletion, and will support both ES Modules (ESM) and CommonJS (CJS) style imports, making them more accessible and usable by a wider audience.

Here at liblab, we know that preparing your project for NPM can be annoying. That's why our TypeScript SDKs come prepared with all the necessary adjustments for proper publishing to NPM. We'll even help you set up your CI/CD for seamless publishing. Contact us here to learn more about how we can help automate your API’s SDK creation.

← Back to the liblab blog

As engineers, we often prioritize scalability, elegance, and efficiency in finding solutions. We despise monotony, tedious tasks, and boring work that consume our time. It becomes especially frustrating when we identify a tool that can reduce effort and produce better results, only to be turned down by management. This article will guide you on how to effectively communicate your tool request to management, increasing the chances of getting a 'Yes' for your proposal.

You’re Not Asking The Right Way

You may find it obvious why the tool you recommend is a great investment. It saves time, improves results, and allows you to focus on high-return activities for your organization. However, management doesn't always grasp these benefits. Why is that?

The reason lies in how you present your case. Management is primarily concerned with the organization's Profit and Loss (P&L). Their role is to maximize revenue while minimizing costs. When you propose a tool, simply highlighting time savings or improved output doesn't necessarily resonate. Those benefits are one step removed from what truly matters to management. Unless you can quantify the impact of the tool on P&L, management will perceive it as an "additional cost."

Show Impact on P&L

So, how can we translate the "additional cost" into improving the P&L? It's actually quite straightforward. While there will be an initial cost associated with the tool, evaluating it in isolation is not a fair assessment of the P&L impact. To present the impact properly, you need to calculate the additional revenue generated by the tool or the opportunity cost of not having it. The sum of the additional revenue (Rev_gen) and the opportunity cost (Opp_cost) subtracted from the tool cost will quantify the impact on the P&L.

PL_impact = Rev_gen - (Tool_cost - Opp_cost)

Provided you can demonstrate that investing in your tool will result in a positive PL_impact (> 0), management should find it an easy decision to support.

How To Calculate P&L Impact

Now that we know what needs to be quantified (Rev_gen and Opp_cost), let's discuss the general methodology used to quantify them.

  1. Identify the input parameters required to calculate Rev_gen and Opp_cost.
    • Write down the expressions that dictate the calculations for each parameter. Assumptions are acceptable and often expected.
  2. Build a simple model (a spreadsheet, no LLM required 😉) to calculate Rev_gen and Opp_cost based on the input parameters.
  3. Present the calculations in a straightforward and simple manner. It's crucial to show all the math. It should be laid out so simply, that even a middle schooler can follow the logic.

Example - Calculating Opportunity Cost

In the following example, we will show how we at liblab calculate the opportunity cost associated with manually building and maintaining SDKs for an API. This helps our customers show that leveraging liblab's products and services costs a fraction of tackling the problem themselves (we assume a Rev_gen of 0 for simplicity's sake).

Opp_cost = APIs * Num_SDK * Cost_build+maintain_per_SDK
  1. Input Parameters

    1. APIs = Number of APIs your organization needs SDKs for
    2. Num_SDK: Number of SDK languages you need to support for your APIs
    3. Cost_build+maintain_per_SDK can be further broken down into:
      1. Cost_build/SDK: Engineering cost per SDK build
        1. Cost ($) / Hour derived from engineering salary
        2. Hours / SDK Build (assumption based on your organization's API complexity)
      2. Cost_maintain/SDK: Engineering cost per change and number of changes
        1. Cost ($) / Hour derived from engineering salary
        2. Hours / SDK Update (assumption based on your organization's API complexity)
        3. Update Frequency (assumption on how often your API changes)
  2. Model calculating Opp_cost based on input parameters:

    1. The reason we put this in a spreadsheet model, rather than just writing out the equations and answers, is that it makes it easy for management to see how the end result changes if they disagree with one of your assumptions. Most of the time this won't change the decision on the tool, but it will help you deal with any objections they may have.
    2. See models governing equations below:
    Opp_cost = APIs * Num_SDK * (Cost_build/SDK + Cost_maintain/SDK)
    Cost_build/SDK = Cost_hr * Hrs_build
    Cost_maintain/SDK = Cost_hr * Hrs_maintain * Update_frequency
  3. Present the Math Simply - Below is a screenshot of our liblab Investment Calculator

    1. This calculator depicts the investment needed by an average liblab customer if they are to build and maintain SDKs across 6 languages themselves for a single API.

SDK Investment Calculator

As you can see, the cost of building and maintaining SDKs for our average customer exceeds $100K. That makes the cost of liblab's products and services a fraction of the opportunity cost, resulting in a very good return and a positive impact on P&L.
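
As a rough illustration of how the governing equations above produce a number of that magnitude, here is a minimal TypeScript sketch. Every input below is an illustrative assumption, not liblab's actual benchmark, so substitute your own organization's figures.

// Illustrative assumptions only -- replace with your organization's numbers.
const apis = 1;             // APIs: number of APIs that need SDKs
const numSdk = 6;           // Num_SDK: SDK languages to support
const costPerHour = 75;     // Cost ($) / Hour, derived from engineering salary
const hoursPerBuild = 120;  // Hours / SDK build
const hoursPerUpdate = 20;  // Hours / SDK update
const updatesPerYear = 12;  // Update frequency

const costBuildPerSdk = costPerHour * hoursPerBuild;                      // 9,000
const costMaintainPerSdk = costPerHour * hoursPerUpdate * updatesPerYear; // 18,000

const oppCost = apis * numSdk * (costBuildPerSdk + costMaintainPerSdk);
console.log(oppCost); // 162,000 -- consistent with the >$100K figure above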

Conclusion

To obtain a "Yes" from management for the tools you need, demonstrate through a model how they will have a positive impact on the P&L.

At liblab, we have created a calculator that helps developers articulate the cost savings of using our services to management. Not to mention the additional impact a better developer experience can have on their business (not quantified in our calculator).

If you're interested in receiving a free estimate of your annual API SDK investment, reach out to us via our contact form and mention "Free SDK Investment Assessment" in the message field. We'll respond promptly and provide you with a customized report breakdown, similar to the example above, tailored to your organization.

← Back to the liblab blog

Software development can be a complex and daunting field, especially for those who are new to it. The tech world's jargon and acronyms can be confusing to newcomers. You may have heard the term “SDK.” But what exactly is an SDK, and why is it important for software development?

More specifically, how can an SDK, when applied to an API, create huge benefits for your API management?

In this article, we'll take a closer look at what an SDK is and why it's an essential tool for developers looking to create high-quality software applications. You'll also come away with a clear understanding of the benefits SDKs have on API management, for both API owners and end users.

What is an SDK?

An SDK (software development kit) is a programming library made to serve developers and the development process.

A good use case for SDKs is APIs, the application programming interfaces that allow two programs, applications, or services to communicate with each other. Without an SDK, a developer has to access the API via manual operations, whereas with an SDK, developers can interact with the API using pre-built functionality, enabling quicker and safer software development.

How to Use SDKs for APIs

An API is the interface of an application by which a developer can interact directly with that application. An SDK provides tools that help a developer interact with the API.

To emphasize how using an SDK is different from interacting with an API via “manual operations,” we will juxtapose calling The One API to find a book with both methods below.

Calling The One API via Manual Operations

We will call The One API "manually" in two ways: with curl in the terminal, and with the JavaScript fetch method.

curl -X GET https://the-one-api.dev/v2/book/1264392547 \
  -H "Accept: application/json" \
  -H "Authorization: Bearer THEONEAPISDK_BEARER_TOKEN"

This is a very basic way to query a server. It is available straight out of the terminal and gives you a way to describe the network request using arguments to one big command.

Explanation about the command:

  • curl is the command-line tool that transfers the data.
  • -X specifies the HTTP method to use.
  • GET is that method.
  • -H adds a request header.

In other words: transfer data, do it with a GET, and pass the headers Accept: application/json and Authorization: Bearer with your token.

JavaScript Fetch Method

async function fetchData() {
  try {
    const url = 'https://the-one-api.dev/v2/book/123';
    const res = await fetch(url, {
      headers: {
        Authorization: `Bearer 12345`,
      },
    });

    if (!res.ok) {
      throw new Error(res.statusText);
    }

    const data = await res.json();
    console.log(data);
  } catch (error) {
    console.log('error', error);
  }
}

This is the most basic way to query a server with JavaScript. What you see here is an asynchronous function that queries the server, parses the response as JSON, and then logs the resulting data.

It is better than the bash approach because:

  • Readability: Instead of arguments to one long command, you use an object, which is more human-readable and less error-prone.
  • Error handling: The try/catch block gives you a place to handle failures (here we simply log the error) instead of letting them crash the program.

Calling The One API via an SDK

import { TheOneApiSDK } from './src';

const sdk = new TheOneApiSDK(process.env.THEONEAPISDK_BEARER_TOKEN);

(async () => {
  const result = await sdk.Book.getBook('123');
  console.log(result);
})();

Notice that the SDK client allows us to use the Book controller and its getBook method to query the API. We set the headers when we instantiated the client, and we were ready to query the API. This approach is much easier to read and less error-prone.

This example is different from the two mentioned above because the user did not write the HTTP request.

The HTTP request is actually made behind the scenes (it might be written with JS fetch too) by the maintainer of the SDK. This allows the maintainer to decide:

  • What network protocol is going to be used.
  • What is the destination of the network request.
  • How the headers should be set for the network request.
  • Readability: It's very clear which action the user wants to perform, which controller is involved, and so on.

Who Benefits from SDKs?

There are many benefits to using an SDK. For the end users, it allows safer and cleaner access to the API. For the owners it ensures the API is used correctly and keeps support costs down.

SDKs can reduce costs in many ways, including:

Retry strategy. When writing an SDK, you can add logic that stops a client from endlessly retrying failed requests, preventing unwanted calls to the API (a short sketch of this appears after this list of benefits).

Better use. Because the user does not query the API directly, the API receives better calls or more standardized input from its clients.

Speed up development. SDK users enjoy faster development because requests to the server, and the responses that come back, are standardized.

Code maintenance. When querying the API directly, you need to keep up with every update the API makes. An SDK facilitates this interaction and absorbs those updates for you. You do, however, need to keep the SDK itself up to date.
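
Here is the retry-strategy sketch promised above. It's a hypothetical helper, not code from a real SDK, but it shows the kind of bounded retry-with-backoff logic an SDK can hide from its users:

// Hypothetical internal helper of an SDK; names are illustrative.
async function requestWithRetry(url: string, token: string, maxAttempts = 3): Promise<unknown> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${token}` },
    });
    if (res.ok) {
      return res.json();
    }
    // Give up on client errors, and stop after maxAttempts to avoid hammering the API.
    if (res.status < 500 || attempt === maxAttempts) {
      throw new Error(`Request failed with status ${res.status}`);
    }
    // Exponential backoff between attempts: 500ms, 1000ms, 2000ms, ...
    await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** (attempt - 1)));
  }
  throw new Error('unreachable');
}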

An SDK benefits the API user by:

  • Helping them understand how to use the API through semantically-named functions, parameters, and types.
  • Making methods easier to invoke, as they become readily available and discoverable via integrated development environments (IDEs).
  • Providing easier API access, with simple functions and parameters.
  • Preventing bad requests, and allowing the user to correct their input.

An SDK benefits the API owner by:

  • Ensuring the required inputs from the user are in every request.
  • Preventing wrong inputs from being sent to the API server by enforcing types and validations (see the sketch after this list).
  • Having an additional layer of validation to enforce usage patterns. For example, if a user is sending too many requests, the SDK can warn and stop them as they approach their limit.
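
And here is the input-validation sketch referenced in the list above. Again, the types, method, and URL are hypothetical, not taken from a real SDK, but they show how an SDK can reject bad input before it ever reaches the API server:

// Hypothetical typed SDK method; names and URL are illustrative only.
interface ListBooksOptions {
  authorId: string;  // required by the (hypothetical) API
  limit?: number;    // optional page size, assumed to be capped at 100
}

async function listBooks(options: ListBooksOptions, token: string): Promise<unknown> {
  // Validate before sending, so the server never sees a malformed request.
  if (!options.authorId.trim()) {
    throw new Error('authorId must be a non-empty string');
  }
  if (options.limit !== undefined && (options.limit < 1 || options.limit > 100)) {
    throw new Error('limit must be between 1 and 100');
  }
  const url = `https://api.example.com/v2/authors/${options.authorId}/books?limit=${options.limit ?? 10}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  if (!res.ok) {
    throw new Error(res.statusText);
  }
  return res.json();
}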

Conclusion

Many API owners don't provide SDKs because of the difficulty and the development time involved with creating one, not to mention the onerous task of maintaining them. An API can have dozens or even hundreds of endpoints, and each one requires a function definition.