← Back to the liblab blog

This is a guest post by Emmanuel Sibanda, a Full Stack Engineer with expertise in React/NextJS, Django, and Flask, who has been using liblab for one of his hobby projects.

Boxing data is very hard to come by; there is no single source of truth. One could argue that BoxRec is the 'single source of truth', but on BoxRec you will only find stats on a boxer's record and a breakdown of the fights they have had. If you want more nuanced data to better understand each boxer, you need to go to CompuBox for punch stats recorded per fight. Even that doesn't cover all fights, as CompuBox presumably only covers fights that are high profile enough for them to show up and manually record the number and type of punches thrown.

Some time back I built a project that automates retrieving data from BoxRec and enriching it with data from CompuBox. With this combination of data, I can analyze:

  • a boxer's record (e.g. the calibre of the opponents they have faced, based on their opposition's track record)
  • a boxer's defense (e.g. how many punches their opponents attempt to throw at them in each recorded fight and, on average, how many of those punches actually land). I could theoretically break down how well the boxer defends jabs and power shots
  • a boxer's accuracy, using similar logic to the above
  • how age has affected both a boxer's accuracy and defense, based on the above two points
  • a comparison of whether being more defensive or accurate correlates with winning a fight (e.g. when a fight goes the full length, do judges tend to favor accuracy, aggression, or defense?)

These are all useful questions. Whether you want to leverage machine learning to predict the outcome of a fight, build a realistic boxing game, or something else entirely, these questions could help you construct additional parameters to use in your prediction model.

Task: Create an easily accessible API to access the data I have collected

Caveat: I retrieved this data around November 2019 - a lot has happened since then. I intend to fetch new data on the 19th of November 2023.

When I first built this project out (initially a simple frontend enabling people to predict the outcome of a boxing match using a machine learning model I built with this data), I got quite a few emails from people asking how I got the data to build the model.

To make this data easily accessible, I developed a FastAPI app with an exposed endpoint for data retrieval. The implementation adheres to OpenAPI standards, and I integrated Swagger UI to enable access directly from the API documentation. You send the name of a boxer and receive stats pertaining to their record.
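For example, a minimal TypeScript call to the endpoint might look like this (the URL and response shape here are illustrative, not the actual BoxingData API contract):

```typescript
// Illustrative only - the real endpoint path and response fields may differ
async function getBoxerStats(name: string) {
  const response = await fetch(
    `https://api.example.com/boxers?name=${encodeURIComponent(name)}`
  );
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json(); // stats pertaining to the boxer's record
}
```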

Creating an SDK to enable seamless integration using liblab

I intend to continue iteratively adding more data and ensuring it is up to date. To make it more easily accessible, I decided to create a Software Development Kit (SDK). In simple terms, think of this as a wrapper around the API that comes with pre-defined methods, reducing how much code you need to write to interact with the API.

In creating these SDKs, I ran into a tool: liblab, an SDK-as-a-service platform that enables you to instantly generate SDKs in multiple languages. The documentation was very detailed and easy to understand, and the process of creating the SDK was even simpler. I especially like that when I ran the command to build my SDKs, I got warnings with links to OpenAPI documentation to ensure my API correctly conformed to OpenAPI standards, since a non-conformant spec could result in a subpar SDK.

Here's a link to version 1 of the BoxingData API.

Feel free to reach out regarding any questions you have, data you want me to include and if you want me to send you the SDKs (Python and TypeScript for now). You can find me on LinkedIn and Twitter.

← Back to the liblab blog

SDK and API are two terms bandied around a lot when developers think about accessing services or other functionality. But what are they, and what are the differences between them? This post will teach you everything you need to know, and how SDKs can benefit your software development process!

What is an Application Programming Interface (API)?

An API, or application programming interface, is an interface to a system that application programmers can write code against. This reads like I'm just juggling the words around, so let's break down this statement.

There are many systems and services out there that you might want to integrate into your application. For example, you might want to use Stripe as a payment provider. As you program your application, you need to talk to these services, and these services define an interface you can use to talk to them - this interface lists all the things you can do with the service, and how to do them, such as what data you need to send or what you will get back from each call.

Application Programming Interfaces in the modern world

In the modern software development world we think of APIs as a way of making calls to a service over networks or the internet using standard protocols. Many services have a REST API - a set of web addresses, or URLs, you can call to do things. For example, a payment provider API will have endpoints you can call to create a payment or authorize a credit card. These endpoints will be called using HTTP - the same technology you use when browsing the internet, and can take or return data either in the URL, or attached to the call in what is called the body. These are called using verbs - well defined named actions, such as GET to get data, or POST to create data.

An API with 2 methods exposed, a GET and a POST on the /user endpoint

Calling APIs

Calling an API endpoint is referred to as making a request. The data that is returned is referred to as a response.

APIs can be called from any programming language, or from tools like Postman.

There are many protocols APIs can use, including REST, gRPC, GraphQL and SOAP. REST is the most common, and the one I'll be referencing in this article.

What is a Software Development Kit (SDK)?

A software development kit is a code library that implements some kind of functionality that you might want to use in your application.

What can SDKs be used for?

SDKs can implement a huge range of functionality via different software components - they can include visual components for desktop or mobile apps, they can interact with sensors for embedded apps, or provide frameworks to speed up application development.

SDKs for your APIs

Another use case for SDKs is to provide a wrapper around an API to make it easier to call from your application. These SDKs make APIs easier to use by converting the calls you would make into methods, and wrap the data you send and receive in objects.

An SDK with 2 methods exposed, a getUser method that wraps the GET on /user and a createUser that wraps the POST on /user

These SDKs can also manage things like authentication or retries for you. For example, if the call fails because the service is busy, the SDK can automatically retry after a defined delay.
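As a rough sketch of the kind of retry logic an SDK can hide from you (the retryable status codes and delays here are illustrative assumptions):

```typescript
// Retry a request when the service reports it is busy or rate limiting us
async function fetchWithRetry(url: string, retries = 3, delayMs = 500): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url);
    const retryable = response.status === 429 || response.status === 503;
    if (!retryable || attempt >= retries) return response;
    // Wait a little longer after each failed attempt before retrying
    await new Promise((resolve) => setTimeout(resolve, delayMs * (attempt + 1)));
  }
}
```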

For the rest of this article, when I refer to SDKs I'll be talking about SDKs that wrap APIs.

How Do APIs Work?

APIs work by exposing a set of endpoints that you can call to do things.

API Endpoints

These endpoints are called using verbs that roughly align to CRUD (create, read, update, delete) operations.

For example, you might have a user endpoint that handles the following verbs:

| Verb | Description |
|------|-------------|
| GET | Read a user |
| POST | Create a user |
| PUT | Update a user |
| DELETE | Delete a user |

API Data

Data is typically sent and returned as JSON - JavaScript Object Notation. For example, a user might be returned as:

```json
{
  "id": 42,
  "firstname": "Jim",
  "lastname": "Bennett",
  "email": "jimbobbennett@example.com"
}
```

How Do SDKs Work?

Software Development Kits work by wrapping the API calls in code that is easier for developers to use.

Creating a user with an API

For example, if I wanted to create a user using an API, then my code would need to do the following:

  1. Create an HTTP client - this would be code from a library that can make HTTP calls.
  2. Create a JSON object to represent my user.
  3. If my API requires authentication, I would need to get an access token and add it to the request.
  4. Send the JSON object to the API endpoint using the HTTP client.
  5. Get a response and see if it was successful or not.
  6. If it was successful, parse the response body from JSON to get the user Id.

This is a number of steps, and each one is error prone as there is no compiler or linter to help you. For example, if you sent the first name in a field in the JSON object called "first-name" and the API expected it to be "firstname" then the API call would fail at run time. If you forgot to add the access token, the API call would fail at run time. If you forgot to check the response, your code would continue to run and would fail at some point later on.
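As a sketch of what those steps look like in code (the /user endpoint, field names, and bearer-token authentication here are hypothetical):

```typescript
// Every step below is on you to get right - there is no compiler safety net
async function createUser(accessToken: string): Promise<number> {
  // 2. Build the JSON body by hand - a typo in a field name only fails at run time
  const body = JSON.stringify({
    firstname: "Jim",
    lastname: "Bennett",
    email: "jim@example.com",
  });

  // 1, 3, 4. Create the request, attach the access token, and send it
  const response = await fetch("https://api.example.com/user", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    body,
  });

  // 5. Check whether the call succeeded
  if (!response.ok) {
    throw new Error(`Create user failed with status ${response.status}`);
  }

  // 6. Parse the response body from JSON to get the user Id
  const user = await response.json();
  return user.id;
}
```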

Creating a user with an SDK

An SDK on the other hand would implement most of this for you. It would have a class or other strongly typed definition for the user object, and would handle things like authentication and error checking for you. To use an SDK you would:

  1. Create an instance of the SDK class.
  2. Set the authentication token once on the SDK so it can be used for all subsequent calls.
  3. Create an instance of the user object, and set the properties.
  4. Call the SDK method to create the user, passing in the user object.
  5. If this fails, an exception would be thrown, otherwise the user Id would be returned.
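The same operation through a hypothetical SDK is shorter and checked by the compiler (the UserClient class and its names are illustrative):

```typescript
import { UserClient, User } from "example-sdk"; // hypothetical SDK package

// 1, 2. Create the SDK instance and set the authentication token once
const client = new UserClient({ accessToken: "my-access-token" });

// 3. A strongly typed user object - field-name typos are caught as you type
const user: User = {
  firstname: "Jim",
  lastname: "Bennett",
  email: "jim@example.com",
};

// 4, 5. One method call; a failure surfaces as a thrown exception
const userId = await client.createUser(user);
```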

Benefits of Application Programming Interfaces

APIs are the perfect way to expose a service to your internal infrastructure or the world via the internet. For SaaS (Software-as-a-Service) companies like Auth0 and Stripe, their APIs provide the services that their customers use to integrate with their applications. Internally organizations can build microservices or other internal services that different teams can use to build applications. For example, a company might have a user service that manages users, and a product service that manages products. These services would expose APIs that other teams can use to build applications.

By using a standard protocol such as REST you are providing the most widely used interface - development tools like programming languages and many low/no-code technologies can call REST APIs. This means that your service can be used by any application, regardless of the technology it is written in.

Pretty much every service should have an API if it needs to be called from an application.

Benefits of Software Development Kits

SDKs on the other hand are software components that provide wrappers over APIs. They make it easier to call APIs without making mistakes by providing things like type safety and error handling. If you use the wrong name for a field on an object, your compiler or linter will tell you (probably as soon as you type it, with a red squiggly in your IDE). If you forget to add an access token, the SDK will throw an exception; and once set, the token is used for all calls rather than needing to be set every time. If the API call fails, the SDK can retry for you.

The benefit of an SDK is this hand-holding - it's a wrapper around the API that makes your life easier. An SDK takes nothing away from your API, developers can still call it directly if they are so inclined, but the SDK makes it substantially easier to use.

SDK vs API - Key Distinctions

As we compare SDK vs API, here are some key differences:

| API | SDK |
|-----|-----|
| Pass data as JSON | Pass data as strongly typed objects |
| Call endpoints defined using strings | Call methods or functions |
| No compiler or linter checking | Compiler and linter checking |
| No automatic retries | Automatic retries can be defined in the SDK |
| You have to read the docs to discover services or learn what data to pass or receive | Intellisense and documentation |
| Can be called from any programming language, as well as tools like Postman or low/no-code tools | Can only be called from compatible languages |

How and When to Choose Between SDKs or APIs?

As a provider of a service, there is no choice as such. You have to provide an API so software developers can call your service. Should you also provide an SDK as part of your development process? Well, yes - it improves the experience for your users and makes it easier for them to use your service. If you don't provide an SDK, then your users will have to write their own, and that's a lot of work for them. Keep your customers and users happy, right?

Conclusion

In this post we've looked at the differences between SDKs and APIs, and when you might use one over the other. We've seen that APIs are the interface to a service, and SDKs are wrappers around APIs that make them easier to use. We've also seen that APIs are the best way to expose a service to the world, and SDKs are the best way to make it easier to use an API.

Can I automate SDK generation?

The obvious question now is how do I create an SDK for my API. You could write one yourself, but why do that when liblab can automate SDK generation for you as part of your software development process? Check out liblab.com for more information and to sign up!

← Back to the liblab blog

In the ever-evolving software development landscape, selecting the right tools can make or break your project's success. With a plethora of options available, it can be overwhelming to choose the best one. In this blog post, we will discuss why liblab stands out as a superior choice over its competitors in various aspects, including user-friendliness, customization, support, security, reliability, cost, number of supported languages, and documentation.

User-Friendliness: Human Readability and IDE Compatibility

liblab prides itself on its user-friendly nature. The code generated by liblab looks like it was written by a human rather than a machine, making it easier to read and understand. Additionally, liblab's code is easily picked up by Integrated Development Environments (IDEs), providing users with helpful type hinting for a seamless development experience.

Customization

liblab offers unique customizations tailored to your business’ needs, with over 147 hours of investment put into refining these features. Regardless of your needs, liblab can be customized to provide a solution that meets your unique requirements and ensures the best possible development experience.

Support: A Comprehensive Solution

Unlike many competitors, liblab is more than just a product; it is a complete solution that includes both product and service. With a dedicated Technical Account Manager (TAM), liblab ensures that you meet your developer experience goals via SDKs and documentation.

Security: SOC2 Compliance and Best Practices

Security is paramount in today's digital world. liblab is SOC2 compliant and continuously incorporates best practices to ensure that your data and developers are protected at all times.

Reliability: On Call Support and Code Reliability

liblab offers on-call support with Service Level Agreements (SLAs) that guarantee a response to your requests within 12 hours. Furthermore, liblab generates tests for all its SDKs, ensuring code reliability and reducing the likelihood of unexpected issues.

Cost: Upfront Savings and Minimized Backend Costs

By choosing liblab, you can significantly reduce costs associated with building and maintaining your development infrastructure. liblab's upfront cost eliminates the need to hire a team and develop subject matter expertise over time, allowing your engineers to focus on higher ROI, mission-critical work.

Number of Supported Languages: Idiomatic and Quality-driven

By the end of the year, liblab plans to support six languages, with a focus on idiomatic patterns. This ensures that each language is of high quality and useful for developers. While competitors may offer more partially-maintained languages, liblab emphasizes quality first, with quantity following soon after.

Documentation: SDK Embedded Docs

liblab auto-generates powerful documentation that includes code examples from your SDKs, making it easier for developers to understand and use the software.

In conclusion, liblab outshines its competition in multiple aspects, making it the ideal choice for your development needs. With its user-friendly code, extensive customization, comprehensive support, strong security, impressive reliability, cost-effective pricing, commitment to quality-driven language support, and robust documentation, liblab is the clear winner in the race for the best development solution.

← Back to the liblab blog

liblab is excited to be sponsoring APIWorld 2023, where you can join thousands of global technical leaders, engineers, software architects, and executives at the world's largest and longest-running API & microservices event - in its 11th year!

The API world logo

This conference is running in-person at the Santa Clara Convention Center, and online a week later. We will be there, both in person and online, so come and meet us and learn about how liblab can generate better SDKs for your APIs!

Get a free SDK

liblab is currently in beta, but if you want to skip the queue and get early access, talk to us at APIWorld. We'll be granting early access to everyone at the event.

We'll also be on hand to review API specs, and generate a high quality, human readable SDK for your API. You can then see how your developer experience is improved by a good SDK. Just leave us your email address and a link to your API spec at our booth, and we'll send you a copy of your SDK.

Learn from our experts

On our booth you will find some of the world's finest (in our opinion) OpenAPI experts, who will be able to discuss your API and help you to produce the best API spec possible that will allow you to quickly generate high quality SDKs. We can talk you through some common problems, as well as best practices for API design. If you want a sneak preview of our expertise, check out our why your OpenAPI spec sucks post from Sharon Pikovski.

We also want to learn from you! We'll give you the chance to vote on your favorite SDK languages, and share your stories of the best and worst SDKs you've used. If you're up for it we'd love to share your tales on our YouTube channel.

I'll also be giving a demo-heavy session called From APIs to SDKs: Elevating your Developer Experience with automated SDK generation where I will talk through why SDKs are better for your customers than accessing APIs directly (yup, they really are). I'll also show how you can automate the process of generating SDKs by integrating liblab into your CI/CD pipelines. There will be plenty of code on show and resources to help you re-create the demos yourself. I'll be speaking on Thursday 26th October at 2:30pm in the Expo Hall.

A picture of Jim on a stage at a conference standing next to a podium with a laptop on it. Jim is wearing a black t-shirt and is working on the laptop

More details on the APIWorld session page.

Meet the team

We are busy preparing for our presence in the expo hall at the event. Literally - half of my home office is full of booth bits 😁. We'll be there to talk APIs and SDKs, with a load of demos showing how you can use liblab to quickly generate SDKs, and implement this SDK generation into your CI/CD pipelines.

As expected, we'll have some swag to give away - in our case, edible swag! I'm personally not a fan of a lot of conference swag; we've all had our fill of cheap pens, USB cables that break, notebooks, and terrible mugs, and a lot of it ends up in landfill. To be more sustainable, we'd rather give you something that leaves a more pleasant taste in your mouth, literally. Come and see us to find out more!

We'll also have some stickers, after all, who doesn't love stickers?

2 liblab stickers on a wooden table. One has the liblab logo, a simple line drawn llama face with curly braces for the side of the head, and liblab.com, the other has a version of the liblab llama logo with hearts for eyes and the caption love your SDK

We'll be available on a virtual booth during the online event, so if you can't make it to Santa Clara, you can still come and meet us online. No stickers or edible swag at the virtual event though, sorry!

Meet the llama

You may also get the chance to meet a real life* llama 🦙. Snap a pic with our llama and tweet it with the hashtag #liblabLlama and tag @liblaber to get a special sticker!

A sticker with a llama mascot and the text I met the liblab llama

* This is not actually true, it's Sean in a llama costume.

See you there - on us!

We have a number of open tickets to give away for the in-person and virtual events with access to the expo hall and some of the sessions. If you want to come meet us, then sign up for our beta and we'll send you a ticket. We'll be giving away tickets on a first come, first served basis, so don't delay!

Otherwise, head to apiworld.co to get your tickets now. We can't wait to meet you!

← Back to the liblab blog

Our mission is to empower developers with cutting-edge tools and resources, and at the core of this mission is the assurance that their data is secure. The significance of data security cannot be overstated, and this is why few milestones are as transformative as achieving System and Organization Controls (SOC) compliance.

liblab has successfully completed a comprehensive SOC 2 Type II audit, conducted by Sensiba LLP, a leader in audit services. We are thrilled to share the significance of this accomplishment and why it is crucial not only for our organization but also for our customers. In the short read ahead we’ll discuss the importance of attaining SOC 2 certification, how it impacts our operations, and most importantly, how it benefits our valued customers.

SOC 2 compliance logo

The Road to SOC 2 Compliance

SOC 2 is a rigorous set of standards developed by the American Institute of Certified Public Accountants (AICPA) to assess the security, availability, processing integrity, confidentiality, and privacy of customer data within service organizations. It is a comprehensive framework that demands the highest level of commitment to data security and privacy. Achieving SOC 2 compliance was not a straightforward task for liblab. Here are some of the challenges we encountered along the way:

Complex Documentation and Policies

The foundation of SOC 2 compliance lies in meticulous documentation and well-defined policies and procedures. Developing comprehensive documentation, including data security policies, incident response plans, and access control procedures, can be a time-consuming and complex process. We had to ensure that our documentation was not only thorough but also aligned with the stringent requirements of SOC 2.

Resource Allocation

Achieving SOC 2 compliance requires a substantial allocation of resources, both in terms of time and personnel. We had to designate a dedicated team to work on compliance-related tasks, diverting their efforts from other critical projects. This reallocation of resources was necessary to ensure the successful completion of the SOC 2 audit process.

Continuous Monitoring

SOC 2 compliance is not a one-time achievement but an ongoing commitment. Continuous monitoring and assessment of controls and processes are required to maintain compliance. This means that we needed to establish a system for ongoing monitoring and assessment, which added to the complexity of compliance efforts.

Vendor Compliance

As part of our operations, we engage with third-party vendors and service providers. Ensuring that these vendors also adhere to the rigorous standards of SOC 2 was a challenge. We had to assess their security practices, contractual agreements, and data handling processes to ensure alignment with our commitment to data security.

The Importance of SOC 2 Certification for liblab

Now that we have discussed some of the difficulties we faced in achieving SOC 2 compliance, let's delve into why this certification is a pivotal milestone for liblab and how it profoundly impacts both our operations and our customers.

Elevating Customer Trust

At liblab, our customers rely on our SDK generation service to build secure and reliable software solutions. Achieving SOC 2 compliance serves as a badge of trust for our customers, assuring them that we have robust controls and processes in place to protect their sensitive data. In an era where data breaches and cyber threats are all too common, this trust factor is invaluable.

Regulatory Compliance

Our SDK generation service often involves handling customer data, which may be subject to various data protection laws and regulations, such as GDPR (General Data Protection Regulation) in Europe or CCPA (California Consumer Privacy Act) in the United States. SOC 2 compliance aligns with many of these regulations, ensuring that we are in compliance with the law. This not only mitigates legal risks but also avoids potential fines and reputational damage stemming from non-compliance.

Competitive Advantage

In a competitive marketplace, where organizations are increasingly concerned about data security, achieving SOC 2 compliance provides us with a distinct competitive advantage. It positions liblab as a trusted and secure partner, setting us apart from competitors who may not have undergone such rigorous audits. This certification becomes a compelling factor when potential customers are evaluating their options.

Strengthening Internal Processes

The process of achieving SOC 2 compliance necessitates the establishment of robust internal processes and controls. We had to identify vulnerabilities, implement security measures, and develop an incident response plan. Going through this process not only prepared us for the certification audit but also enhanced our overall security posture. Continuous monitoring and improvement of these processes further strengthen the protection of customer data and reduce the risk of data breaches.

Why SOC 2 Compliance Matters to Our Customers

For our customers, who rely on our SDK generation products to build secure software applications, data security is of paramount importance. Our SOC 2 compliance reassures them that their data is handled with the highest level of security.

Enhanced Data Security

The most direct benefit of SOC 2 certification for our customers is enhanced data security. By achieving this certification, we are demonstrating our dedication to safeguarding their data from potential threats and breaches. Customers can trust that their data is protected when they use our developer products.

Data Privacy Assurance

In addition to security, SOC 2 compliance addresses data privacy concerns. It requires us to have clear privacy policies and practices to protect customer data and ensure compliance with data protection regulations. Customers can be confident that their privacy rights are respected and upheld when they entrust us with their data.

Reduced Risk Exposure

Attaining SOC 2 compliance reduces the risk of data breaches and security incidents. Our customers benefit from our proactive approach to data security, knowing that we have robust controls and processes in place to prevent, detect, and respond to security threats. This reduces the likelihood of data breaches that could lead to data loss or exposure.

Business Continuity

Having a well-defined incident response plan as part of our SOC 2 compliance ensures that we are prepared to handle security incidents effectively. This not only protects our customers' data but also helps maintain business continuity. Customers can rely on our SDK generation products without disruption, even in the face of security challenges.

Vendor Trust

Our customers often rely on a network of vendors and partners to build their software solutions. SOC 2 compliance extends to vendor management, requiring us to ensure that our vendors meet the same stringent security standards we do. This provides an additional layer of assurance to our customers, knowing that the entire ecosystem they engage with maintains high data security standards.

Conclusion

Achieving SOC 2 compliance has been a challenging journey for liblab, but it is one that we embrace wholeheartedly. It serves as a testament to our commitment to data security and privacy. For our customers, it signifies a seal of trust, enhanced data security, privacy assurance, reduced risk exposure, and the assurance of business continuity. Maintaining our SOC 2 certification remains a cornerstone of our promise to secure the future for our customers and our developer tools startup. As we continue to innovate and provide cutting-edge SDK generation solutions, information security compliance remains at the core of our promise to safeguard data for liblab and our valued customers.

← Back to the liblab blog

TypeScript, a statically typed superset of JavaScript, has become a go-to language for many developers, particularly when building SDKs that interact with web APIs. TypeScript's powerful type system aids in writing cleaner, more reliable code, ultimately making your SDK more maintainable.

In this blog post, we'll provide a focused exploration of how TypeScript's type system can be harnessed to better manage API routes within your SDK. This post is going to stay focused and concise: we'll be looking solely at routing tips, intentionally eschewing other aspects of SDK authoring such as architecture, data structures, and handling relations. Our SDK will be simple: it will just list a user or users. These tips will help your route definitions be less error prone and easier for other engineers to read.

At the end, we’ll cover the limitations of the tips in this post, what’s missing, and one way in which you can avoid dealing with having to author these types altogether.

Let’s get started.

Alias your types

Type aliasing is important! It can sometimes be overlooked in TypeScript, but aliases are an extremely powerful documentation and code maintenance tool. Type aliases provide additional context as to why something is a string or a number. As an added bonus, if you alias your types and make a different decision (such as shifting from a numeric ID to a GUID) down the road, you can change the underlying type in one place. The compiler will then call out most of the areas in which your code needs to change.

Here are a couple of examples that we’ll build upon later on:

```typescript
type NoArgs = undefined;
type UserId = number;
type UserName = string;
```

Note that UserId is a number here. That may not always be the case. If it changes, finding UserId is an easier task than trying to track down which references to number are relevant for your logic change.

Aliasing undefined with NoArgs might seem silly at first, but keep in mind that it’s conveying some extra meaning. It indicates that we specifically do not want arguments when we use it. It’s a way of documenting your code without a comment. Ditto for UserName. It’s unlikely to change types in the future, but using a type alias means that we know what it means, and that’s helpful.

Note: there’s a subtlety here that’s worth calling out. NoArgs is a type here, while undefined is a value. NoArgs is not the value undefined, but is a type whose only acceptable value is undefined. It’s a subtle difference, but it means you can’t do something like const args = NoArgs. Instead, you would have to do something along these lines: const args: NoArgs = undefined.

Statically define your data structures wherever possible

This is similar to the above, and is generally accepted practice. It essentially boils down to avoiding the any keyword and avoiding turning everything into a plain object ({[key: string]: any}). In this simple SDK, this means only the following:

```typescript
type User = {
  id: UserId;
  name: UserName;
  // other fields could go here
};
```

When we need a User or an array of Users, our SDK engineers will now have all the context they need at design-time. Types such as UserName can be more complex as well (you can use template literal types, for example), allowing you to further constrain your types and make it more difficult to introduce bugs. The intricacies of typing data structures are a much larger subject, so we'll stick to simple types here.

Make your routes and arguments more resistant to typos

You’ve likely done it before: you meant to call the users endpoint and accidentally typed uesrs. You don’t find out until runtime that the route is wrong, and now you’re tracking it down. Or maybe you can’t remember if you’re supposed to be getting name or userName from the response body and you’re either consulting the spec, curling, or opening Postman to get some real data. Keeping your routes defined in one place means you only need to consult the API spec once (or perhaps not at all if you follow the tip at the end of the post) in order to know what your types are. Your SDK maintainers should only need to go to one place to understand the routes and their arguments:

```typescript
type Routes = {
  'users': NoArgs;
  'users/:userId:': UserId;
};
```

Note that the pattern :argument: was used here, but you can use whatever is best for the libraries/helper methods that you already have. In addition, this API currently only has GET endpoints with no query parameters, so we're keeping the types on the simple side. Feel free to declare some intermediate types that clearly separate out route, query, and body parameters. Then your function(s) that actually call API endpoints will know what to do with said parameters when it comes time to actually call an endpoint.
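As a sketch of what such intermediate types might look like (illustrative only; nothing in this simple API needs them yet):

```typescript
// One possible shape for separating route, query, and body parameters
type EndpointParams<PathArgs> = {
  path: PathArgs;                 // parameters embedded in the URL path
  query?: Record<string, string>; // ?key=value parameters
  body?: unknown;                 // request body, for future POST/PUT routes
};

type RoutesWithKinds = {
  'users': EndpointParams<NoArgs>;
  'users/:userId:': EndpointParams<UserId>;
};
```

This is a good segue into the next point: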

Use generics to make code reuse easy

It’s hard to overstate how powerful generics can be when it comes to maintaining type safety while still allowing code reuse. It’s easy to slap an any on a return value and just cast your data in your calling function, but that’s quite risky, as it prevents TypeScript from verifying that the function call is safe. It also makes code harder to understand, as there’s missing context. Let’s take a look at a couple of types that can help out for our example.

```typescript
type RouteArgs<T extends keyof Routes> = {
  route: T;
  params: Routes[T];
};

const callEndpoint = <Route extends keyof Routes, ExpectedReturn>(args: RouteArgs<Route>): ExpectedReturn => {
  // your client code goes here (axios, fetch, etc.). Include any error handling.

  // Don't do this, use a type guard to verify that the data is correct!
  return [{ id: 1, name: "user1" }, { id: 2, name: "user2" }] as unknown as ExpectedReturn;
};
```

Note the T extends keyof Routes in our generic parameter for the type RouteArgs. This builds upon the Routes type that we used earlier, making it impossible to use any string that is not already defined as a route when you’re writing a function that includes a parameter of this type. This also enables you to use Routes[T], meaning that you don’t have to know the specific type at design-time. You get type safety for all of your calling functions.

Note that we also do not assign a type alias to the type of callEndpoint. This type is intended to only be used once in this code base. If you are defining multiple different callEndpoint functions (for example, if you want to separate out logic for each HTTP verb), aliasing your types to make sure that no new errors are being introduced would be highly recommended.

Note that type guards are mentioned in the comment. This code lives at the edge of type safety. You will never be 100% sure that the data that comes back from your API endpoint is the structure you expect. That’s where type guards come in. Make sure that you’re running type guards against these return types. Type guards are outside of the scope of this post, but guarding for concrete types in a generic function can be complex and/or tedious. Depending on your needs, you may choose to use an unsafe type cast similar to the example and put the responsibility of calling the type guard on the calling function. We won’t cover strategies for ensuring these types are correct in this post, but this is an area you should study carefully.
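Type guards deserve a post of their own, but as a minimal sketch, guards for the User type defined earlier might look like this:

```typescript
// Runtime check that unknown data matches the User shape
const isUser = (data: unknown): data is User =>
  typeof data === "object" &&
  data !== null &&
  typeof (data as User).id === "number" &&
  typeof (data as User).name === "string";

// Guard for the User[] shape that the users route returns
const isUserArray = (data: unknown): data is User[] =>
  Array.isArray(data) && data.every(isUser);
```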

Tying it all together

What do we get for our work? Let’s take a look at the code that an SDK maintainer might write to use the types that we’ve defined:

```typescript
const getUsers = () => {
  const users: User[] = callEndpoint({ route: 'users', params: undefined });

  return users;
};
```

Hopefully it’s clear that we’ve gotten some value out of this. This call is entirely type safe (shown below), and is quite concise and easy to read.

Note that we also don’t have to specify the generic types here. TypeScript is inferring the types for us. If we make a mistake, the code won’t compile! Here are a couple of examples of bad calls and their corresponding errors:

```typescript
const getUsers = () => {
  const users: User[] = callEndpoint({ route: 'user', params: undefined });
  // Type '"user"' is not assignable to type 'keyof Routes'. Did you mean '"users"'?
  return users;
};
```

Look at that helpful error message! Not only does it tell us we’re wrong, it suggests what might be right.

What if we try to pass an argument to this route? If you remember, we defined it to explicitly accept no arguments.

```typescript
const getUsers = () => {
  const users: User[] = callEndpoint({ route: 'users', params: 'someUserName' });
  // Type 'string' is not assignable to type 'undefined'.(2322)
  // {file and line number go here}: The expected type comes from property 'params' which is declared here on type 'RouteArgs<"users">'
  return users;
};
```

This is also helpful, though there are some limitations. TypeScript will not pass through the alias that we defined (NoArgs), unfortunately. However, it does tell us exactly where the source of the error is, allowing an engineer to trace exactly why a string won't work. The engineer will then see that NoArgs type and have a clear understanding of what went wrong.

What’s missing/limitations?

The examples here could still be improved upon. Note that ExpectedReturn is part of callEndpoint. This means that an SDK maintainer would need to have some knowledge of which type to pick (if not the specific structure). Why not include this information in our Routes type? That may make a good exercise for the reader.

As previously mentioned, type aliases do not get passed through to compiler errors. There are some workarounds, however.

Depending on how you’re handling various verbs, your type guards/generic functions can get quite complex. This won’t have an impact on those maintaining your SDK, but there can be an up-front cost to defining these types. It’s up to you to decide whether to pay that cost.

What was that about avoiding all this?

Hopefully with the tips in this article, you feel more confident about making maintainable SDKs. However, wouldn’t it be nice if you just didn’t have to develop an SDK at all? After all, you have an API spec; and that should be enough to generate the code, right? Fortunately, the answer is yes, and liblab offers a solution to do just that. If you don’t want to think about challenges like error handling and maintainability for your SDK, liblab’s SDK generation tools may be able to help you.

← Back to the liblab blog

Introduction

When working on applications and systems, we usually rely on APIs to enable integrations between services that make up our system.

The purpose of this article is to provide an understanding of some important API metrics that are used to measure an API's performance. For each of those metrics, I will also touch on some factors affecting them and ways to improve them, which will in turn enhance your API's performance.

Overview of the key API metrics

To cover API performance metrics in a comprehensive manner, I divided this article into two parts. In this first part, I will talk about three key metrics: Response Time, Latency, and Throughput.

Response Time

Response time is basically the time it takes for an API to respond to a request from a client application. Response time gives us a measure of our application's responsiveness, which in turn has an impact on the user's experience.

Factors Affecting Response Time

  • Network latency is simply the delay in network connection between client applications and your API. Congestion and increased physical distance between servers in a network are examples of situations that impact network latency.
  • If you make use of external or third-party services, then the overall response times of your API will also depend on the response times of those services.
  • The response times of your API can also be affected by slow or poorly written database queries.

Monitoring Your API's Response Time

Monitoring and analyzing response time can help identify bottlenecks, optimize API performance, and ensure service level agreements (SLAs) are met.

There are lots of tools out there that can be used to monitor your API's response time. Here are some popular ones:

  • Apache JMeter
  • Pingdom
  • Datadog
  • New Relic

Improving Your API's Response Time

There are several approaches you can take to improve the response time of your API:

  • Making use of a load balancer
  • Optimizing your code to reduce unnecessary computations, database queries, or network requests
  • Implementing caching mechanisms
  • Making use of content delivery networks (CDNs)

Throughput

Throughput is simply the number of requests an API can handle within a given time period. It is an important metric for measuring an API's performance, and is usually measured in requests per second (RPS).

An API with higher throughput simply means the system can handle a larger volume of requests, which ensures optimal performance even during peak API usage periods.

Monitoring Throughput

Monitoring throughput in the context of API performance involves analyzing and optimizing various factors such as:

  • Server capacity
  • Network bandwidth
  • Request processing time

Improving Your API's Throughput

By employing techniques such as horizontal scaling, load balancing, and asynchronous processing, you can ensure a higher throughput which will significantly improve your API's performance.

Latency

Latency is another key performance metric for analyzing the performance of an API. It's a measure of the time taken for a client to send a request and get back a response from an API server.

Factors affecting API Latency

Some known factors that affect latency include:

  • API responses with large data sets
  • Network congestion
  • Inefficient or poorly written code

How To Minimize Latency

It is very important to reduce latency, as higher latency can lead to sluggish user experiences, increased waiting times, and reduced overall performance. Some ways to reduce latency include:

  • Employing caching mechanisms
  • Applying data compression techniques
  • Returning data in chunks
  • Optimizing network protocols

Request Rate

Request Rate is an API performance metric that measures the rate or frequency at which requests are being made to an API within a specific time frame.

It provides insights into the load or demand placed on the API and helps gauge its capacity to handle incoming requests.

By monitoring request rate, API providers can identify usage patterns, peak periods, and potential scalability challenges, which helps them anticipate traffic spikes and plan resource allocation accordingly.

Monitoring API Request Rate

Request rate is typically measured over specific time intervals, such as:

  • Requests per second (RPS)
  • Requests per minute (RPM)
  • Requests per hour (RPH)

The different measurement intervals determine the granularity of the metric and allow you to analyze request patterns over different time periods.

There are several tools available to measure and analyze request rates for your API. Here are some popular options:

  • AWS CloudWatch
  • Google Cloud Monitoring
  • Grafana
  • Datadog
  • Prometheus

Optimizing For Higher Request Rates

To be able to handle increasing request rates during peak periods or as a result of high usage of some particular business features, you can consider implementing the following techniques:

| Technique | Description |
|-----------|-------------|
| Horizontal scaling | Distribute the load across multiple servers or instances. By adding more servers, or utilizing cloud-based solutions that provide on-demand scaling of resources, you can handle a higher volume of requests by leveraging the collective resources of multiple machines. |
| Asynchronous processing | Identify time-consuming or resource-intensive operations that can be performed asynchronously, freeing up resources to handle more incoming requests. This prevents blocking and allows your API to handle a higher request rate when such operations are offloaded to background tasks or queues. |
| Caching | Caching can significantly improve response times and reduce the load on your API, especially for static or infrequently changing data. Utilizing caching techniques like in-memory caches or CDNs can help your API efficiently handle higher request rates. |

CPU Utilization

CPU utilization is another important metric that measures the percentage of CPU resources used during the processing of an API request. It provides insights into the efficiency of resource allocation and can be a key indicator of API performance.

Factors that can impact CPU usage during API processing include inefficient code implementation, computationally intensive operations, and the presence of resource-intensive tasks or algorithms.

Monitoring CPU Utilization

To effectively monitor CPU utilization, developers can employ various tools to gain insights into CPU usage. Some examples are New Relic, Datadog, or Prometheus.

Ways To Improve CPU Utilization

Below are some ways to reduce CPU usage within your API:

| Technique | Description |
|-----------|-------------|
| Efficient algorithm design | Analyze your API code for computational bottlenecks and optimize them using efficient algorithms and data structures. This reduces CPU usage for operations that would otherwise be CPU intensive. |
| Throttling & rate limiting | Implement throttling mechanisms or rate limiters to control request rates and the maximum number of API calls that can be made within a specific time. This in turn prevents overload on the CPU. |
| Load balancing | By making use of a load balancer, you can distribute incoming requests across multiple servers, effectively distributing the CPU load. |

Memory Utilization

Memory utilization refers to the amount of system memory (RAM) used by the API during its operation. Efficient memory management is crucial for optimal performance. Excessive memory usage can lead to increased response times, resource contention, and even system instability.

Ways To Improve Memory Utilization

Here are some key points to consider to improve memory usage within your API:

| Technique | Description |
|-----------|-------------|
| Caching | Employ in-memory caching mechanisms to store frequently accessed data or computations. This reduces the need for repeated processing and improves response times by serving precomputed results from memory. |
| Data pagination | When dealing with large datasets, implement pagination rather than loading the entire dataset into memory: fetch and process data in smaller chunks, or stream it to the client as it becomes available. This reduces memory pressure and enables efficient processing of large datasets (see the sketch after this table). |
| Memory profiling tools | Utilize memory profiling tools to identify memory bottlenecks and areas of high memory consumption within your API. These tools can help you pinpoint specific code segments or data structures that contribute to excessive memory usage. |
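The data pagination row above is the one that most often needs code. Here is a minimal sketch of limit/offset pagination; the fetchUsersPage helper is hypothetical and stands in for your own database query:

```typescript
type User = { id: number; name: string };

// Hypothetical data-access helper - replace with your own database query
declare function fetchUsersPage(opts: { limit: number; offset: number }): Promise<User[]>;

// Process a large dataset one page at a time so it never sits fully in memory
async function* streamAllUsers(pageSize = 100): AsyncGenerator<User> {
  for (let offset = 0; ; offset += pageSize) {
    const page = await fetchUsersPage({ limit: pageSize, offset });
    if (page.length === 0) return; // no more data
    yield* page; // hand users to the caller without buffering the full set
  }
}
```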

Conclusion

In this article, we discussed the importance of measuring API performance and some key metrics that tell us how well our API is performing.

Improving API performance, as well as building SDKs for an API, are some of the many problems that API developers face. Here at liblab, we offer a seamless approach to building robust SDKs from scratch by carefully examining your API specifications.

By leveraging services like liblab, API providers can generate SDKs for their APIs, further enhancing their developer experience and accelerating the integration process with their APIs.

← Back to the liblab blog

Introduction to REST API Versioning

We all understand the significance of APIs in software development, as they facilitate data sharing and communication across various software systems. Ensuring their proper functioning is paramount. Implementing proven conventions in your API can greatly enhance its scalability and maintainability. This post delves into versioning techniques and how leveraging existing tools can simplify the process.

Versioning is a key concept that enables your applications to maintain backward compatibility as your API evolves. Without proper versioning, any modifications made to your API could cause unexpected errors and disruptions in current client applications. REST API versioning allows you to introduce updates while ensuring earlier versions continue to function correctly.

Common Versioning Techniques

To implement versioning in your API, here are three popular methods:

  1. URL-Based Versioning: In this method, the version number is incorporated into the URL path. For instance, Version 1 of the API is represented by https://api.example.com/v1/resource.
  2. Query Parameter Versioning: This technique involves appending the version number as a query parameter in the API endpoint. For example, https://api.example.com/resource?version=1.
  3. Header-Based Versioning: With this approach, the version number is specified in a unique header field, such as Accept-Version or X-API-Version.
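As a quick illustration, here is what the same request looks like in each style (these URLs and the header name are generic examples):

```typescript
// 1. URL-based versioning - the version lives in the URL path
await fetch("https://api.example.com/v1/resource");

// 2. Query parameter versioning - the version is appended as a query parameter
await fetch("https://api.example.com/resource?version=1");

// 3. Header-based versioning - the version travels in a custom header field
await fetch("https://api.example.com/resource", {
  headers: { "X-API-Version": "1" },
});
```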

There is no unanimous consensus on the best approach, as each has its advantages. When choosing, consider the following:

| Versioning type | Pros | Cons |
|-----------------|------|------|
| URL-based | Easy to shut down obsolete versions • Facilitates separation of authentication concerns for different versions • Compatible with most frameworks • Version is always clear and obvious | Requires adoption from the start, otherwise it necessitates code refactoring • Difficulty in adding patch versions |
| Query parameter | Easy to implement in existing APIs • Allows for the addition of patch versions • Provides control over the default version provided to clients | Version might be optional • Challenging to separate authentication concerns • Harder to retire or deactivate obsolete versions • Potential confusion distinguishing between data version and API version |
| Header-based | Easy to implement in existing APIs • Allows for the addition of patch versions • Provides control over the default version provided to clients | Version might be optional • Challenging to separate authentication concerns • Harder to retire or deactivate obsolete versions |

Now that you've selected a versioning technique, do you need to update all client applications every time a new version is deployed?

Ideally, keeping client applications up to date ensures optimal API utilization and minimizes issues. However, this doesn't have to be a complicated process if you employ the right tools: SDKs.

How SDKs Assist Client Applications in Adapting to Available Versions

SDKs (Software Development Kits) are libraries that handle API integration, including versioning, on behalf of developers. They offer the following benefits:

  1. Version Management and Compatibility: SDKs allow you to select the API version you want to use, simplifying the process of switching between versions.
  2. Handling Different API Versions: SDKs provide a unified interface for client developers, abstracting the differences between API versions. Regardless of the underlying version, developers can interact with the SDK using standardized techniques and models.
  3. Error Handling: Some versions might also handle errors differently, and SDKs will cover the required changes out of the box
  4. Compile-time errors: SDKs will also present you with compile-time errors when a major change has occurred between the versions, allowing you to save time on testing each change manually.
  5. Automatic updates: And last, but not least, if you are using an SDK provider, you don’t even have to worry about updating the SDK yourself, as all updates will be covered automatically.
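For example, selecting a version through a hypothetical generated SDK could be a single constructor option (the ExampleClient class and option names below are illustrative, not a real liblab SDK):

```typescript
import { ExampleClient } from "example-sdk"; // hypothetical SDK package

// Pin the API version once; the rest of your code stays the same when you upgrade
const client = new ExampleClient({ apiVersion: "v2" });

// The SDK maps this call to the right endpoint for the selected version
const resource = await client.getResource("some-id");
```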

To learn more about SDKs, check out this article on how SDKs benefit API management.

"You might wonder if building and maintaining an SDK is more challenging than adapting to newer API versions. After all, you would need to update the SDK to accommodate changes as well."

This is where liblab comes in. They offer an impressive suite of tools to effortlessly build robust and comprehensive SDKs from scratch. By analyzing your API spec, liblab can generate SDKs tailored to your API's needs. These SDKs are flexible and include all the necessary components out of the box.

If you love liblab, but your company hesitates to invest in new tools, check out this article on how to convince management to invest in the tools you need.

Conclusion

Properly versioning your REST API is crucial for its evolution and long-term stability. By utilizing versioning techniques such as URL-based, query parameter-based, or header-based approaches, you can manage changes while ensuring backward compatibility. Additionally, SDKs can assist client applications by abstracting API complexities, managing different versions, and providing consistent interfaces. By following best practices in REST API versioning, you can facilitate smoother transitions, enhance developer experience, and maintain strong relationships with your API consumers.

← Back to the liblab blog

As engineers, we often prioritize scalability, elegance, and efficiency in finding solutions. We despise monotony, tedious tasks, and boring work that consume our time. It becomes especially frustrating when we identify a tool that can reduce effort and produce better results, only to be turned down by management. This article will guide you on how to effectively communicate your tool request to management, increasing the chances of getting a 'Yes' for your proposal.

You’re Not Asking The Right Way

You may find it obvious why the tool you recommend is a great investment. It saves time, improves results, and allows you to focus on high-return activities for your organization. However, management doesn't always grasp these benefits. Why is that?

The reason lies in how you present your case. Management is primarily concerned with the organization's Profit and Loss (P&L). Their role is to maximize revenue while minimizing costs. When you propose a tool, simply highlighting time savings or improved output doesn't necessarily resonate; these benefits are one step removed from what truly matters to management. Unless you can quantify the impact of the tool on the P&L, management will perceive it as an "additional cost."

Show Impact on P&L

So, how can we translate the "additional cost" into improving the P&L? It's actually quite straightforward. While there will be an initial cost associated with the tool, evaluating it in isolation is not a fair assessment of the P&L impact. To present the impact properly, you need to calculate the additional revenue generated by the tool and the opportunity cost of not having it. Subtracting the tool cost from the sum of the additional revenue (Rev_gen) and the opportunity cost (Opp_cost) quantifies the impact on the P&L:

PL_impact = Rev_gen - (Tool_cost - Opp_cost)

Provided you can demonstrate that investing in your tool will result in a positive PL_impact (> 0), management should find it an easy decision to support.

How To Calculate P&L Impact

Now that we know what needs to be quantified (Rev_gen and Opp_cost), let's discuss the general methodology used to quantify them.

  1. Identify the input parameters required to calculate Rev_gen and Opp_cost.
    • Write down the expressions that dictate the calculations for each parameter. Assumptions are acceptable and often expected.
  2. Build a simple model (a spreadsheet, no LLM required 😉) to calculate Rev_gen and Opp_cost based on the input parameters.
  3. Present the calculations in a straightforward and simple manner. It's crucial to show all the math. It should be laid out so simply that even a middle schooler can follow the logic.

Example - Calculating Opportunity Cost

In the following example, we will show how we at liblab calculate the opportunity cost associated with manually building and maintaining SDKs for an API. This lets our customers show that leveraging liblab's products and services costs a fraction of tackling the problem themselves (we assume a Rev_gen of 0 for simplicity's sake):

Opp_cost = APIs * Num_SDK * Cost_build+maintain_per_SDK
  1. Input Parameters

    1. APIs = Number of APIs your organization needs SDKs for
    2. Num_SDK: Number of SDK languages you need to support for your APIs
    3. Cost_build+maintain_per_SDK can be further broken down into:
      1. Cost_build/SDK: Engineering cost per SDK build
        1. Cost ($) / Hour derived from engineering salary
        2. Hours / SDK Build (assumption based on your organization's API complexity)
      2. Cost_maintain/SDK: Engineering cost per change and number of changes
        1. Cost ($) / Hour derived from engineering salary
        2. Hours / SDK update (assumption based on your organization's API complexity)
        3. Update Frequency (assumption on how often your API changes)
  2. Model calculating Opp_cost based on input parameters:

    1. The reason we put this in a spreadsheet model instead of just writing out the equations and answers is that it lets management adjust an input and see the end result if they don't agree with one of your assumptions. Most of the time this won't change the decision on the tool, but it will help address any objections they may have.
    2. See the model's governing equations below:
    Opp_cost = APIs * Num_SDK * (Cost_build + Cost_maintain) per SDK
    Cost_build/SDK = Cost_hr * Hrs_build
    Cost_maintain/SDK = Cost_hr * Hrs_maintain * Update_frequency
  3. Present the Math Simply - Below is a screenshot of our liblab Investment Calculator

    1. This calculator depicts the investment needed by an average liblab customer if they are to build and maintain SDKs across 6 languages themselves for a single API.

SDK Investment Calculator

As you can see, the cost of building and maintaining SDKs for our average customer exceeds $100K, making liblab's products and services a fraction of the opportunity cost, with a strong return and a positive impact on the P&L.
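
To make the model concrete, here is a minimal sketch of the same governing equations in TypeScript. Every number below is an illustrative assumption, not liblab's actual figure; a spreadsheet remains the friendlier artifact to put in front of management.

// A minimal sketch of the opportunity-cost model above.
// All inputs are illustrative assumptions.
interface SdkCostInputs {
  apis: number;            // APIs = number of APIs needing SDKs
  numSdks: number;         // Num_SDK = SDK languages to support
  costPerHour: number;     // Cost ($) / Hour, derived from engineering salary
  hoursToBuild: number;    // Hours / SDK build
  hoursToMaintain: number; // Hours per maintenance update
  updatesPerYear: number;  // Update frequency
}

function opportunityCost(i: SdkCostInputs): number {
  const costBuildPerSdk = i.costPerHour * i.hoursToBuild;
  const costMaintainPerSdk =
    i.costPerHour * i.hoursToMaintain * i.updatesPerYear;
  return i.apis * i.numSdks * (costBuildPerSdk + costMaintainPerSdk);
}

// Example: one API, 6 SDK languages, $100/hr engineering cost.
console.log(
  opportunityCost({
    apis: 1,
    numSdks: 6,
    costPerHour: 100,
    hoursToBuild: 120,
    hoursToMaintain: 16,
    updatesPerYear: 4,
  }),
); // 6 * (12,000 + 6,400) = 110,400, i.e. over $100K for a single API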

Conclusion

To obtain a "Yes" from management for the tools you need, demonstrate through a model how they will have a positive impact on the P&L.

At liblab, we have created a calculator that helps developers articulate the cost savings of using our services to management. Not to mention the additional impact a better developer experience can have on their business (not quantified in our calculator).

If you're interested in receiving a free estimate of your annual API SDK investment, reach out to us via our contact form and mention "Free SDK Investment Assessment" in the message field. We'll respond promptly and provide you with a customized report breakdown, similar to the example above, tailored to your organization.

← Back to the liblab blog

Software development can be a complex and daunting field, especially for those who are new to it. The tech world’s jargon and acronyms can be confusing to newcomers. You may have heard the term “SDK.” But what exactly is an SDK, and why is it important for software development?

More specifically, how can an SDK, when applied to an API, create huge benefits for your API management?

In this article, we’ll take a closer look at what an SDK is and why it’s an essential tool for developers looking to create high-quality software applications. You’ll also come away with a clear understanding of the benefits SDKs have on API management, for both API owners and end users.

What is an SDK?

An SDK (software development kit) is a programming library built to serve developers and streamline the development process.

A good use case for SDKs is APIs, the application programming interfaces that allow two programs, applications, or services to communicate with each other. Without an SDK, a developer has to access the API via manual operations; with an SDK, developers can interact with the API using pre-built functionality, enabling quicker and safer software development.

How to Use SDKs for APIs

An API is the interface of an application by which a developer can interact directly with that application. An SDK provides tools that help a developer interact with the API.

To emphasize how using an SDK differs from interacting with an API via "manual operations," we will compare calling The One API to find a book using both approaches below.

Calling The One API via Manual Operations

We will call The One API "manually" in two ways: with a curl command in BASH, and with the JavaScript fetch method.

curl -X GET https://the-one-api.dev/v2/book/1264392547 \
  -H "Accept: application/json" \
  -H "Authorization: Bearer THEONEAPISDK_BEARER_TOKEN"

This is a very basic way to query a server. It is available straight out of the terminal and gives you a way to describe the network request using arguments to one big command.

Explanation about the command:

  • curl is the command for BASH to transfer data.
  • -X specifies the request method to use.
  • GET is the method.
  • -H is a header option (it adds a request header).

In other words: transfer data, do it with a GET, and pass the headers Accept: application/json and Authorization: Bearer …

JavaScript Fetch Method

async function fetchData() {
  try {
    const url = 'https://the-one-api.dev/v2/book/123';
    const res = await fetch(url, {
      headers: {
        Authorization: `Bearer 12345`,
      },
    });

    if (!res.ok) {
      throw new Error(res.statusText);
    }

    const data = await res.json();
    console.log(data);
  } catch (error) {
    console.log('error', error);
  }
}

This is the most basic way to query a server with JavaScript. What you see here is an asynchronous function that queries the server, parses the response as JSON, and then logs the resulting data.

It is better than BASH because:

  • Readability: instead of arguments to one long command, you use an object, which is more human-readable and less error-prone.
  • Error handling: the try/catch block gives you a place to handle a failed request rather than letting it fail silently.

Calling The One API via an SDK

import { TheOneApiSDK } from './src';

const sdk = new TheOneApiSDK(process.env.THEONEAPISDK_BEARER_TOKEN);

(async () => {
  const result = await sdk.Book.getBook('123');
  console.log(result);
})();

Notice that the SDK client lets us use the Book controller and its getBook method to query the API. We set the credentials once when we instantiated the client, and we were ready to query. This approach is much easier to read and less error-prone.

This example differs from the two above because the user did not write the HTTP request.

The HTTP request is actually made behind the scenes (it might be written with JS fetch too) by the maintainer of the SDK. This allows the maintainer to decide:

  • What network protocol is going to be used.
  • What the destination of the network request is.
  • How the headers should be set for the network request.

It also improves readability: it's very clear which action the user wants to achieve, which controller is involved, and so on.

Who Benefits from SDKs?

There are many benefits to using an SDK. For the end users, it allows safer and cleaner access to the API. For the owners, it ensures the API is used correctly and keeps support costs down.

An SDK can reduce costs in many ways, including:

Retry strategy. When writing an SDK, you can add logic that stops an SDK client from endlessly retrying failed requests, preventing unwanted calls to the API (see the sketch after this list).

Better use. Because the user does not query the API directly, the API receives more standardized, well-formed input from its clients.

Sped up development. SDK users enjoy faster development because requests to, and responses from, the server are standardized.

Code maintenance. When querying the API directly, you need to keep up with every update the API makes. Using an SDK facilitates this interaction and keeps up with the updates of the API. You do, however, need to keep your SDK up to date.
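
Here is the retry sketch promised above: a hypothetical TypeScript helper (not taken from any particular SDK) that caps retries and backs off exponentially, so a misbehaving client can't hammer the API.

async function fetchWithRetry(
  url: string,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<Response> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(url);
    // Return immediately on success, or on client errors that
    // retrying would not fix (anything except 429 and 5xx).
    if (res.ok || (res.status < 500 && res.status !== 429)) {
      return res;
    }
    if (attempt < maxAttempts) {
      // Exponential backoff: 500ms, 1,000ms, 2,000ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  throw new Error(`Request failed after ${maxAttempts} attempts`);
}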

An SDK benefits the API user by:

  • Helping them understand how to use the API through semantically-named functions, parameters, and types.
  • Making methods easier to invoke, as they become readily discoverable in integrated development environments (IDEs).
  • Providing easier API access, with simple functions and parameters.
  • Preventing bad requests and allowing the user to correct their input.

An SDK benefits the API owner by:

  • Ensuring the required inputs from the user are in every request.
  • Preventing wrong inputs from being sent to the API server via enforcing types and validations.
  • Adding a layer of validations to enforce usage patterns. For example, if a user is sending too many requests, the SDK can warn and stop them as they approach their limit.
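
As a sketch of that validation layer, an SDK method can reject bad input before any network call is made. The id format and error message here are assumptions for illustration, not The One API's documented contract.

// Hypothetical sketch: validate input client-side so malformed
// requests never reach the API server.
function getBook(id: string): Promise<Response> {
  // Assume book ids are 24-character hex strings.
  if (!/^[0-9a-f]{24}$/.test(id)) {
    return Promise.reject(
      new Error(`Invalid book id "${id}": expected a 24-character hex string`),
    );
  }
  return fetch(`https://the-one-api.dev/v2/book/${id}`);
}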

Conclusion

Many API owners don’t provide SDKs because of the difficulty and the development time involved with creating one, not to mention the onerous task of maintaining them. An API can have dozens or even hundreds of endpoints, and each one requires a function definition.