
At liblab, we tackle complex engineering problems to build SDKs for our customers and their end users, who are engineers themselves. Our team's extensive knowledge in software, software-as-a-service solutions, and developer tools is critical to our success. Therefore, retaining our talented developers is a priority.

Why should you care about engineering culture?

While it's often said that people leave managers, not jobs, it's equally true that engineers leave companies with poor culture. Engineering culture, shaped by the attitudes and experiences of software developers and team leaders, influences the working atmosphere. By understanding the importance of engineering culture, we can enhance the positive aspects and minimize the negative ones.

Practical tools for fostering a great engineering culture

There's a lot of discussion about building a great engineering culture on the internet. However, I'm focusing on practical tools here. These are concrete steps anyone can take to improve their team's culture and their own contributions.

1. Coding style and standards

Coding style is an aspect of engineering culture. It's not about specific preferences - we're not engaging in the tabs versus spaces debate! - but how coding style is shared across the team. Often, one "tastemaker" can influence the team's preferred style. If one engineer is the sole torchbearer, the implementation can seem arbitrary, and enforcing that style in code reviews with the wrong tone can alienate coworkers.

A coding standards document covers basic expectations like naming variables and functions, commit comments, and best practices. Creating these standards with feedback from the team fosters consensus and buy-in. Implementing a corresponding linting solution in the team’s IDEs and code repositories can clarify expectations and allow for early and regular corrections by the system, not a peer.

2. Developer tool choice

Another aspect of coding standards to consider is allowing tool choice. Not every developer likes the same IDE - some even prefer a traditional text editor! Allowing engineers to choose the solutions they’re most comfortable with will improve their satisfaction and productivity.

3. Code reviews and pair programming

Code reviews are routine tasks with a significant impact on culture. For engineers who don't pair on projects regularly, this could be their main professional interaction. Establishing a routine for code reviews ensures this important collaboration happens consistently. We recommend engineers spend time at the start of their day and after lunch on reviews to avoid a full-day wait for feedback. Alternatively, regular pair programming can reduce the need for code reviews since multiple engineers have evaluated the code already.

While experienced engineers analyze the code for style, functionality, and performance, every engineer can contribute. Less experienced engineers should be able to understand the code and changes. Asking for more information on a new function is a great reminder to include detailed comments! Seeing how others solve problems can spark new ideas and questions, leading to mentoring or collaboration. Participating in code reviews fosters relationships between developers, improving collaboration and culture.

4. Architecture discussions

Architecture discussions are some of my favorite times working in software engineering. Analyzing a problem, freely exchanging ideas, and incorporating elements into a cohesive solution represent the team's collective experience. Experienced engineers are naturally inclined to solve problems quickly, but these sessions are also opportunities to mentor and develop less experienced developers.

Encouraging experienced engineers to speak last allows the rest of the team to work through the problem together. Making a single engineer responsible for the project and the final decision on technical matters encourages strong ownership. This approach can also encourage participation from less experienced engineers, since they know an experienced engineer is ultimately responsible for the design.

5. Design reviews

Technical design reviews follow these architecture discussions. The responsible engineer details the proposed solution in a "request for comment" document, which the engineering team reviews asynchronously. If there are comments to consider, then a meeting is scheduled to discuss questions and collect feedback. This process gives every engineer a voice, promoting participation and an inclusive engineering culture.

6. Hiring

Hiring is another area where we can influence culture. New team members bring their own experiences and influence, but how we select them also matters. We at liblab prefer a take-home style coding challenge that closely reflects the daily developer experience. We've extensively discussed and iterated on the content of this challenge, ending up with tests that target each of our engineering teams' focus areas.

Creating the test, having coworkers take it for a baseline, and evaluating the results is an important part of engineering culture. It establishes a minimum standard for the team, and completing the coding challenge becomes a shared experience and a badge of belonging for those who join.

Conclusion

Here are some practical tools for fostering a great engineering culture:

  • Promoting coding style and standards through tooling
  • Allowing engineers to use their preferred tools
  • Conducting regular code reviews and/or pair programming
  • Setting ground rules for architecture discussions
  • Conducting inclusive design reviews
  • Involving engineers in hiring practices

Anyone can adopt these practices within their own engineering team to enhance their culture. Teams should also experiment; what worked well for us at liblab might differ for your team.

If our commitment to a great culture sounds appealing, then come join us! We are hiring for a range of roles, and you can find more details at liblab.com/careers.


At liblab, our exploration into whether AI, specifically Large Language Models (LLMs), can contribute to the generation of Software Development Kits (SDKs) has been both enlightening and challenging. This exploration aimed to enhance our internal development processes and maintain a competitive edge in the rapidly evolving tech landscape. Despite the hurdles, our experiences have provided valuable insights into the potential and limitations of current AI technologies in software development.

The experiment

Our exploration began with the goal of generating SDKs from OpenAPI specifications. This process involves several stages:

  • Generating OpenAPI specs from documentation
  • Creating SDK components such as models and services
  • Integrating custom hooks based on prompts

Our experiments helped us better evaluate the potential of LLMs in this space.

Achievements

We managed to partially generate SDK components, demonstrating the potential of AI in streamlining certain aspects of development. This achievement is a testament to the capabilities of AI to assist in automating repetitive and straightforward tasks within the SDK generation process.

This may sound great, but the downsides far outweigh the benefits.

Limitations

Despite the achievements, the integration of these components into a cohesive SDK proved to be overly time-consuming and highlighted significant limitations:

1. Non-determinism and hallucinations

In the context of LLMs, “hallucination” refers to a phenomenon where the model generates text that is incorrect, nonsensical, or not real.

Source: A Gentle Introduction to Hallucinations in Large Language Models

LLMs showed a tendency to produce variable outputs and factual inaccuracies. This inconsistency poses challenges in achieving reliability, especially since SDKs generated in different sessions may not be compatible due to hallucinated content or changes in the SDK's public surface area after adding new endpoints.

In some cases, the LLM would entirely disregard the given specification or make incorrect assumptions about specific endpoints, resulting in non-functional code.

2. Input and output limitations

The models faced difficulties with the volume of code that could be processed and generated in a single instance, hampering their ability to handle complete codebases. LLMs work in tokens, where a token is roughly equivalent to a few characters, and different models have different token limits. For example, the largest GPT-4 model can take 128,000 tokens as input but return only 4,000 tokens as output, which is too small for an SDK.

Over time this limitation will ease as LLMs gain larger token limits, but for now it is a significant constraint.
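To get a feel for these limits, you can count the tokens a single generated file would consume. The sketch below assumes the tiktoken package, and the file path is hypothetical:

# A minimal sketch of token counting, assuming the tiktoken package
# (pip install tiktoken); the file path is hypothetical.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")

with open("sdk/services/llama_service.py") as f:
    token_count = len(encoding.encode(f.read()))

# A 4,000 token output budget is easily exceeded by one service class
print(f"This file alone would consume {token_count} tokens")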

3. Response time and cost implications

The increased response time and associated costs of utilizing LLMs for development tasks were non-negligible, impacting efficiency. For instance, a single call to the OpenAI API can take upwards of 15 seconds, and generating an SDK requires multiple calls. By comparison, liblab can generate multiple SDKs in approximately 10 seconds.

4. State and memory constraints

LLMs demonstrated limitations in maintaining conversational memory and managing state, complicating extended code generation sequences. This is related to the input and output limitations, but also to the fact that LLM APIs are stateless: the model retains no memory between calls.

Memory can be implemented by passing all the previous inputs and outputs into the model, but this is not practical for SDK generation as this drastically increases the number of tokens used, the response time, and cost. Other solutions are possible, such as using vector databases, but these add complexity and cost.
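To illustrate the first approach, here is a minimal sketch of "memory" implemented by resending every previous message on each call. It assumes the official openai Python package, and the prompts are illustrative only:

# "Memory" by replaying history: every previous message is resent on each
# call, so tokens, response time, and cost all grow with the conversation.
# Assumes the official openai package; prompts are illustrative only.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You generate SDK code."}]

def ask(prompt: str) -> str:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    # The reply must be stored too, further inflating every later request
    messages.append({"role": "assistant", "content": reply})
    return reply

ask("Generate a model class for the `user` schema.")
ask("Now add a service method that returns that model.")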

5. Security concerns

A significant limitation that cannot be overlooked when considering AI-generated SDKs is the potential for increased security risks. As AI models, including LLMs, are trained on vast datasets of existing code, they inherently lack the ability to discern secure coding practices from insecure ones. This can lead to the generation of code that is vulnerable to common security threats and weaknesses.

For more information on this topic, refer to the Snyk 2023 AI Code Security Report.

Insights into AI's application in SDK generation

Our journey has underscored the importance of clarity, precision, and simplification in interactions with LLMs. We learned the value of structuring prompts and managing code outputs effectively to optimize the model's performance. However, our experiments also highlight the current impracticality of relying on AI for SDK generation due to the critical issue of non-determinism and hallucinations.

The verdict on AI-generated SDKs

Given the challenges in achieving the necessary level of accuracy, consistency, and functionality, the current state of AI technology does not offer a viable alternative to traditional SDK generation methods. Our findings suggest a need to approach AI with caution in this context, emphasizing the irreplaceable value of human expertise and tailored solutions.

The role of AI in enhancing developer experience at liblab

We are committed to providing developers with tools that are not only powerful but also user-friendly and easy to integrate. Our exploration has reinforced our belief in leveraging AI where it adds value without compromising on quality or usability, ensuring that we prioritize human-centered design in our tool development.

Future directions

Informed by our experiences, we remain optimistic about the potential of AI to enhance our offerings in ways that do not involve direct SDK generation. We are exploring other avenues where AI can positively impact the developer experience, focusing on areas that complement our strengths and meet our customers' needs.

Conclusion

The journey of leveraging AI at liblab has been a path of discovery, filled with both challenges and opportunities. While AI-generated SDKs cannot replace traditional methods, the potential of AI to transform development practices remains undeniable.

We look forward to continuing our exploration of AI technologies, constantly seeking ways to innovate and improve the tools we offer to the developer community.


We're thrilled to announce a significant update to our SDK documentation generation: Markdown support is here! This new feature is designed to make your documentation more readable, more engaging, and easier to write. Whether you're a seasoned API developer or just starting out, Markdown can simplify the way you create and maintain your documentation. Let's dive into what this means for you and how you can leverage these new capabilities.

What is Markdown?

Markdown has become the lingua franca of the web for writing content. Its simplicity and readability make it an excellent choice for writing documentation. Unlike HTML or other markup languages that are often cumbersome to write and read, Markdown allows you to format text using plain text. This means you can create headers, lists, links, and more, without taking your hands off the keyboard to interact with tags or styling buttons.

For those unfamiliar with Markdown, it's a lightweight markup language created by John Gruber and Aaron Swartz. Markdown enables you to write using an easy-to-read, easy-to-write plain text format, which then converts to structurally valid HTML (or other formats) for viewing in a web browser or other platforms. This makes it an ideal choice for writing online documentation, as it's both human-readable and machine-convertible, ensuring your documentation is accessible both in raw and rendered forms.

Supported Features

Our SDK documentation generator now supports the following Markdown features:

  • Headers: Structure your documentation with headers to define sections clearly and improve navigation. Use different levels of headers (e.g., #, ##, ###) to create a hierarchy and organize content logically.
  • Bold and Italics: Add emphasis to your text with bold and italics, making important information stand out.
  • Images: Integrate images into your documentation to provide visual examples, diagrams, or illustrations. This can greatly aid in explaining complex concepts, workflows, or architecture, making your documentation more comprehensive and accessible.
  • Tables: Organize information neatly in tables. This is perfect for parameter lists, version compatibility, and more.
  • Lists: Organize information in lists to improve readability and structure. Lists are great for step-by-step instructions, feature lists, or any information that benefits from a clear hierarchy or grouping.
  • Inline Code and Code Blocks: Highlight code snippets directly in your documentation. Inline code for small references and code blocks for larger examples.
  • Links: Create hyperlinks to external resources, further reading, or cross-references within your documentation.
  • Blockquotes: Use blockquotes to highlight important notes, warnings, or quotes from external sources.

With these features, you can create more structured, readable, and engaging SDK documentation, allowing users to better understand and utilize your SDK.

How to Use the New Features

Incorporating Markdown into your SDK documentation is straightforward. Here's how to get started:

  1. Open your OpenAPI specification file.
  2. Find the description fields where you want to add formatted documentation.
  3. Insert your Markdown directly into these fields.

This Markdown will be automatically converted into beautifully formatted documentation when generated with liblab.

Example

Here's an example of how you can use Markdown in your OpenAPI specification. In this case, we are using the classic Pet Store API Spec with some Markdown added to the description:

openapi: 3.0.0
servers:
  - url: https://petstore.swagger.io/v2
    description: Default server
  - url: https://petstore.swagger.io/sandbox
    description: Sandbox server
info:
  description: |
    This is a sample server Petstore server.
    You can find out more about Swagger at
    [http://swagger.io](http://swagger.io) or on [irc.freenode.net, #swagger](http://swagger.io/irc/).
    For this sample, you can use the api key `special-key` to test the authorization filters.

    # Introduction
    This API is documented in **OpenAPI format** and is based on
    [Petstore sample](http://petstore.swagger.io/) provided by [swagger.io](http://swagger.io) team.

    # Cross-Origin Resource Sharing
    This API features Cross-Origin Resource Sharing (CORS) implemented in compliance with
    [W3C spec](https://www.w3.org/TR/cors/).
    And that allows cross-domain communication from the browser.
    All responses have a wildcard same-origin which makes them completely public and accessible to
    everyone, including any code on any site.

    # Authentication

    Petstore offers two forms of authentication:
      - API Key
      - OAuth2
    OAuth2 - an open protocol to allow secure authorization in a simple
    and standard method from web, mobile and desktop applications.
  version: 1.0.2
  title: Swagger Petstore

This Markdown will be automatically converted into beautifully formatted documentation when generated through liblab:

[Image: The pet store docs with rich text in the description, showing headers, links, bullet points, and bold text]

Future Enhancements

We're not stopping here! Our team is dedicated to improving and expanding the capabilities of our SDK documentation.

Stay tuned for updates, and don't hesitate to share your feedback and suggestions. Your input is invaluable in making our tools even better. Please feel free to contact us to request features or improvements!

We hope you're as excited about this update as we are. Markdown support is a big step forward in making your SDK documentation more accessible and easier to understand.


This year, we will be sponsoring the Nordic APIs Summit in Austin, TX on March 11-13. We look forward to meeting you in person and discussing our latest SDK generator.

Start generating your SDK for free today - liblab.com/join

Why stop by?

So why should you attend and stop by the liblab booth? Here are a few reasons:

Meet the liblabers

First of all, our booth will be packed with our top engineers (friendly, too) demoing how to save developers time and effort by generating SDKs for your APIs in your own language - coding and conversation alike 🙂

[Photo: A group of people in liblab shirts posing for a selfie in a carpark; two of the team are wearing sunglasses, and another is making a peace sign]

See the 1-minute demo

Hear about our product updates and new features to help with documentation and API specs. We know it's a pain. We will demo how painful it can be and how we can reduce your pain. Worth your visit.

[Screenshot: Visual Studio Code showing code that uses a llama store SDK in Python]

Snap a photo with liblab's Llama

Let's be honest: the best part of our booth is a 7-foot Llama. Snap a pic with our llama, tweet it with the hashtag #liblabLlama and tag @liblaber to get a special sticker!

[Photo: A group of people in liblab "love your SDK" shirts posing with a large llama mascot wearing the same shirt, in front of the liblab booth at API World]

More from liblab:

Hear us on stage, and if it's anything like the last event we attended, you'll even get to do some stretching exercises with us. We're not kidding.

[Photo: Jim presenting on stage with his arms outstretched, the audience stretching their arms too]

We will be hosting 3 sessions:

  • Build a terrible API for people you hate - 12th March, 1:10PM, API design technical track

    We've all been there - you've been asked to build an API to be used by someone you really dislike. Maybe it's the person who keeps stealing your milk from the company kitchen, or the one who asks long, winding questions just as the 5pm Friday meeting is about to end. It's someone who annoys you, and you have to build them an API.

    So malicious compliance time! You have to build them an API, but no-one said it has to be good. Here's your chance to get revenge on this person by building the Worst. API. Ever. This session will show you how, covering some of the nastiest ways to create an API that is terrible to use. From lack of discoverability, to inconsistent naming, this session will have it all!

    And maybe if you have to create an API for someone you love, this might give you some pointers as to what not to do...

  • From APIs to SDKs: Elevating Your Developer Experience With Automated SDK Generation - 12th March, 2:50PM, Developer experience technical track

    APIs are everywhere and are a great way for developers to access your service. But not all developers want to access APIs directly. Most want to use SDKs in their preferred programming language, using the tools they already know and use every day. These not only feel more intuitive to a developer, but bring features like type safety, documentation, and code completion.

    In this demo-heavy session, Jim will compare APIs and SDKs and show just how much a well-crafted SDK can improve the developer experience. Releasing changes to your customers quickly is something every development team strives for, so Jim will go on to show how this process can be automated, including in your CI/CD pipelines, so that every API change can be released as an SDK as soon as possible with minimal engineering effort.

    By the end of this session you will have a new appreciation for SDKs and understand that creating and maintaining them does not have to be burdensome. You'll be ready to automate this process yourself and improve your own API's developer experience.

  • 3 Quick Steps to Generate SDKs for Your APIs - 13th March, 10:10AM, Demos and lightning talks

    Learn how to generate SDKs for your APIs in 3 quick steps using liblab.

Get your ticket

Sign up now at nordicapis.com/events/austin-api-summit-2024!


SDKs are everywhere, and help developers build software faster and easier. In this post I take a look at a few example SDKs, and talk through the use cases and characteristics of a good SDK.

What is a Software Development Kit (SDK)?

We cover this a lot in one of our other blog posts - how to build an SDK from scratch, but essentially Software Development Kits, or SDKs, are code libraries containing an abstraction layer that makes it easier to interact with a piece of software or hardware using your preferred programming language, speeding up your application development process.

[Diagram: code calling an SDK, which in turn calls an API, illustrated with plushie blocks and a plushie llama at a laptop]

The developer's code interacts with the SDK, and that in turn interacts with an API, hardware, or other software.

Some examples of types of SDKs include:

  • API SDKs - these are used to interact with an API, such as the Twitter API, or the Stripe API.
  • Hardware SDKs - these are used to interact with hardware, ranging from consumer devices like printers, to IoT devices, smart consumer devices, and more.
  • Mobile SDKs - these are used to interact with mobile devices, such as the iOS or Android SDKs. If you want to use your phone's camera, or handle notifications, you need to use the mobile SDKs.
  • UI SDKs - these are used to implement user interface components, in web sites, on mobile apps, or in desktop applications.

List of SDK Examples

There are many types of SDKs, and thousands of SDKs available. Here are a few examples of the types of SDKs mentioned above:

API SDK

An API SDK is an SDK that makes it easier to use Application Programming Interfaces, or APIs. Many SaaS and cloud companies have APIs to use their tools and products, and calling these APIs directly is not usually the best approach. You can easily end up writing a lot of boilerplate code to handle things like authentication, retries, and more, in your programming language of choice.
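To make that concrete, here is a sketch of one direct call with the requests package, against a hypothetical API, compared with what a hypothetical SDK equivalent might look like:

# Calling an API directly: auth, error handling, and JSON mapping are all
# on you. The URL and token are hypothetical.
import requests

response = requests.get(
    "https://api.example.com/v1/users/42",
    headers={"Authorization": "Bearer MY_TOKEN"},
    timeout=10,
)
response.raise_for_status()
user = response.json()  # just a dict: no types, no validation, no retries

# With a (hypothetical) SDK, the same call is one typed line:
# user = client.users.get_user(42)  # returns a User model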

[Image: A cute llama using a laptop]

An API SDK on the other hand will provide you with an SDK in the languages you are using, and it will handle all of the boilerplate code for you, usually implementing the best practices for that API. For example, an SDK can implement authentication in a way that makes sense for the API, or handle retries and backoff in the most appropriate way.

These SDKs are also structured to make sense to the developer using them. This might be by splitting up the API into multiple SDKs, or wrapping multiple APIs in one SDK. They will also provide code examples, documentation, and more in the available SDK languages.

You can read more on the differences between an SDK and an API in our blog post SDK vs API: what's the difference?

Examples of API SDKs include:

  • Stripe SDK - this SDK makes it easier to interact with the Stripe API to handle payments. The Stripe SDKs wrap their APIs for a huge range of programming languages and technologies.
  • Twilio SDK - this SDK makes it easier to interact with the Twilio API to handle communications, such as SMS, voice, and video.
  • AWS SDK - this SDK makes it easier to interact with the AWS API to handle cloud services, such as storage, databases, and more.

Hardware SDK

Hardware SDKs provide an abstraction layer for communicating with hardware rather than software. This ranges from devices connected to a computer, such as barcode scanners, to remote devices such as Internet of Things (IoT) devices, to hardware connected to a mobile device, such as credit card readers.

[Image: A cute llama using a credit card]

Communication with hardware is typically low level code, and can be very complex. A hardware SDK will provide a higher level of abstraction, making it easier to interact with the hardware.

Examples of hardware SDKs include:

  • Square reader SDK - the Square Reader is a credit card and contactless payment device that connects to mobile phones. This SDK allows you to integrate payments into your mobile apps using the reader hardware. This reader can be connected over an audio jack, so the SDK handles converting audio signals to the data the reader needs, and back to the data your application needs.
  • Tuya IoT App SDK - this SDK allows mobile app developers to build apps that interact with Tuya's range of smart home and IoT devices. It can handle pairing of devices, device configuration, and device control, over multiple protocols from Zigbee to Matter. The SDK abstracts the protocols and device types, making it easier to build apps without caring about the underlying technology.
  • Zebra Scanner SDK - this SDK allows you to build applications that interact with Zebra's range of barcode scanners. It provides a higher level of abstraction than the low level barcode scanner APIs, making it easier to build applications that interact with the scanners.

Mobile SDK

Mobile SDKs are used to interact with mobile devices, such as iOS and Android. These SDKs provide a higher level of abstraction than the low level APIs provided by the mobile operating systems, making it easier to build applications that interact with the devices.

[Image: A cute llama using a mobile phone]

Some mobile SDKs are provided by the device manufacturer, and usually these are limited in the SDK languages you can use. There are third-party SDKs that sit on top of these and provide support for other languages, or a consistent development experience across multiple devices.

Examples of mobile SDKs include:

  • iOS SDK - this SDK provides a range of APIs for building applications on iOS devices. It includes APIs for interacting with the device hardware, such as the camera, and for interacting with the operating system, such as handling notifications. This SDK primarily supports Swift, but you can create apps using C++ and Objective C.
  • Android SDK - similar to the iOS SDK, this SDK provides a range of APIs for building mobile applications on Android devices. This SDK primarily supports Java and Kotlin, but you can create Android apps in C++.
  • Flutter SDK - this SDK provides a range of APIs for building mobile applications on both iOS and Android devices. It provides a consistent development experience across both platforms, and supports the Dart programming language.
  • .NET MAUI - this SDK provides a range of APIs for building mobile applications on both iOS and Android devices. It provides a consistent development experience across both platforms, and supports C# and F#.

UI SDK

UI SDKs provide user interface components for you to use in your web, mobile or desktop app. These abstract out a range of user experiences in different programming languages, and provide a consistent look and feel across different platforms.

[Image: A cute llama designing a web site]

Examples of UI SDKs include:

  • DevExpress - this SDK provides a range of user interface components for web, mobile and desktop applications.
  • MUI - this SDK provides a range of user interface components for web applications, and is built on top of React.
  • Material Design - this SDK provides a range of user interface components for web applications, as well as mobile apps built using Android or Flutter. This is built on top of Google's Material Design system.

Use Cases for SDK Implementation

With all the different types of SDKs, the general theme is the same - they provide a way to build functionality into your application without having to write all the code yourself. Here are some use cases for implementing an SDK:

Payments

We all like being paid, right? But handling payments can be complex, with a lot of edge cases to handle. A payments SDK such as Stripe can handle all of this for you, from the user interface for entering payment details, to the communication with the payment provider, to handling the response from that provider. If you are building a platform where you need to charge someone, you can add payments with only a few lines of code calling the relevant SDK.
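As a rough sketch of how few lines that can be, here is a payment created with the stripe Python package; the API key and amount are placeholders:

# A minimal sketch using the stripe Python package; the key and amount
# are placeholders.
import stripe

stripe.api_key = "sk_test_your_key_here"

# One call creates the payment - the SDK handles the HTTP, auth, and retries
intent = stripe.PaymentIntent.create(
    amount=2000,  # amount in cents
    currency="usd",
    automatic_payment_methods={"enabled": True},
)
print(intent.status)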

Analytics and metrics

As developers we want to know who is using our apps, what features they are using, how they discover those features, and in-depth details when things go wrong. It's reasonably quick these days to add analytics and metrics to an application using an SDK such as Sentry. Again, a few lines of code and you have analytics, metrics, and error tracking.
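For example, a minimal setup with the sentry-sdk Python package might look like this; the DSN is a placeholder for your project's key:

# A minimal sketch using the sentry-sdk Python package; the DSN is a
# placeholder for your project's key.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    traces_sample_rate=1.0,  # capture performance data as well as errors
)

# From here on, unhandled exceptions are reported automatically
sentry_sdk.capture_message("Analytics wired up!")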

Social Media

Social media has changed how we interact with our customers, potential customers, friends and colleagues. There is a lot a developer can do with social media, from getting alerts when their company is mentioned, to scheduling posts. A social media SDK can allow you to integrate this functionality into your own apps.

Customer relationship management (CRM)

Customer relationship management, or CRM, software often needs to integrate with other systems, such as the ones you are building as a developer. For example, you might want to add a contact form to your website and feed that customer data into your CRM. This can be done using the CRM's SDK, with capabilities ranging from a full UI component that does everything, to a lighter-weight SDK where you send the form data to an SDK call.

Characteristics of a Good SDK

Whilst in principle an SDK should make your development experience better, this is not always the case. There are some great SDKs out there, as well as some utterly terrible ones.

Here are some characteristics of a good SDK:

  1. Easy to install - whilst easy is a subjective term, your SDK needs to be easy for your users to install. Sometimes this is as simple as ensuring it is hosted on a package manager, such as npm, pip, nuget, or maven.

  2. Documentation - without good documentation an SDK can vary from being hard to use to being useless. Good documentation should include reference documentation in all supported languages, guides to implementing common functionality, and tutorials to get you started quickly.

  3. Code samples - developers love to copy and paste, and having great code samples really helps developers build apps quickly by taking anything from inspiration, to huge swathes of code, from your examples. These can be inside your documentation, or in a separate examples repository on GitHub. Good code samples can even help guide AI coding assistants inside your integrated development environment or IDE.

  4. Sample apps - an extension to code samples, sample apps can help your users understand the bigger use cases for your SDK, and see how it can be implemented across a larger application, instead of just in one place. As software developers create their apps, these samples can speed up their time to deliver as they can provide a complete reference implementation. For example, an app with example code showing how user authentication can be implemented in one location in your app, and then how that authenticated user can be used across the app.

  5. Idiomatic to the language - a good SDK supports a range of programming languages, and for each language the implementation should be idiomatic to that language. From documentation in the right format to allow for easy integration with IDEs, to using the right language features, a good SDK should feel like it was written by a developer who knows the language well, making software development with it feel natural. It should also not re-invent the wheel, using the standard libraries and frameworks for that language where possible.

  6. Few dependencies - it's very easy for an SDK to accumulate a huge number of dependencies, and this can lead to problems managing them alongside the dependencies of your own project. A good SDK should have as few dependencies as possible.

  7. Simple to use - simple is another subjective term, but a good SDK should feel easy to use, and should express concepts and patterns in a user-centric way, focusing on the use cases a developer might have. Your SDK should be written with an understanding of the types of problems your users want to solve, and provide convenient ways to do this, not just be a wrapper around your internal database structure or your hardware. For example, with a barcode scanner SDK, developers probably just want an event fired with the barcode number when a barcode is scanned; they don't want to write control loops that constantly monitor the scanner, poll to see if a barcode is found, and then fetch the code.

  8. Up to date - a good SDK should be up to date with the latest version of the API or hardware it is wrapping. This can be a challenge, but it is important to keep your SDK up to date, as your users will want to use the latest features of your API or hardware. To help with this, make your SDK as maintainable as possible (for some tips with TypeScript SDKs check out our Quick tips for making your SDK more maintainable in TypeScript: routes edition article). For API SDKs consider auto-generating them using liblab to avoid having to spend time maintaining them when your API is updated, adding auto-generation to your development process.

Conclusion

A good SDK can greatly improve the developer experience for your users. For hardware, they get a nice abstraction that is easier to program against than sending raw, low-level hardware commands; for an API, they get models and service classes that make it easier to interact with that API, handling things like authentication, retries, and more for you. For more on whether you should build an SDK, check out our article on 'why do I need to build an SDK'.

SDKs are a great thing to provide to your customers, but can be a lot of work to build. This is where liblab comes in, allowing you to quickly generate SDKs for your APIs.

If you have an API and want to improve your users' developer experience with an SDK, sign up for liblab now at liblab.com/join.


One of the exciting new features available in the liblab Portal is the Audit Trails feature!

With the Audit Trails feature, organization admins gain the ability to closely monitor and track a wide range of crucial user member activities. This includes:

  • keeping track of SDK builds
  • monitoring document approvals
  • tracking PR publishing
  • and much more!

The Audit Trails feature empowers organization admins with enhanced visibility and control over the platform, ensuring that all important actions are recorded and can be easily reviewed when needed.

What is an Audit trail?

Audit trails, also referred to as audit logs, play a vital role in ensuring data security and regulatory compliance, maintaining accountability, and facilitating incident response.

By keeping a detailed record of all actions and events related to the system, audit trails provide a comprehensive view of the activities performed, helping to identify any potential unauthorized access, monitor compliance with established policies and procedures, and support investigations during security incidents or breaches.

In addition to their primary function of tracking and documenting system activities, audit trails also serve as valuable tools for monitoring and analyzing user behavior, detecting anomalies, identifying trends, and enhancing overall system performance and optimization.

The implementation and effective utilization of audit trails are essential components of a robust and proactive approach to maintaining the integrity, confidentiality, and availability of data, as well as ensuring organizational compliance with relevant regulations and standards.

Types of Audit Trails

1. Data Security

The Audit Trails feature is an essential component of data security. It plays a critical role in safeguarding sensitive information by maintaining a comprehensive log of user activities.

This log serves multiple purposes, including the detection of any potential unauthorized access attempts. By monitoring user actions, the Audit Trails feature helps ensure compliance with established policies and procedures, guaranteeing the integrity, confidentiality, and availability of data.

With its ability to track and trace user interactions, this feature provides valuable insights into system usage patterns, aiding in the identification of any irregularities or anomalies.

By capturing and documenting actions taken within the system, the Audit Trails feature enhances accountability and transparency, enabling organizations to effectively manage and mitigate potential risks.

2. Compliance

Many industries and organizations across various sectors are required to strictly comply with regulatory requirements that mandate the comprehensive logging and continuous monitoring of specific activities. These regulatory measures play a crucial role in ensuring the accountability and transparency of businesses, as well as safeguarding the interests of stakeholders.

By maintaining detailed audit trails, organizations can effectively demonstrate their compliance with regulations such as SOC 2, which is widely recognized as a benchmark for data security and privacy standards in the industry.

3. Accountability

Audit trails are an essential feature of our system as they attribute and track actions and changes made by specific users or entities. This not only establishes a sense of accountability but also serves as a deterrent against malicious behavior. By maintaining a clear record of who did what, we ensure accountability and transparency and can easily identify any unauthorized actions or suspicious activities within our system.

4. Incident Response

In the event of a security incident, audit trails play a crucial and indispensable role in incident response. They serve as a comprehensive and invaluable source of real-time or historical data that allows security teams to effectively analyze and comprehend the nature and extent of the incident, as well as the nefarious actions executed by attackers.

By having access to these audit trails, security professionals are empowered with the necessary information to make informed decisions and take appropriate countermeasures in order to mitigate the impact of the incident and prevent future occurrences.

How do I access my organization's audit trails?

Audit trails are only available to organization admins. Member users do not have the ability to view them.

To access your organization's audit trails, go to the liblab portal and click on the Audit Trails icon in the left toolbar.

[Image: The audit trails menu item]

Or navigate your browser to app.liblab.com/audit-trails.

There, you will see a paginated view of your organization's audit trails, sorted by the most recent event. There are a couple of filter options available: search by email and event type.

[Screenshot: Audit trails showing user activities such as removing a CI/CD token, building, and publishing a PR]

Some events, such as user build, will include a hyperlink to view more details about the build. This link provides information about the languages that were built and SDK documentation.

What user events are audited?

The following user events are currently being audited by our system. Additional events may be added in the future as we introduce new features and enhancements.

  • User Create: The user create event is audited whenever a new user is created in our system. Its purpose is to maintain a complete and accurate record of all user creations, allowing us to track user growth and analyze trends over time.
  • User Verified Email: The user verified email event is tracked when a user validates their email address. It ensures that the email validation process is properly documented and can be traced if needed.
  • User Create API: The user create api event is recorded when a new API is added to our system during the initial SDK build process. The audit log includes the name and ID of the API record, which can be viewed by organization admins for more details.
  • User Build: The user build event tracks the occurrence of a build performed by our CLI. The corresponding audit entry contains the API name linked to the build and a unique build ID. Organization admins can access the build details, including the programming language SDKs included in the build, and approve or unapprove any associated documents.
  • User Approved Doc: Whenever a user approves a document, the user approved doc event is tracked. The audit entry displays the ID of the approved document, and organization admins can preview the API documentation by clicking on the document ID.
  • User Unapproved Doc: Likewise, when a user unapproves a document, the user unapproved doc event is tracked. The audit entry displays the ID of the unapproved document, and organization admins can preview the API documentation by clicking on the document ID.
  • User Published PR: When a user publishes a pull request (PR) from the CLI for a specific programming language SDK, the publish PR event is tracked. The audit entry includes the name of the SDK and the associated programming language.
  • User Publish PR Failure: Similarly, if a pull request fails to publish, the user publish pr failure event is tracked. The audit entry includes the name of the SDK and the associated programming language.
  • User Invite Member: The user invite member event is tracked whenever a user invites someone via email to join their organization. Users can invite new members through the liblab Portal, and the audit entry includes the email address of the invited individual.
  • User Add CI Token: When a user adds a Continuous Integration (CI) token through the Command Line Interface (CLI), the user add CI token event is tracked. The audit entry includes the token name and the ID associated with the generated token.
  • User Remove CI Token: Similarly, when a user removes a Continuous Integration (CI) token through the CLI, the user remove CI token event is tracked. The audit entry only includes the ID associated with the removed token.

Future Enhancements

We are continuously working on enhancing our Audit Trail functionality to provide you with a seamless experience. As part of our ongoing efforts, we are exploring additional features that will further improve the functionality of our Audit Trails.

One of the features we are considering is the ability to search audits by date range. This will allow you to easily locate specific audits within a specified time frame, making it more convenient for you to track and review relevant activities.

In addition, we are exploring an export feature that will enable you to export your audit trails to a file. This will provide you with the flexibility to save and analyze your audit data offline or share it with other stakeholders as needed.

We value your feedback and want to ensure that the Audit Trail process meets your needs and requirements. If you have any frustrations or suggestions regarding your current audit trail process, we would love to hear from you. Your input is invaluable in helping us shape the future of our Audit Trails functionality.

Please feel free to contact us and share your thoughts, suggestions, or any challenges you may be facing. We are here to assist you and make your audit trail experience as seamless and efficient as possible.

Thank you for your continued support and partnership.


As an experienced engineer, I've learned the value of simplicity and ease of getting things done. Life is too short to spend on writing boilerplate code, or dealing with the complexities of lowest common denominator tooling. I want to focus on the business problem I'm trying to solve, not the plumbing.

This is why I'm always a fan of a well-crafted SDK, a tool that makes it easy to use a service or API. In this post we look at the process of creating SDKs, from concept to creation, and show how you can automate the process using liblab.

What is an SDK?

Let's start with the basic question - what is an SDK? An SDK, short for Software Development Kit, is a set of tools written in one or more programming languages that make it easier to use a service or API. For this post, I will focus on SDKs for APIs, providing a wrapper around the direct REST calls that a user might make against an API.

An SDK is a great abstraction layer. It is written in the language the developer uses, and provides language idiomatic ways to call the API, managing all the hard stuff for you such as the HTTP calls, the mapping of JSON to objects, authentication, retries, and so on.

A good SDK drastically reduces the code you have to write.

The importance of an SDK

SDKs are important to improve the developer experience of using your service or API. They allow your users to focus on the business problem they are trying to solve, rather than the plumbing of calling your API. This is just as important for public facing APIs from SaaS companies, as it is for internal APIs used by your own developers from your corporate microservices.

Some benefits include:

  1. Intellisense and code completion - you can see the methods and properties available to you, and the parameters they take in your IDE.
  2. Authentication - the SDK can handle the authentication process for you, so you don't have to worry about it.
  3. Built-in best practices - you can embed best practices inside the SDK, such as retry logic, so that if a call fails due to rate limiting, the SDK will automatically retry the call after a short delay (sketched below).
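Here is a minimal sketch of that third point, retry with exponential backoff, of the kind an SDK can bake in. It assumes the requests package, and the URL is hypothetical:

# A minimal retry-with-backoff sketch of the kind an SDK can embed,
# assuming the requests package; the URL is hypothetical.
import time
import requests

def get_with_retries(url: str, max_attempts: int = 3) -> requests.Response:
    for attempt in range(max_attempts):
        response = requests.get(url, timeout=10)
        if response.status_code != 429:  # not rate limited, so stop retrying
            return response
        # Exponential backoff: wait 1s, 2s, 4s, ... before the next attempt
        time.sleep(2 ** attempt)
    return response

response = get_with_retries("https://api.example.com/v1/llamas")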

To learn more about the comparison between SDKs and calling APIs directly, check out our post SDK vs API, key distinctions.

How to Build an SDK

Creating SDKs is a multi-step process, and is very time consuming. Not only do you need to design and build the SDK, but you also will want to create SDKs in multiple languages to support your users needs.

This might be less important if you are building SDKs for internal APIs and you only use a single language - for example if you are a Java shop you may only need a Java SDK. However, if you are building a public facing API, you will want to support as many programming languages as possible, so that your users can use the language they are most comfortable with. You will probably want a TypeScript SDK or JavaScript SDK, a Python SDK, a C# SDK, a Java SDK, and so on depending on the programming language priorities of your users.

The steps you would take are:

  1. Design your SDK
  2. Build your SDK
  3. Document your SDK
  4. Test your SDK
  5. Publish your SDK
  6. Maintain your SDK

1. Design your SDK

The first step is to design the SDK. A typical pattern is to mimic the API surface, providing functions that act like your API endpoints. This is a great way to build an SDK - it means your documentation and other user guides can be essentially the same for your API and SDKs, with just different sample code to use the SDK rather than make an API call.

API specification

To do this you will need to know how your API works, which is why it is important that your API has documentation, ideally an API specification such as an OpenAPI spec. Some API designers will start from an OpenAPI spec, others will start with code and autogenerate the OpenAPI spec from that code. Either way, you need to have a good understanding of your API before you can start to design your SDK.

The act of designing your SDK can also be a great way to validate that you have a good, clean API design. If you find that your SDK is hard to design, or that you have to do a lot of work to make it easy to use, then this is a good sign that your API needs improvement. For some tips, check out our post on why your OpenAPI spec sucks.

Paths and components

OpenAPI specs have 2 sections that make designing your SDK easier - paths and components. The paths section defines the endpoints of your API, and the components section defines reusable components, such as schemas for request and response bodies, or security schemes.

You want to use components as much as possible to make it easier to understand the data that is being passed around. Using components also helps you to reuse them as much as possible between endpoints.

For example, if you have an endpoint that returns a single user, it is cleaner to have that user defined as a components/schema/user object. You can then reference that in the endpoint that gets a single user, and wrap it as an array for an endpoint that gets multiple users.

From API spec to an SDK design

Once you have your API spec, you can start to design your SDK. A good design is to provide objects that wrap the requests and responses for your API endpoints, and methods that call the API endpoints.

You will need to consider:

  • What objects will encapsulate the components
  • How paths will be grouped and wrapped as methods
  • The interface to handle authentication
  • Each SDK language will have different idioms, libraries and best practices. You will need to decide how to handle these differences. This might be a challenge if you are not familiar with the language.
  • How to handle best practices such as retries. Do you implement this yourself, or use a library?
  • Naming conventions. It is important to create SDKs that use idiomatic code for the SDK language - for example, if you are building a Python SDK, you want to use Python idioms such as using snake_case for method names, and PascalCase for class names, whereas TypeScript would use camelCase for method names.

2. Build your SDK

After you have designed your SDK, you can start to build the code implementation. This is a manual process, and is very time consuming, especially if you have SDKs in multiple languages.

Components

First you want to create objects to wrap the requests and responses for your API endpoints, defined in the components/schemas section of your OpenAPI spec. These objects are often referred to as models or DTOs (data transfer objects). These are usually 'simple' objects that have properties that map to the properties on the schema, allowing you to write JSON mapping code (or use a built-in library) to automatically map the JSON to the object.
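For example, a model for a hypothetical user schema, with its hand-written JSON mapping, might be sketched like this:

# A sketch of a model (DTO) for a hypothetical `user` schema, with
# explicit JSON mapping.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str

    @classmethod
    def from_json(cls, data: dict) -> "User":
        # Map the schema's properties onto the object's properties
        return cls(id=data["id"], name=data["name"], email=data["email"])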

Sometimes the objects can be more complex. For example, your API specification might define an anyOf or oneOf - an endpoint that can return one of a range of possible types. Some languages can support this through a union type, others cannot, so you will need to decide how to handle these types. Remember, you need to do this for every SDK language you want to support.
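In Python, for instance, a oneOf can map onto a union type; the Cat and Dog schemas here are hypothetical:

# A sketch of handling a `oneOf` via a Python union type; the Cat and Dog
# schemas, and the discriminating field, are hypothetical.
from dataclasses import dataclass
from typing import Union

@dataclass
class Cat:
    name: str
    lives_left: int

@dataclass
class Dog:
    name: str
    good_boy: bool

# An endpoint declared as `oneOf: [Cat, Dog]` can return either model
Pet = Union[Cat, Dog]

def parse_pet(data: dict) -> Pet:
    # Discriminate on a field that only one schema has
    return Cat(**data) if "lives_left" in data else Dog(**data)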

SDK methods

Once you have your models, you can start to build the SDK methods that call your API endpoints. These functions can take models as parameters, and return the models as responses.

You will need to handle making HTTP requests, and the mapping of JSON to objects. You will also need to handle the authentication process, and any other best practices you want to build in such as persisting the HTTP connection, retries and logging.

These methods might also need parameters to match the parameters for the endpoint - both path parameters and query parameters.

For example, if the endpoint is GET /llama/{id} with the Id as a path parameter, you might have a method like this:

class LlamaService(BaseService):
    def get_llama(self, id: int) -> Llama:
        ...

where the llama Id becomes a method parameter.

Grouping methods

A good practice is to group these functions into service classes. You might want to do this based on the endpoint - for example, having the GET, POST, and PUT methods for a single endpoint in a single class.

API specifications can also define groupings of endpoints, such as OpenAPI tags, so you might want to group your functions based on these tags. You know your API, so pick a technique that works for you.

As you build these methods, you should abstract out as much logic as possible into base service classes, or helper methods. For example, you might want to abstract out the HTTP request logic, the JSON mapping logic, the authentication logic, and the retry logic. This way you can define it once, and share the logic with all your service methods.
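A base service class for that shared logic might be sketched as follows; the names are hypothetical, and retries, logging, and error handling are elided:

# A sketch of a base service class that centralizes the HTTP session,
# auth header, and JSON handling for every SDK method. Names are
# hypothetical; retries, logging, and error handling are elided.
import requests

class BaseService:
    def __init__(self, base_url: str, access_token: str) -> None:
        # One persistent session shared by every method in the SDK
        self._session = requests.Session()
        self._session.headers["Authorization"] = f"Bearer {access_token}"
        self._base_url = base_url

    def _get(self, path: str, **params) -> dict:
        response = self._session.get(self._base_url + path, params=params, timeout=10)
        response.raise_for_status()
        return response.json()

class LlamaService(BaseService):
    def get_llama(self, id: int) -> "Llama":
        # With the shared logic in the base class, each method is a one-liner
        return Llama.from_json(self._get(f"/llama/{id}"))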

SDK client

Finally you will want to put some kind of wrapper object, usually referred to as an SDK client, around the endpoints. This is the single place to surface authentication, set different URLs if you have a multi-tenant API, or support different API regions or environments. It is your users' entry point to the SDK.

class Llamastore:
    def __init__(self, access_token="", environment=Environment.DEFAULT) -> None:
        # The llama service
        self.llama_service = LlamaService(access_token)

    # Set a different URL for different environments
    def set_base_url(self, url: str) -> None:
        ...

    # Set the API access token
    def set_access_token(self, token: str) -> None:
        ...

3. Document your SDK

An SDK is only as good as its SDK documentation. As you create SDKs, you will also need to write documentation that shows how to install, configure, and use the SDK. You will need to keep the documentation in sync with the SDK, and update it as the SDK evolves. This will then need to be published to a documentation web page, or included in the SDK package.

OpenAPI specs support inline documentation, from descriptions of paths and components, to examples of parameters and responses. This is a great way to document your API, and you can use this to generate proper documentation for your users.

4. Test your SDK

Once your SDK is built, you need to ensure that it works as expected. This means writing unit tests that verify:

  • The JSON expected by and returned by your API can map to the objects you have created
  • The HTTP requests are being made correctly by the SDK methods to the right endpoints
  • Authentication is implemented correctly
  • The best practices such as retries work as expected

You should write unit tests that mock the real endpoint, as well as integration tests that call a real endpoint in a sandbox or other test environment.
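For example, a unit test might stub the endpoint with the responses package, assuming the SDK makes its HTTP calls with requests; the URL and model fields are hypothetical:

# A sketch of a unit test that mocks the real endpoint, assuming the SDK
# uses requests and the responses package (pip install responses). The
# URL and model fields are hypothetical.
import responses

@responses.activate
def test_get_llama_maps_json_to_model():
    responses.add(
        responses.GET,
        "https://api.example.com/llama/1",
        json={"id": 1, "name": "Libby"},
        status=200,
    )

    client = Llamastore(access_token="test-token")
    llama = client.llama_service.get_llama(1)

    # The JSON returned by the mocked API maps onto the model
    assert llama.id == 1
    assert llama.name == "Libby"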

5. Publish your SDK

Your finished SDK needs to be in the hands of your users. This means publishing it to a package manager, such as npm for JavaScript, or PyPi for Python.

For every SDK language you will need to create the relevant package manifest, with links to your SDK documentation, and publish it to the package manager. For internal SDKs, you might want to publish it to a private package manager, such as GitHub packages, or a private npm registry.

6. Maintain your SDK

It's not enough to just create SDKs once. As your API evolves, you need to update both your SDK and your SDK documentation to reflect the changes. This is a manual process, and is very time consuming, especially if you have SDKs in multiple languages.

Your users will expect your SDK to be in sync with your API, ideally releasing new features to your SDK as soon as they are released to the API. As well as the engineering effort to update the SDK and SDK documentation, you also need to ensure that during SDK development, your internal processes track API updates, create tickets for the work to update the SDK, and ensure that the SDK is released at the same time as the API.

Every new endpoint means a new SDK method, or a new class, depending on how you have structured your SDK. Every new schema is a new model class. Every change to an existing endpoint or schema is a change to the existing SDK method or model class. You may also need to update best practices, such as supporting new authentication schemes or adding new retry logic.

You will also need to track breaking changes, and update your SDK version as appropriate - a major version bump for breaking changes, a minor version bump for new features, and a patch version bump for bug fixes.
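For example:

1.4.2 → 2.0.0 for a breaking change, such as removing a field from a response
1.4.2 → 1.5.0 for a new feature, such as a new endpoint and SDK method
1.4.2 → 1.4.3 for a bug fix, such as correcting retry behavior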

Best practices for building the perfect SDK

When building an SDK, there are a number of best practices you should follow to ensure that your SDK is easy to use, and works well for your users.

  • Provide good documentation and examples - your API spec should be full of well written descriptions, examples, and other documentation that can be ported to your SDK. This will make it easier for your users to get started quickly.
  • Use idiomatic code - your SDK should use the idioms of the language you are building the SDK for. It should match the language's naming conventions, use standard libraries, and follow the language's best practices. For example, using requests in Python, making all your C# methods async, using TypeScript's Promise for asynchronous methods, and so on.
  • Embed security best practices - take advantage of security best practices, from using the latest version of any dependencies with mechanisms to monitor for patches and provide updates, to using the latest security protocols and libraries.
  • Handle authentication - your SDK should handle authentication for your users. This might be as simple as providing a method to set an API key, or as complex as handling OAuth2. You should also provide a way to handle different environments, such as development, staging, and production.

Automating the SDK build with liblab

Creating SDKs is a lot of work that only gets bigger as your API grows, or you want to support more languages. This is where automation is your friend. liblab is a tool that can take your API specification, such as an OpenAPI spec, and autogenerate SDKs for you in multiple languages.

The SDK generation process will generate the models and service classes for you, with all the required mapping code. It will also generate your best practices such as authentication and retries, and you can configure these via a configuration file. You can also write code that is injected into the API lifecycle to add additional functionality, such as logging or custom authentication with the liblab hooks feature.

The flow for using liblab - an API spec and config file go in, and an SDK and documentation come out.

SDK generation is fast! A typical developer flow is to spend a few hours generating SDKs the first time as you configure the generation process to meet your needs. This is an iterative cycle of generate, check, adjust the configuration, then regenerate. Once you have your configuration the way you need, your SDKs are generated in seconds.

Adding more SDK languages is also fast - you add the new language to the configuration file, and regenerate.

liblab can also help you to keep your SDKs in sync with your API. As your API is updated, your SDK can be re-generated to match. For example, if your API spec lives in a git repository, you can set up a GitHub action to detect any changes, and regenerate your SDK, publishing the new SDK source code to your SDK repositories.
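As a rough sketch, a workflow along these lines could regenerate SDKs whenever the spec changes - the spec path, CLI install command, and secret name here are assumptions, so check the liblab docs for the specifics:

name: Regenerate SDKs
on:
  push:
    paths:
      - "openapi.yaml"    # assumed location of your API spec

jobs:
  build-sdks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g liblab    # assumed install method for the liblab CLI
      - run: liblab build             # regenerate the SDKs from the spec and config file
        env:
          LIBLAB_TOKEN: ${{ secrets.LIBLAB_TOKEN }}    # assumed secret name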

Finally liblab can generate your SDK documentation for you, with full code examples and all the documentation you need to get started. This is taken from your API specification, so is always in sync with your API.

Screenshot from the liblab UI showing the list pets endpoint and query attributes, with an example of SDK code and response fields.

Conclusion

SDKs are a powerful way to improve the developer experience of your API. They come with a cost - the amount of work needed to generate them. This is why automation is so important. With liblab you can automate the process of generating SDKs, and keep them in sync with your API as it evolves.

Sign up for liblab today and start generating SDKs for your APIs in minutes.

← Back to the liblab blog

APIs need to change over time. Features are added, bugs are fixed, and changes are made. How can you introduce and track changes without breaking client applications? API versioning is the answer. By versioning your API, you work towards building a robust and scalable product.

What is API versioning?

Versioning an API is the process that allows you to track changes and manage the API's various iterations. Essentially, versioning allows you to create multiple API versions that coexist but operate independently of each other. That way new features can be added, updates can be made, and old features can be removed with minimal disruption of service to the user.

Why is API versioning important?

Proper versioning is a crucial step to keep projects flexible and ensure compatibility with existing and new tools. Without proper versioning, modifications to the API could cause unexpected errors and result in disruptions for the client. You'll likely need to make changes to your API over time, so it's worth analyzing from the start whether implementing proper API versioning is a good fit.

A good API versioning strategy not only helps to make projects more flexible, but it can also make projects compatible with more tools and protect backwards compatibility. Over the course of the project, it can also help lower the cost of introducing new features and help communicate changes clearly to the users. Since each API version number gets its own release notes, migration guides, and updated documentation, this strategy promotes a trusting relationship with the user.

When should you version an API?

If you're going to introduce major changes to the API, it would be a good idea to consider adopting an API versioning strategy. As different versions become available, users can incrementally opt-in to new features at their own pace. Versioning can also facilitate making security updates without forcing API users into upgrades that would require downtime to incorporate.

In a situation where the API will support multiple client platforms, API versioning will allow the user to stick with their platform's SDK without worrying about updates for other platforms, and that's something we can help with — liblab offers a robust and comprehensive suite of tools to generate SDKs tailored to your API.

When should you not version an API?

Versioning isn't the best solution for every situation, though. Developing a full versioning strategy for minor updates or bug fixes will more likely add confusion than benefits. Also, in situations where there is only one or two users, such as an API that will only be used internally, it's probably more practical to just update both server and client at once. Same goes if you’re introducing a non-breaking or temporary change, or something on a branch that won't impact any clients.

How to do API versioning

If you think API versioning will be a good fit, you need to understand how to adapt API versioning to suit your needs. One of the first things you'll want to consider is how you want to label your versioning. There are a few options:

  1. Semantic Versioning (commonly referred to as SemVer) follows a MAJOR.MINOR.PATCH format. For more information on semantic versioning, semver.org is a good resource. It's helpful for tracking backward-compatible changes, new functionality, and bug fixes. Breaking changes increment the major version number, backward-compatible additions increment the minor version number, and bug fixes increment the patch version number.

  2. Date-based versioning tags make every API version number the date it was released, which might be useful in some situations where a chronological sequence of releases is more relevant than semantic clarity.

  3. Endpoint-based versioning may be helpful in limited situations where the scope of the version will only affect certain endpoints with independent resources.

There isn’t consensus on the “best” approach; it really depends on what information will help you better track the changes made to the API. Analyzing your needs and desired results will help you decide which system will work best for you.

Types of API versioning

Next, you need to decide how the user specifies which API version they want to use. Here are some options:

| Versioning Type | Basics | Example | Positives | Negatives |
| --- | --- | --- | --- | --- |
| URI Versioning | The version number is incorporated into the URL path | http://www.example.com/api/1/products | Easy to understand and implement. Clearly separated API versions | Can become cluttered. Not recommended by REST architecture |
| Query Parameter | The version number is appended as a query parameter on the API endpoint | http://www.example.com/api/products?version=1 | Clear separation of API versions. Easy to implement | Less intuitive for API consumers. Can result in long, cluttered URLs |
| Header Based | The version number is a specific and unique header field | curl -H "Accepts-version: 1.0" http://www.example.com/api/products | Follows REST principles. Keeps the URI focused on the resources | Less intuitive. More effort is needed to check the API request |
| Content Negotiation | The version is based on the representational state or media type | curl -H "Accept: application/vnd.xm.device+json; version=1" http://www.example.com/api/products | Smaller footprint. No need for URI routing rules. Versions resource representations instead of the entire API | Less accessible for testing and exploring via browser |

Again, each of these techniques has different objectives and advantages, so your specific project requirements should determine which technique is best for you.
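To make the first row of the table concrete, here's a minimal sketch of URI versioning in Python using Flask (the routes and payloads are illustrative):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/1/products")
def list_products_v1():
    # Version 1 keeps the original response shape
    return jsonify([{"id": 1, "name": "Widget"}])

@app.route("/api/2/products")
def list_products_v2():
    # Version 2 adds a field without touching version 1's contract
    return jsonify([{"id": 1, "name": "Widget", "in_stock": True}])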

How to build an API versioning strategy

Once you’ve planned out what methods and techniques will best suit your constraints and objectives, you’re ready to formulate your strategy according to API versioning best practices. You’ll need to assess the project scope and define what your versioning policy will be. REST (Representational State Transfer) is a popular API architecture for building web services in which resources are accessed via standard HTTP methods. Using versioning with a REST API allows the developer to add new features to the API, fix bugs, and remove old functionality without breaking anything for the API consumers. If you’re building a REST API, there are a few principles regarding versioning that you might want to keep in mind. Here are some of those recommended API versioning strategies:

1. Communicate changes clearly

The whole point of using a REST API is that there’s no confusion between client and server about where to access resources. That completely breaks down if you haven’t clearly communicated to the API consumers when things change on the server. You’ll need to consider release notes, migration guides, and updated API documentation to keep everyone on the same page. It may even be worth considering a longer timetable to give users enough time to prepare for and implement updates.

2. Use Semantic Versioning

We talked about some of the other options, but semantic versioning is best in line with REST principles. Why? Because REST APIs are stateless; endpoints aren’t affected by outside constraints and function independently from one another. In theory, unless you really need it, they shouldn’t be tied to anything in the real world affecting their output, like the date of the most recent release. Using SemVer isolates the endpoints from anything resembling state even further.

3. Maintain backwards compatibility when possible

REST APIs are uniform and consistent. In an ideal world, there would never be any breaking changes. In reality, that’s difficult to maintain long-term, but always lean towards backwards compatibility. For example, new endpoint parameters should have default values, and new features should get their own new endpoints and their own new version. Removing existing fields from API responses is also frowned upon for this reason, even if you have a good deprecation strategy.

4. Deprecate old versions gradually

What does a good deprecation strategy look like? A clear timeline. Support old versions during the deprecation period and make sure that deprecated endpoints are recorded clearly in the API documentation. Also, be clear with the user. Make sure they’re on the same page about why older versions are being deprecated, what the benefits of upgrading to the new version are, what issues they might face, and how they can solve those issues. Ample support during the transition period will help minimize disruptions and promote trust between the developer and API consumers.

API versioning best practices

Many aspects of your API versioning strategy will be dependent on factors unique to your application, but there are some general guidelines of API versioning best practices to take into consideration.

1. Prioritize the docs

The current state of the entire API should be reflected in comprehensive documentation, customized to the latest API version. Make sure that there are clear instructions on how changes should be introduced with each new version so no user gets confused — you’d be surprised how little friction it takes to make some users jump ship.

2. Keep the line open with the clients

Good communication is key. Understand what your consumers need and how each new version will affect their workflow, not just yours. Establish good channels of communication in advance to inform users of each upcoming change and each new version. Those channels also let you gather feedback from users to understand what their needs are and what their expectations are, and that’ll help you build a roadmap for the future of your API.

3. Plan for security and scalability

While most of API versioning focuses on the functional aspect of the API, security and scalability should also be taken into consideration. As new versions are introduced that protect against security threats, older versions may continue to carry those since-fixed vulnerabilities. And if you build a good API, you’ll eventually see increased usage (both over time and in quick spikes) and larger data volumes. The solution? Bake in automated security checks, vulnerability assessments, and scalable practices from the start. Make security updates and patch releases a priority for every version you support, not just the latest one - including versions that have since been superseded. This is an even bigger area where communication is crucial, since there may even be legal obligations to inform API users of how their data security is being protected in each new update.

4. Test thoroughly

You want to catch as many issues as possible before the API gets to the user. Conduct unit tests, integration tests, and regression tests for each new version. We’ve all been through the frustration of doing an upgrade on one part of a project just to find that we accidentally broke everything else. Thorough testing at each stage of API versioning helps avoid those situations and ensures a reliable product for the user. Automated tools can greatly streamline the process.

How to test API versions

To start, you want to thoroughly test the new API version separately to ensure that it meets all the functional specifications it’s supposed to meet. There are a couple of ways to do this:

1. Unit testing

Unit testing involves testing individual pieces of code. For example, does each endpoint still function as expected? Take an endpoint that just takes in a letter and, if it’s within a certain range of ASCII values, shifts the letter by however many places you specify. Here’s a function that does that:

const shiftLetter = (letter, key, rangeStart, rangeEnd) => {
  const rangeLength = rangeEnd - rangeStart;
  const code = letter.charCodeAt(0);
  if (rangeStart <= code && code < rangeEnd) {
    let n = code - rangeStart + key;
    if (n < 0) n = rangeLength - Math.abs(n) % rangeLength;
    return String.fromCharCode((n % rangeLength) + rangeStart);
  } else return letter;
};

These examples are from an Algolia article about unit testing. If we have standardized our API, we don’t even need to test this over HTTP requests since that part acts predictably. We can just write a little function like this to test this particular unit of code:

const testShiftLetter = () => {
  if (shiftLetter("L", 3, 65, 91) != "O") throw "Assertion error"; // test basic case
  if (shiftLetter("s", 14, 65, 122) != "H") throw "Assertion error"; // test wrap around, and custom ranges
};

All it does is throw an error if the function doesn’t produce the correct result. You can also measure the performance of individual units here. Each new API version requires that you rerun these tests to make sure each individual piece of the API still works as you expect, so you should build this into your automated workflow, perhaps using a tool like GitStream.

2. Integration testing

Integration testing is very similar to unit testing (some don’t even make the distinction). The difference is that now we’re testing how units of code work together to produce the right result. Here's a more complex example from that same article:

const testCaesar = () => {
  if (caesar("HELLO", 1) != "IFMMP") throw "Assertion error"; // test basic case
  if (caesar(caesar("DECRYPTED TEXT", 19), -19) != "DECRYPTED TEXT") throw "Assertion error"; // test negative keys for decryption
};

See how it tests expected output even in edge cases?

3. System testing

The last type of testing involves using an application built with the API, testing how everything works together. This is harder to implement, but since you’ve built such great documentation and migration guides for each new version, you likely have demos built with your API that you can test with.

How can liblab help with API versioning

One sticking point a lot of developers have with API versioning is how it interacts with the various SDKs that actually consume the API. That’s where we come in — liblab can analyze your spec and generate SDKs tailored to the needs of your API. Trying to support multiple API versions while making sure clients can abstract away API complexities and maintain consistent interfaces is usually a nightmare, but SDKs let you version APIs far more gracefully. liblab’s user-friendly controls let you automatically generate flexible SDKs that include all the necessary components right out of the box.

Conclusion

It may seem daunting to consider all these factors at the beginning of a project, but the time and effort now will pay dividends through the entire lifecycle of the project. If you're in a situation where it makes sense to create an API versioning strategy, the hard work right now will definitely be worth it! Thoughtful planning and implementation of best practices will result in robust, scalable APIs and ensure long-term stability and adaptability. It's important to remember that software development is an evolving landscape, so we as devs have to keep up to date with improved best practices and new methods. Doing that puts you well on your way towards creating APIs with smooth transitions between versions, enhancing the end user experience, and building strong relationships with satisfied customers.

← Back to the liblab blog

TL;DR - liblab can generate dev containers for your SDKs so you can start using them instantly. See our docs for details.

As software developers, our productivity is constantly increasing - with better tools, and even AI powered coding to allow us to deliver faster, and focus on building better solutions to more complex problems. The downside of this is the initial setup - getting our tools configured and our environments ready can be a big time sink. Anyone who has ever joined a new company with poor on-boarding documentation can attest to this! Days or even weeks trying to find the right versions of tools, and getting them configured correctly. It's almost a rite of passage for a new developer to re-write the onboarding document to add all the new tools and setup needed to get started.

This is something we see at liblab - our customers generate SDKs to increase their productivity, or the productivity of their customers, but validating each SDK means setting up environments to run TypeScript, Java, Python, C#, Go, and more. Although this is a one-time cost, we decided to reduce this setup time so you can test out your generated SDKs in seconds - using dev containers!

What is a dev container?

A development container, or dev container, is a pre-configured, isolated development environment that you can run locally or in the cloud, inside a container. Containers come with all the tools you need pre-installed, your code, and any configuration you need to get started, such as building or installing your code. They were designed to solve the setup problem as well as make deployments reliable - set up your container once for a particular project, and everyone can share that setup. If the setup changes, the dev container changes, and everyone gets the new setup. It's also the closest thing we have to a "works on my machine" solution - if it works in the dev container, it will work for everyone. It's literally like shipping your machine to your customers!

Running dev containers

Dev containers can be run locally through your favorite IDE, such as Visual Studio Code or IntelliJ, as long as you have container tooling installed, such as Docker. You can also run containers in the cloud, using services like GitHub Codespaces.

For example, you can create a dev container for Python development using one of your projects that is based upon a defined version of Linux, has Python 3.11 installed, is configured to use the Python Visual Studio Code extension, and will install the required pip packages from your project's requirements.txt file when the container is created. As soon as you open this in Visual Studio Code, it will spin up the container, make sure the Python extension is installed, and install your pip packages - you are instantly ready to start coding.

Dev containers isolation

Because dev containers are just that - containers - they run isolated from your local machine, so you don't need to worry about conflicting tool versions or other clashes. What is installed in the container lives only in the container, and what you have locally is not available inside it. This means you can have multiple dev containers for different projects, and they won't conflict with each other. For example, you could have Python 3.11 installed locally, have one project in a dev container using Python 3.10, and another for legacy work using Python 2! Each will be isolated and have no idea about the others.

How are dev containers set up?

A dev container is defined by having a folder called .devcontainer in the root of your project. This folder contains a devcontainer.json file, which defines the container image to use, and any configuration you need to run your code. This devcontainer.json file can reference a pre-defined environment, such as Python, or NodeJS with TypeScript, or you can define your own environment using a Dockerfile.

.
└── .devcontainer/
    ├── devcontainer.json
    └── Dockerfile

In the devcontainer.json file, you can also define any extensions you want to install in your IDE. By having the configuration in one or more files, these dev containers are instantly reproducible - check them into source code control and when someone else clones your repo and opens the folder, they will instantly get exactly the same environment.
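For example, a minimal devcontainer.json for a Python project might look like this - a sketch using one of Microsoft's standard dev container images; adjust the image, extensions, and post-create command to your project:

{
  "name": "Python SDK",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postCreateCommand": "pip install -r requirements.txt"
}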

You can read more on configuring dev containers in the dev container documentation.

How can I use dev containers with my generated SDK?

liblab can be configured to create the dev container configuration files for your SDK. This is done by an option in your configuration file:

{
  ...
  "customizations": {
    "devContainer": true
  },
  ...
}

When you run the SDK generation process using liblab build, the generated SDK folder will come with the .devcontainer folder all ready to go. The container will be configured to not only install the relevant language tooling, but your SDK will also be installed ready to be used.

Use your dev container to test your SDK

To test your SDK in a dev container, make sure you have Docker Desktop (or another Docker-compliant container tool) running, and open the SDK folder for your language of choice in Visual Studio Code. You will be prompted to open the folder in a container - select Reopen in Container and your dev container will be created. This may take a few minutes the first time, but subsequent launches will be much faster.

The VS Code reopen in container dialog

Let's use the dev container created with a Python SDK as an example. In this example, I'm going to use the Python SDK for our llama store API sample. Open this SDK in VS Code, or even open it in a codespace in GitHub.

Once the container is opened, a script is run to install the SDK dependencies, build the Python SDK as a wheel, then install this locally. There's no need to use a virtual environment, as the container is isolated from your local machine, so the installed wheel is only available in the container.

The output from the terminal showing the llama store SDK installed

As well as installing the SDK, the dev container will also install Pylance and the Python extension for Visual Studio Code, so VS Code will be fully configured for Python development. You can see this in the extensions panel:

The VS Code extensions panel with PyLance and the Python language server installed

The SDK is now available from any Python code you want to run. Every SDK comes with a simple example - liblab takes the default get method and creates a small sample using it, which you can find in the examples/sample.py file. If you open this file, you will see that VS Code knows about the SDK, with IntelliSense, documentation, and code completion available:

The sample.py file with both code completion and documentation showing
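As a rough sketch, the generated sample looks something like this - the exact class, method, and package names depend on your spec and configuration:

from llamastore import Llamastore  # hypothetical generated package name

sdk = Llamastore(access_token="my-access-token")

# Call the default GET endpoint - here, listing llamas from the llama store API
llamas = sdk.llama_service.get_llamas()
print(llamas)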

It's now much easier to play with the SDK and build out code samples or test projects to validate your SDK before you publish it.

Conclusion

Dev containers are a great way to test out a new SDK. liblab can generate the dev container configuration for your SDK, so you can get started instantly. To find out more, check out our documentation.

← Back to the liblab blog

It's just a simple fact of life that more work (and better work) gets done when everything is well-structured and organized. That's why, for example, professional chefs keep clean kitchens, precise timings, and ordered hierarchies of responsibilities.

A chef building a plate in an organized kitchen.

It's the same reason why, if you regularly work at a desk, you probably make some effort to keep it organized (or at least you want to). Developers especially — working in a strictly-structured world of math and logic — love straightforward direction in the form of standards.

Fortunately, the charging one has been solved now that we've all standardized on mini-USB. Or is it micro-USB?

Preview

This is why we need standardized Application Programming Interfaces (APIs): the more our API design reflects the consistency developers are expecting, the more productive they'll be with it and the more valuable it'll be to the devs and the API provider.

When we're done here, you'll be able to answer these three questions:

  • What is API standardization and what does it look like in practice?
  • How do standardized APIs actually help in general and in my industry?
  • What can help lower the API standardization barrier-to-entry?

What is API Standardization?

You know what an API is: it's just the layer of glue between two pieces of code. Technically, it applies to almost every communication between modules inside an application, as well as to every external communication to some other program. That means that every time you have two pieces of code talk to each other, they have to have agreed in advance on what kind of communication they're going to use.

For example, maybe they'll use the JSON format, and they'll send messages to each other like this:

{
  "result": "success",
  "data": {
    "message": "Hello from across the API!"
  }
}

Now, if this API is completely inside your application, you might be fine with just sending little packets of JSON like this between pieces of code. But the moment you start dealing with more complexity, you'll realize this approach to APIs doesn't scale well. Think about all the possible edge cases:

  • Will the response JSON always contain a result key?
  • What happens if the operation wasn't a success? Where will details about the error be stored?
  • Is the body of the response always inside data? Or is it sometimes at the root level of the object?

You probably have experience building an API like this. Maybe you've directly worked through these questions with your API, or maybe not, but if you can think of definitive answers to them, then you're delving into API standardization. You're setting rules for how the pieces of code talk to each other so that it'll be completely consistent across your app.
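For example, one set of rules might pin down a response envelope like this - an illustrative shape, not a prescribed standard:

{
  "result": "error",
  "error": {
    "code": "NOT_FOUND",
    "message": "No record matches that ID"
  },
  "data": null
}

Every response carries a result key, error details always live under error, and the payload always sits under data, even when it's empty.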

A definition

With that example in mind, now we have enough information to write a more succinct definition:

👉 API standardization (n): the process of creating and implementing consistent rules for how two pieces of code talk to each other over an API (Application Programming Interface)

The definition of API standardization.

Those consistent rules might include coming up with answers to questions such as:

  • In what format will the data be sent, both in the request and in the response?
  • What naming convention will we use? Are we sticking with camelCase, snake_case, or something else for variable, resource, function, and endpoint names?
  • How do the standards in our industry affect our ruleset?

You're a talented dev though; you know APIs go way beyond just two files sending tiny JSON packets back and forth.

API definitions over the internet

A company (called the service provider) might let you and your program interact with their programs over the Internet. The HTTP protocol that governs this already gives us some standardization to work with, but the body of each request is almost completely unsupervised. Imagine the chaos if every time we wanted to interact with, say, the Google Maps API, we could send our requests in whatever form we wanted! It'd be almost impossible for the service provider to pick out the needed data from our request, and even if it could, it'd probably be extremely slow. Let's focus on this use case from here on out to figure out why API standardization isn't just a nice-to-have, but a crucial element of modern, efficient Information Technology.

Importance of API Standardization

What are the benefits of having standardized APIs for external services? Here are a few reasons:

1. It gets everybody collaborating

When API consumers can trust that it's straightforward to interact with the service and build what they want without too much unnecessary hassle, they'll be motivated to go further out of their way to work with others to solve problems.

2. It keeps the bugs to a minimum

It's way easier to avoid unintended functionality when security and predictability are baked right into the messages servers and clients send to each other. This has the effect of increasing the expected uptime of the API and reducing interruptions in service provision.

3. It promotes scalable innovation on the provider's side

When the service provider already has established the input and output of a particular endpoint or function, much of the grunt work involved in adding new features goes away. Standardized APIs open up tons of future prospects moving forward because the developers building the API can afford to be more creative.

4. It simplifies documentation and improves developer experience

Developers already keep track of enough, so if the learning curve is too high to interact with an API, they'll just avoid it. API standardization simplifies docs because we can remove all of the things that the rules make too obvious or too redundant to explain again. Standardization means developers can more easily find what they're looking for in the docs, flattening the learning curve and making the API more approachable to a wider range of users.

5. It reduces long-term technical debt in actual implementations

If the communication both between modules and to outer systems is standardized with an API and consistent rules, it removes the complexity that would otherwise need to be managed in the future, which helps to reduce operational costs that would come with it.

API Standardization Use Cases

Healthcare

Standardized Application Programming Interfaces are incredibly important in healthcare because data scattered among many systems affects how doctors and pharmacists work. For example, imagine you moved from South Carolina to Louisiana, so your providers, insurance company, pharmacies, and other healthcare connections all changed. What if, because the providers in South Carolina didn't have an easy, standardized API to upload your current prescriptions to a centralized database, your new healthcare providers in Louisiana prescribed you a conflicting medicine and damaged your health?

Smart API design prevents potentially life-threatening gaps in data sharing, and that's one of the biggest reasons why API standardization has gained industry-wide adoption in healthcare.

Government

Public services are notoriously difficult to use. Just the word “bureaucracy” comes with negative connotations because workers in different departments have the reputation of (to put it kindly) preferring to keep necessary information to themselves. In contrast, APIs are designed to bring people together through data and promote efficient information technology. No wonder standardized APIs have been a digital transformation in the government sector.

For the United States government, there's an official standard that requires several main objectives to be fulfilled:

  1. Make it available to everyone by putting new APIs in the directory.
  2. Use the API outer layer common to all government APIs for analytics, authentication, and security.
  3. Get versioning right. Wondering how? See our recent article on what makes API versioning so necessary and how to do it.
  4. Build public docs. Thankfully, this is way easier because we're working with standardized APIs.
  5. Have a feedback mechanism where users can report issues and ask questions. Feedback builds trust and also helps retain API users who are having trouble and are ready to quit.
  6. Use OpenAPI spec files to make it clear to users how to access the APIs, and make sure these OpenAPI specs don't suck!
  7. Use the normal endpoint design. Creativity is usually a positive, but once it starts causing mass confusion, it's not worth it.

You're not bound to these rules, but it might be worth implementing them at your company because they're a great way to make sure the principles of API standardization come through in the actual implementations you build.

Financial Services

Financial service technology (a.k.a. “fintech”) companies love APIs and standardization. There are a few particularly strong reasons:

First, the modern shift toward open banking means customers now expect financial institutions to easily talk to each other. For example, in some countries you can set up investment accounts to pull directly from your bank. As a modern user would expect, there's no copying and pasting of routing numbers, no hacky tiny deposits, and no three-day wait, because the institutions have standardized APIs (probably served via Plaid) for connecting the two accounts.

Plaid in action

Second, many smaller companies don't have the infrastructure to process payment data themselves. A typical small business would find it way easier to use something like Stripe, a company that created API definitions for payment methods, invoices, charges, refunds, and every other piece of the payment process. That means that by using their API, we can get from hypothetical ideas to actual implementations really quickly.

Third, like we mentioned earlier, standardized APIs have plenty of security benefits too. That's especially great for fintech companies because they deal with extreme amounts of regulation to prevent data leaks and keep their customers' money and business data safe.

Finally, like any business, they want to distinguish themselves from the competition. By adopting the principles of standardized APIs, they can focus on innovation and coming up with new ideas that'll grow the business.

How liblab Can Help With API Standardization

One of the biggest benefits of creating standard API definitions for endpoints and resources is that it allows you to automate building Software Development Kits (SDKs) out of your API.

That's where liblab comes in. Before, a small change in how your API worked meant you had to update every piece of code that used the API. But by putting liblab into your workflow, you can automatically generate the language-specific libraries through which your customers and clients will use your tool, straight from the API design file written to the OpenAPI spec. This means that you won't be bothered by the language-specific bugs and intricacies that would normally drag down a project that others depend on — instead, you'll get to spend time on what strengthens your future prospects, like new API features and performance optimizations.

A pitch for standardized Application Programming Interfaces

Want a quick pitch to bring to the folks in charge of your APIs? Here's a summary of what we've learned so far:

  • APIs need consistency to be useful.
  • API standardization helps facilitate developer collaboration, simplify maintenance, promote faster innovation, make the developer experience more enjoyable, and limit technical debt.
  • Many industries needed API standardization, and seeing how they implemented it can help you decide how to adopt it at your company and improve your own future prospects.
  • liblab's experts can help you through it. Benefit from our expertise by signing up for liblab.

Now that you're an expert in API standardization, set yourself up for long-term success by partnering up with liblab and automating SDK generation based on your standardized Application Programming Interfaces.