
An SDK is a set of software development tools that enable developers to create applications for a specific software platform or framework. In the context of APIs, an SDK typically includes libraries, code samples, and documentation that simplify the process of integrating the API into an application.

Offering your developers a good SDK will increase the adoption of your APIs and provide guardrails for how they are accessed.

Why is adding an SDK to your API beneficial?

Here are some of the reasons why adding an SDK to your API is beneficial:

Reduce Development Time and Complexity

By providing an SDK, you can significantly reduce the time and effort required to integrate your API into an application. The SDK provides pre-built code libraries that developers can use to quickly and easily interact with your API, reducing the amount of time they need to spend coding and debugging. If you provide your developers only with an API, they will need to write things like auth logic, pagination, error handling, and other complex logic themselves. This can be built into your SDK so developers can implement your API with a single line of code.

For example, instead of storing pagination tokens and asking your developers to implement their own logic, they can simply use something like mySDK.getData(skip: 10).
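To make this concrete, here is a minimal TypeScript sketch of what such a wrapper could look like. The MySDK class, endpoint, and parameter names are hypothetical, purely for illustration:

// Hypothetical SDK wrapper: the caller passes a simple offset, and the
// SDK handles the URL, auth header, and error handling internally.
class MySDK {
  constructor(
    private apiKey: string,
    private baseUrl = 'https://api.example.com'
  ) {}

  async getData(options: { skip?: number } = {}): Promise<unknown[]> {
    const url = new URL('/data', this.baseUrl);
    if (options.skip) url.searchParams.set('offset', String(options.skip));

    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${this.apiKey}` }, // auth handled for the caller
    });
    if (!response.ok) {
      throw new Error(`API request failed with status ${response.status}`);
    }
    return (await response.json()).items;
  }
}

// One line for the consumer:
// const data = await new MySDK('my-key').getData({ skip: 10 });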

Improved Developer Experience

When you provide an SDK, you're not just providing code libraries; you're also providing documentation and code samples that help developers understand how to use your API effectively. This makes it easier for developers to integrate your API into their applications and reduces the likelihood of errors or issues arising during development. IntelliSense and code suggestions will improve the DevEx of your API by giving your developers information about the available calls they can make. Built-in documentation guides developers on what to expect (request/response) in each API call.

(Screenshot: SDK autocompletion in an IDE)
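As a sketch of how that works, a TypeScript SDK method annotated with doc comments gives editors everything they need for autocompletion and inline documentation (all names here are illustrative):

interface User {
  id: string;
  name: string;
}

class UsersClient {
  /**
   * Fetches a single user by ID.
   *
   * @param id - The unique identifier of the user.
   * @returns The user's profile; throws if the user does not exist.
   */
  async getUser(id: string): Promise<User> {
    // IntelliSense surfaces this signature and doc comment wherever
    // the SDK consumer types `client.getUser(`.
    const response = await fetch(`https://api.example.com/users/${id}`);
    if (!response.ok) throw new Error(`User ${id} not found`);
    return response.json();
  }
}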

Increased Adoption

When you make it easier for developers to use your API, you increase the likelihood that they actually will. Offering SDKs helps drive adoption of your API and increases its overall usage.

Easier Maintenance and Upgrades

When you provide an SDK, you have more control over how developers interact with your API. This makes it easier to make changes or upgrades to your API without breaking existing applications that use it. By providing an SDK, you can ensure that developers are using your API in a consistent and supported way, which can reduce the risk of issues arising.
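As an illustration, suppose a hypothetical v2 of your API renames a response field. The SDK can absorb the change so existing applications keep working unchanged (the field and function names here are made up):

interface Profile {
  userName: string;
}

// The SDK maps the new wire format to the stable shape it has always
// exposed, so consumers never see the breaking rename.
async function getProfile(id: string): Promise<Profile> {
  const response = await fetch(`https://api.example.com/v2/users/${id}`);
  const raw = await response.json();
  return { userName: raw.username ?? raw.user_name };
}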

Better Security

An SDK can also help to improve the security of your API. By providing pre-built code libraries, you can ensure that developers are using secure coding practices when interacting with your API. This reduces the likelihood of security vulnerabilities arising due to coding errors or mistakes. You can also catch API errors in the SDK and handle them with client-side logic; retries, for example, let the SDK throttle access to the API in the event of errors.
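Here is a small TypeScript sketch of retry logic with exponential backoff that an SDK could bake in (the retry count and delays are arbitrary choices):

// Retry transient failures with exponential backoff instead of
// hammering the API or surfacing raw errors to the caller.
async function fetchWithRetry(url: string, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url);
    // Only retry on rate limiting and server errors.
    if (response.status !== 429 && response.status < 500) return response;
    if (attempt === maxRetries) return response;
    // Exponential backoff: 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
}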

Conclusion

Adding an SDK to your API can provide significant benefits for both developers and API providers. It simplifies the integration process, improves the developer experience, increases adoption, makes maintenance and upgrades easier, and improves security. If you're developing an API, consider providing an SDK to help developers integrate your API more easily and effectively.

Building true idiomatic SDKs in multiple languages takes time and knowledge. Here at liblab, we offer it as a simple, fast, straightforward service. Give liblab a try and see how we can automatically build SDKs for your API in seconds, not months.


In 2016 GitHub released the first large-scale, public GraphQL API, proving to everyone that GraphQL is here to stay. Their API is robust and constantly evolving, and it provides developers the tools to “create integrations, retrieve data, and automate your workflows”.

Later, in 2018, GitHub came out with GitHub Actions — a tool to help developers do what they love most: automate things. With GitHub Actions you can automate your CI/CD and general project management using YAML files.

You can use GitHub’s GraphQL API in the GitHub Actions workflow files to create the automation of your dreams.

A Quick Overview of GraphQL

GraphQL was originally developed by Facebook as a way of cutting down response payloads. With a REST GET request, you get back everything that an entry has to offer. Let’s say we have a database of customers where a customer entry might look like the following:

"Customer" : {
"id" : "101010",
"first_name" : "Bjarne",
"last_name" : "Stroustrup",
"email" : "[email protected]",
"City" : "Austin",
"State" : "Texas",
...
}

Now let’s say we have the customer’s id and we just want their first name so we can print “Hello, Bjarne!”. Using a REST call will return all the fields for a customer when we only need a single field. This is where GraphQL comes in. With GraphQL you create a query specifying the fields you need and only those fields are returned.

Let’s look at a simple GraphQL query that calls on the GitHub GraphQL API:

query FindRepoID {
  repository(owner:"YOUR_ORGANIZATION_NAME", name:"REPOSITORY_NAME") {
    id
  }
}

The query calls on the repository query with your organization name and the repository name, and returns the id of that repository.

Here is an example response for this query:

{
  "data": {
    "repository": {
      "id": "R_ktHp3iGA"
    }
  }
}

The repository object has 119 fields, but we only asked for one and therefore only received one. Had we used a REST call we would have received all 119 fields.
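Nothing about this is specific to workflows; you can run the same query from any HTTP client. Here is a minimal TypeScript sketch that POSTs it to GitHub's GraphQL endpoint, assuming you supply a personal access token:

const query = `query FindRepoID {
  repository(owner: "YOUR_ORGANIZATION_NAME", name: "REPOSITORY_NAME") {
    id
  }
}`;

const response = await fetch('https://api.github.com/graphql', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.GITHUB_TOKEN}`, // personal access token
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ query }),
});

const { data } = await response.json();
console.log(data.repository.id); // e.g. "R_ktHp3iGA"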

The Basics of GitHub Actions

To create a workflow for a repository we need to create a workflow YAML file in the .github/workflows directory. If your project doesn’t have those directories, go ahead and create them.

Let’s look at a very basic workflow.yaml file:

name: My First Action ## name of our workflow
on: create ## triggered when a branch or tag is created
jobs: ## jobs to run when triggered
  echo-branch-name: ## name of a job
    runs-on: ubuntu-latest ## which machine the job runs on
    steps: ## steps to run
      - name: Step One ## name of a step
        run: | ## commands to run
          echo '${{ github.event.ref }}'

A workflow is triggered by events (you can see a list of events that trigger a workflow here). In this case our workflow is triggered by a branch or a tag being created in the workflow’s repository. Once the workflow is triggered, our jobs start running on the type of machine we chose. In this case we simply echo the name of the branch from the ${{ github.event.ref }} variable.

Connecting Branches With Issues Using GitHub Actions and GitHub GraphQL API

First, let’s decide on a branch naming system. For this example we decide that for every Issue there is a branch with a name that starts with the Issue’s number. For example, an Issue numbered #42 will have a branch with a name that starts with #42, like “#42/awesome_branch”.

We’ll start off by getting the Issue number from the branch’s name.

name: New Branch Created

on:
  create

jobs:
  On-New-Branch:
    runs-on: ubuntu-latest
    steps:
      - name: Get Issue Number
        run: |
          branch_name=`echo '${{ github.event.ref }}'`
          issue_num=`echo ${branch_name#*#} | egrep -o '^[^/]+'`


For this example, if a branch is created without an issue number at the start of its name, we will ignore it and the workflow will not continue. To do so, we’ll create output variables that the next job can use to determine whether to execute.

name: New Branch Created

on:
  create

jobs:
  Check-Branch-Name:
    runs-on: ubuntu-latest
    outputs:
      issue_num: ${{ steps.step1.outputs.issue_num }}
      tracked: ${{ steps.step1.outputs.tracked }}
    steps:
      - name: Get Created Issue Number
        id: step1
        run: |
          branch_name=`echo '${{ github.event.ref }}'`
          issue_num=`echo ${branch_name#*#} | egrep -o '^[^/]+'`
          re='^[0-9]+$'
          if ! [[ $issue_num =~ $re ]] ; then
            echo "tracked=false" >> "$GITHUB_OUTPUT"
          else
            echo "tracked=true" >> "$GITHUB_OUTPUT"
          fi
          echo "issue_num=$issue_num" >> "$GITHUB_OUTPUT"

We check if issue_num is actually a number: if it is, we set an output variable named ‘tracked’ to ‘true’; if it is not, we set it to ‘false’. To use the issue number in later steps, we also save it in an output variable named ‘issue_num’.

The next job will need those outputs to run:

name: New Branch Created

on:
  create

jobs:
  Check-Branch-Name:
    runs-on: ubuntu-latest
    outputs:
      issue_num: ${{ steps.step1.outputs.issue_num }}
      tracked: ${{ steps.step1.outputs.tracked }}
    steps:
      - name: Get Created Issue Number
        id: step1
        run: |
          branch_name=`echo '${{ github.event.ref }}'`
          issue_num=`echo ${branch_name#*#} | egrep -o '^[^/]+'`
          re='^[0-9]+$'
          if ! [[ $issue_num =~ $re ]] ; then
            echo "tracked=false" >> "$GITHUB_OUTPUT"
          else
            echo "tracked=true" >> "$GITHUB_OUTPUT"
          fi
          echo "issue_num=$issue_num" >> "$GITHUB_OUTPUT"

  Add-Linked-Issue-To-Project:
    needs: Check-Branch-Name
    if: needs.Check-Branch-Name.outputs.tracked == 'true'
    env:
      ISSUE_NUM: ${{ needs.Check-Branch-Name.outputs.issue_num }}
    runs-on: ubuntu-latest
    steps:
      - name: Get Issue ${{ env.ISSUE_NUM }} Project Item ID
        run: |

We specify the dependency on the first job with ‘needs’, and check whether the job should execute using an if statement on the output variable ‘tracked’ from the previous job. We also make use of env variables here. Environment variables store information that you want to reference in your workflow; they can be set for an entire workflow, for a specific job, or for a specific step. To access the contents of an env variable, use ${{ env.NAME_OF_VARIABLE }}.

To make calls to the GraphQL API we need a GitHub authorization bearer token for our repository. To create one, go through these steps, copy your generated token, and save it under your repository’s settings as an Actions secret (repository Settings → Secrets → Actions → New repository secret). Name it PERSONAL_TOKEN to match the example. We’re also going to need our organization name and the repository’s name set as our workflow’s environment variables:

name: New Branch Created

on:
  create

env:
  GH_TOKEN: ${{ secrets.PERSONAL_TOKEN }}
  ORGANIZATION: your-organization-name
  REPO: the-repository-name

jobs:
  Check-Branch-Name:
    runs-on: ubuntu-latest
    outputs:
      issue_num: ${{ steps.step1.outputs.issue_num }}
      tracked: ${{ steps.step1.outputs.tracked }}
    steps:
      - name: Get Created Issue Number
        id: step1
        run: |
          branch_name=`echo '${{ github.event.ref }}'`
          issue_num=`echo ${branch_name#*#} | egrep -o '^[^/]+'`
          re='^[0-9]+$'
          if ! [[ $issue_num =~ $re ]] ; then
            echo "tracked=false" >> "$GITHUB_OUTPUT"
          else
            echo "tracked=true" >> "$GITHUB_OUTPUT"
          fi
          echo "issue_num=$issue_num" >> "$GITHUB_OUTPUT"

  Add-Linked-Issue-To-Project:
    needs: Check-Branch-Name
    if: needs.Check-Branch-Name.outputs.tracked == 'true'
    env:
      ISSUE_NUM: ${{ needs.Check-Branch-Name.outputs.issue_num }}
    runs-on: ubuntu-latest
    steps:
      - name: Get Issue ${{ env.ISSUE_NUM }} ID and State
        run: |
          gh api graphql -f query='query FindIssueID {
            repository(owner:"${{ env.ORGANIZATION }}", name:"${{ env.REPO }}") {
              issue(number:${{ env.ISSUE_NUM }}) {
                id
                state
              }
            }
          }' > project_data.json
          echo "ISSUE_ID=$(jq -r '.data.repository.issue.id' project_data.json)" >> "$GITHUB_ENV"
          echo "ISSUE_STATE=$(jq -r '.data.repository.issue.state' project_data.json)" >> "$GITHUB_ENV"

We created a step called “Get Issue ${{ env.ISSUE_NUM }} ID and State”. In this step we run a GraphQL query and save its result into project_data.json. The query finds a repository by its organization name and repo name, then looks for an issue by its number and returns the issue’s id and state. We save the issue id and issue state from the result into env variables called ISSUE_ID and ISSUE_STATE for later use (jq’s -r flag strips the quotes from the returned JSON strings).

We use GitHub’s “gh api” to make an authenticated HTTP request to the GitHub API.

gh api <endpoint> [flags]

Our endpoint is the GraphQL API, and the flag -f adds a string parameter in key=value format so the key ‘query’ is set to be our raw GraphQL query FindIssueID.

name: New Branch Created

on:
  create

env:
  GH_TOKEN: ${{ secrets.PERSONAL_TOKEN }}
  ORGANIZATION: your-organization-name
  REPO: the-repository-name

jobs:
  Check-Branch-Name:
    runs-on: ubuntu-latest
    outputs:
      issue_num: ${{ steps.step1.outputs.issue_num }}
      tracked: ${{ steps.step1.outputs.tracked }}
    steps:
      - name: Get Created Issue Number
        id: step1
        run: |
          branch_name=`echo '${{ github.event.ref }}'`
          issue_num=`echo ${branch_name#*#} | egrep -o '^[^/]+'`
          re='^[0-9]+$'
          if ! [[ $issue_num =~ $re ]] ; then
            echo "tracked=false" >> "$GITHUB_OUTPUT"
          else
            echo "tracked=true" >> "$GITHUB_OUTPUT"
          fi
          echo "issue_num=$issue_num" >> "$GITHUB_OUTPUT"

  Add-Linked-Issue-To-Project:
    needs: Check-Branch-Name
    if: needs.Check-Branch-Name.outputs.tracked == 'true'
    env:
      ISSUE_NUM: ${{ needs.Check-Branch-Name.outputs.issue_num }}
    runs-on: ubuntu-latest
    steps:
      - name: Get Issue ${{ env.ISSUE_NUM }} ID and State
        run: |
          gh api graphql -f query='query FindIssueID {
            repository(owner:"${{ env.ORGANIZATION }}", name:"${{ env.REPO }}") {
              issue(number:${{ env.ISSUE_NUM }}) {
                id
                state
              }
            }
          }' > project_data.json
          echo "ISSUE_ID=$(jq -r '.data.repository.issue.id' project_data.json)" >> "$GITHUB_ENV"
          echo "ISSUE_STATE=$(jq -r '.data.repository.issue.state' project_data.json)" >> "$GITHUB_ENV"

      - name: Reopen Issue
        if: env.ISSUE_STATE == 'CLOSED'
        run: |
          gh api graphql -f query='
          mutation {
            reopenIssue(input: { issueId: "${{ env.ISSUE_ID }}" }) {
              issue {
                title
                state
              }
            }
          }'

We added “Reopen Issue”, a step that runs a mutation if the issue’s state is CLOSED.

We run this query in case we closed the issue in the past and want to reopen it once a branch for it exists, implying that the fix was not done.

We now have an easy-to-read, quick and simple workflow file that uses GitHub Actions with GraphQL queries to reopen an issue when a branch is created for it. You can modify this basic example to your own needs: run a job when a PR is created, when an issue is opened, every time a commit is made to a branch, or on any event trigger you desire. You now have the tools to create your own automation.

More Advanced Examples

Here are some more advanced examples of using the GitHub GraphQL API with GitHub Actions:

  • This workflow moves a newly created issue into a specific column in a project.
  • This workflow connects branches to issues by branch name. When a branch is created for an issue, the issue will move to a specific column in a project. If the issue was closed prior to the creation of the branch, the issue will reopen.
  • This workflow connects PRs to issues by branch name. When a PR is created/closed/merged the issue will move to a specific column in a project. Once an issue is moved to the ‘Done’ column the issue will be closed.

Not Convinced?

Not ready to embark on the GraphQL adventure? Looking for something out of the box to automate your workflows? Looking to mix it all up? Check out GitHub’s REST API and GitHub Marketplace. You can use the REST API instead of the GraphQL API, or you can integrate one of the apps or actions available in the GitHub Marketplace into your workflows. Whichever way you choose, automating your workflows will save you valuable time that you can now use for more coding! Woohoo!

Tips

Trying to use a Marketplace app or action but getting a “Resource not accessible by integration” error? Most Marketplace apps and actions use the default secrets.GITHUB_TOKEN, but this token doesn’t come with read/write access to all scopes. For example, it doesn’t have write permissions for items in Projects. Set up a personal access token like we did in the examples and use it instead of the GITHUB_TOKEN (make sure you give it the correct permissions).

If you’re using the GitHub GraphQL Documentation but having a hard time understanding and making queries work, take a look at the GitHub repository on Apollo GraphQL. The Documentation there is clear and they have an Explorer section where you can try out queries and see their responses.

Our goal here at liblab is to make the lives of developers easier, and that includes our own developers. If we can automate it and save time, we will — leaving our developers all the time they need to focus on the tasks that matter. If you’re looking to save time and automate your SDK creation, if you want flawless SDKs in multiple languages directly from your API, and if you care about making developers’ lives easier, check out what liblab has to offer and contact us to start your SDK journey.


This post will take you through the steps to write files to GitHub with Octokit and TypeScript.

Install Octokit

To get started we are going to install Octokit.

npm install @octokit/rest

Create the code

Then we can create our TypeScript entry point, in this case src/index.ts:

import { Octokit } from '@octokit/rest';

const client = new Octokit({
  auth: '<AUTH TOKEN>'
});

We instantiate the Octokit constructor and create a new client. We will need to replace the <AUTH TOKEN> with a personal access token from GitHub. Check out the guide to getting yourself a personal access token from GitHub.

Now that we have our client set up, let's look at how we can create files and commit them to a repository. In this tutorial I am going to be writing to an existing repo. This will work with any repo, public or private, that you have write access to.

Just like using git or the GitHub desktop application we need to do a couple of things to add a file to a repository.

  1. Generate a tree
  2. Commit files to the tree
  3. Push the files

Generate a tree

To create a tree we need to get the latest commits. We will use the repos.listCommits method and we will pass an owner and repo argument. owner is the username or name of the organization the repository belongs to and repo is the name of the repository.

const commits = await client.repos.listCommits({
  owner: "<USER OR ORGANIZATION NAME>",
  repo: "<REPOSITORY NAME>",
});

We now want to take that list of commits, get the first item from it, and retrieve its SHA hash. This will be used to tell the tree where in the history our commits should go. We can store the commit hash in a variable:

const commitSHA = commits.data[0].sha;

Add files to the tree

Now that we have our latest commit hash we can begin constructing our tree. We are going to pass the files we want to update or create to the tree construction method. In this case I will be representing the files I want to add as an array of objects. In my example I will be adding 2 files: test.md, which will hold the string Hello World, and time.txt, which will store the latest timestamp.

const files = [
  {
    name: "test.md",
    contents: "Hello World"
  },
  {
    name: "time.txt",
    contents: new Date().toString()
  }
];

Octokit will want the files in a specific format:

interface File {
  path: string;
  mode: '100644' | '100755' | '040000' | '160000' | '120000';
  type: 'commit' | 'tree' | 'blob';
  sha?: string | null;
  content: string;
}

There are a couple of properties in this interface.

  • path - Where in the repository the file should be stored.
  • mode - This is a code that represents what kind of file we are adding. Here is a quick run down:
    • File = '100644'
    • ExecutableFile = '100755'
    • Directory = '040000'
    • Submodule = '160000'
    • Symlink = '120000'
  • type - The type of tree entry we are creating. For a file with content this is blob.
  • sha - The last known hash of the file if you plan on overwriting it (optional).
  • content - Whatever should be in the file

We can use map to transform our file array into the proper format:

const commitableFiles: File[] = files.map(({ name, contents }) => {
  return {
    path: name,
    mode: '100644',
    type: 'blob',
    content: contents
  };
});

Now that we have an array of all the files we want to commit we will pass them to the createTree() method. You can think of this as adding your files in git.

const {
  data: { sha: currentTreeSHA },
} = await client.git.createTree({
  owner: "<USER OR ORGANIZATION NAME>",
  repo: "<REPOSITORY NAME>",
  tree: commitableFiles,
  base_tree: commitSHA,
});

Afterwards we have the variable currentTreeSHA, which we will need when we actually commit the files.

Next we create a commit on the tree:

const {
  data: { sha: newCommitSHA },
} = await client.git.createCommit({
  owner: "<USER OR ORGANIZATION NAME>",
  repo: "<REPOSITORY NAME>",
  tree: currentTreeSHA,
  message: `Updated programmatically with Octokit`,
  parents: [commitSHA],
});

Push the commit

Then we push the commit by updating the branch reference:

await client.git.updateRef({
  owner: "<USER OR ORGANIZATION NAME>",
  repo: "<REPOSITORY NAME>",
  sha: newCommitSHA,
  ref: "heads/main", // Whatever branch you want to push to
});

That is all you need to do to push files to a GitHub repository. We have found this functionality to be really useful when we need to push files that are automatically generated or often change.
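For reference, here is the whole flow from this post assembled into one script, using the same placeholder values as above and assuming you are pushing to a main branch:

import { Octokit } from '@octokit/rest';

const client = new Octokit({ auth: '<AUTH TOKEN>' });
const owner = '<USER OR ORGANIZATION NAME>';
const repo = '<REPOSITORY NAME>';

async function commitFiles() {
  // 1. Find the latest commit so the tree knows where to attach.
  const commits = await client.repos.listCommits({ owner, repo });
  const commitSHA = commits.data[0].sha;

  // 2. Create a tree containing the new file contents.
  const tree = await client.git.createTree({
    owner,
    repo,
    base_tree: commitSHA,
    tree: [
      { path: 'test.md', mode: '100644', type: 'blob', content: 'Hello World' },
      { path: 'time.txt', mode: '100644', type: 'blob', content: new Date().toString() },
    ],
  });

  // 3. Commit the tree, pointing at the previous commit as the parent.
  const commit = await client.git.createCommit({
    owner,
    repo,
    tree: tree.data.sha,
    message: 'Updated programmatically with Octokit',
    parents: [commitSHA],
  });

  // 4. Move the branch ref to the new commit (this is the "push").
  await client.git.updateRef({ owner, repo, sha: commit.data.sha, ref: 'heads/main' });
}

commitFiles();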

If you find yourself needing to manage SDKs in multiple languages from an API, check out liblab. Our tools make generating SDKs dead simple, with the ability to connect to the CI/CD tools you are probably already using.


Client libraries are shared code to avoid repetitive tasks. Engineers love client libraries. In iOS, libraries are also referred to as frameworks or SDKs. In this post, I’ll stick to using the term library.

I’ll show you a common pattern for building a library, one that you’ll see used everywhere.

If you think this is a daunting task, worry not! Keep reading and you’ll see it’s easier than you think.

The Library

By the end of this post:

  • You’ll have a library that retrieves the latest price of a list of cryptocurrencies
  • You’ll have a sample app that uses the library to show these prices
  • You’ll be ready to create your own library

The Sample Server API

In this example, I will use the CoinMarketCap API. With this API you can retrieve current and historic information about cryptocurrencies.

By checking their documentation, you will notice their API is extensive. For this post, only the v2/cryptocurrency/quotes/latest endpoint will be used.

Check out their Quick Start Guide to create your own API key. You will need one to make requests to their service.

Creating the Swift Package

Nowadays it is very simple to create a new library in Swift. In iOS, the modern way to publish and deliver libraries is via the Swift Package Manager (SPM). From here on out, I’ll walk you step-by-step through creating the library.

My requirements are:

  • Xcode 13.4.1
  • The library will support iOS 10+

Create a new Xcode project. Make sure you select Swift Package as the project type.

Follow the instructions and you’ll decide where to save the project on your machine. I named the library MyCryptoCoinSwiftLibrary for this tutorial. After the project is created, notice the file named Package.swift.

// swift-tools-version: 5.6

import PackageDescription

let package = Package(
    name: "MyCryptoCoinSwiftLibrary",
    products: [
        // Products define the executables and libraries
        // a package produces, and make them visible to
        // other packages.
        .library(
            name: "MyCryptoCoinSwiftLibrary",
            targets: ["MyCryptoCoinSwiftLibrary"]),
    ],
    dependencies: [
        // Dependencies declare other packages that this package
        // depends on.
        // .package(url: /* package url */, from: "1.0.0"),
    ],
    targets: [
        // Targets are the basic building blocks of a package.
        // A target can define a module or a test suite.
        // Targets can depend on other targets in this package,
        // and on products in packages this package depends on.
        .target(
            name: "MyCryptoCoinSwiftLibrary",
            dependencies: []),
        .testTarget(
            name: "MyCryptoCoinSwiftLibraryTests",
            dependencies: ["MyCryptoCoinSwiftLibrary"]),
    ]
)

If you want to know more about each field in this file, take a gander at the Package Description site by Apple. Know that in a package, you can add resources like images and videos, and not just code. The library in this post is fairly simple because it only interacts with a server API.

I am removing the .testTarget from the list of targets, as I won’t focus on writing unit tests for our library in this tutorial. That said, unit tests are very important for a real library: you want to make sure you don’t introduce critical bugs.

Because the library needs to connect to the CoinMarketCap API, a networking layer is needed. For that, our library will make use of the most famous networking library in iOS, Alamofire. Their documentation includes instructions on how to add it as a dependency in our Package.swift; check out Alamofire’s instructions here.

This sample library will only support iOS, so make sure to list it in the platforms field.

After those modifications, Package.swift now looks like this:

// swift-tools-version: 5.6

import PackageDescription

let package = Package(
    name: "MyCryptoCoinSwiftLibrary",
    platforms: [
        .iOS(.v10)
    ],
    products: [
        // Products define the executables and libraries a
        // package produces, and make them visible to
        // other packages.
        .library(
            name: "MyCryptoCoinSwiftLibrary",
            targets: ["MyCryptoCoinSwiftLibrary"]),
    ],
    dependencies: [
        // Dependencies declare other packages that this package
        // depends on.
        .package(url: "https://github.com/Alamofire/Alamofire.git",
                 .upToNextMajor(from: "5.6.1"))
    ],
    targets: [
        // Targets are the basic building blocks of a package.
        // A target can define a module or a test suite.
        // Targets can depend on other targets in this package,
        // and on products in packages this package depends on.
        .target(
            name: "MyCryptoCoinSwiftLibrary",
            dependencies: ["Alamofire"], // Important so Alamofire
            // can be used in your library.
            path: "Sources"), // Where the library's source files live.
    ]
)

After you add dependencies to your library, you should see them listed in your Xcode project. From here, the setup is done and all that is left is the code!

The code will be simple. I won’t overcomplicate the library with a complex design. There will be a struct called CoinRetriever that will expose only one function:

public func latestPrice(
    coins: [String],
    completionHandler: @escaping (Result<Coins, AFError>) -> Void)

This expects to receive a list of coins to retrieve their latest price. For this tutorial keep it simple and stick with just supporting the cryptocurrency symbol, for example, “BTC”, “ETH”.

It will return the response with the block completionHandler. Notice that Coins is listed as the success value, and this is something that I’ll explain now.

Models

This function will call the CoinMarketCap API endpoint /v2/cryptocurrency/quotes/latest. As listed in their docs, a sample response is:

{
  "data": {
    "BTC": {
      "id": 1,
      "name": "Bitcoin",
      "symbol": "BTC",
      "slug": "bitcoin",
      "is_active": 1,
      "is_fiat": 0,
      "circulating_supply": 17199862,
      "total_supply": 17199862,
      "max_supply": 21000000,
      "date_added": "2013-04-28T00:00:00.000Z",
      "num_market_pairs": 331,
      "cmc_rank": 1,
      "last_updated": "2018-08-09T21:56:28.000Z",
      "tags": [
        "mineable"
      ],
      "platform": null,
      "self_reported_circulating_supply": null,
      "self_reported_market_cap": null,
      "quote": {
        "USD": {
          "price": 6602.60701122,
          "volume_24h": 4314444687.5194,
          "volume_change_24h": -0.152774,
          "percent_change_1h": 0.988615,
          "percent_change_24h": 4.37185,
          "percent_change_7d": -12.1352,
          "percent_change_30d": -12.1352,
          "market_cap": 852164659250.2758,
          "market_cap_dominance": 51,
          "fully_diluted_market_cap": 952835089431.14,
          "last_updated": "2018-08-09T21:56:28.000Z"
        }
      }
    }
  },
  "status": {
    "timestamp": "2022-06-02T14:44:22.210Z",
    "error_code": 0,
    "error_message": "",
    "elapsed": 10,
    "credit_count": 1
  }
}

Instead of parsing the JSON, a better approach is to create models that represent the response of the API.

From bottom to top, three models can be identified for this response.

Quote

public struct Quote: Decodable {
    public let price: Double
}

Coin

public struct Coin: Decodable, Identifiable {
    public let id: Int
    public let name: String
    public let symbol: String
    public let quote: [String: Quote]
}

The only reason Coin conforms to Identifiable is that this model is used as the input of a List view in the sample app, and List requires its elements to be Identifiable; this keeps the classes used in this tutorial simple.

In a real scenario, I would suggest creating an extra model or class that is used in the UI, to keep this one decoupled from the client logic.

Coins

public struct Coins: Decodable {
    public let data: [String: [Coin]]
}

Notice that all the members of these models match a value from the API response. As an example, let’s say that Coin also cares about max_supply from the response; in that case Coin would look like this:

public struct Coin: Decodable, Identifiable {
    public let id: Int
    public let name: String
    public let symbol: String
    public let maxSupply: Int
    public let quote: [String: Quote]

    enum CodingKeys: String, CodingKey {
        case id
        case name
        case symbol
        case maxSupply = "max_supply"
        case quote
    }
}

This is to illustrate what is needed to support names that do not map between the response and the model members. No need to add it for this tutorial.

Requests

Now create the CoinRetriever struct and add the following code:

import Alamofire // 1. Import the dependency listed in Package.swift

public struct CoinRetriever {
    private var apiKey: String // 2. apiKey is needed for the CoinMarketCap API

    public init(apiKey: String) {
        self.apiKey = apiKey
    }

    public func latestPrice(coins: [String],
                            completionHandler: @escaping (Result<Coins, AFError>) -> Void) {
        // 3. Headers needed by CoinMarketCap
        let headers: HTTPHeaders = [
            "X-CMC_PRO_API_KEY": apiKey,
            "Accept": "application/json",
            "Accept-Encoding": "deflate, gzip"
        ]
        // 4. The parameter that this method will support, e.g. "BTC", "ETH"
        let parameters: Parameters = ["symbol": coins.joined(separator: ",")]
        // 5. Alamofire request to the endpoint; it decodes the
        // response into the Coins model previously created
        let endpoint = "https://sandbox-api.coinmarketcap.com" +
            "/v2/cryptocurrency/quotes/latest"
        AF.request(endpoint,
                   parameters: parameters,
                   headers: headers)
            .responseDecodable(of: Coins.self) { response in
                guard let coins = response.value else {
                    completionHandler(.failure(response.error!))
                    return
                }
                completionHandler(.success(coins))
            }
    }
}

Here are the steps for making a request with Alamofire and decoding the response into the models:

  1. Import the dependency listed in Package.swift. This will be used to make the actual request to the API.
  2. CoinMarketCap needs an API key to authenticate requests; if you still don’t have one, read here how to get one.
  3. Apart from putting the API key in the headers, they also suggest adding extra headers.
  4. Check the endpoint docs to see what other parameters are supported. For this tutorial only “symbol” will be used.
  5. Alamofire does the request. It decodes the response as the Coins model created before.

Publish the library

This step is the simplest of all. SPM works with GitHub out of the box. All you need to do is upload the library project to a GitHub repository. In this case, I created the repo github/MyCryptoCoinSwiftLibrary. From here, it’s all about advertising your repo, which is out of the scope of this post.

Extra Step - A Sample App

As an extra step, you might be wondering how someone would use your new library. To demonstrate this, create a sample app.

In Xcode create a new iOS app. I named mine CryptoCoinSampleApp.

As a suggestion, add the sample app to the same directory where you created the library. This way you will include a sample app alongside your library as part of your repository.

Now the important step is how to add your library that now lives on GitHub. As a matter of fact, this process is the same for any other library published through SPM.

In your Xcode app project, click on File → Add Packages (remember the Xcode version I used: this UI/menu can change in a different Xcode version).

I searched for the GitHub repo where the library was published, that’s how you will find yours too. In a production library, instead of using a specific branch to get the code from, the normal use case is to specify a release version, something like 1.0.0. By the time you are using this approach you should be more comfortable building libraries.

After this, your code will have access to your new library, and you can import it like this:

import MyCryptoCoinSwiftLibrary

Since the app itself is out of the scope of this tutorial, I will only point you to it in the GitHub repository.

Play with it! Create a PR with a suggestion, create an issue in the repo, let me know what you think! Feel free to use the library as well in your own sample app!

The liblab way

As you work on this tutorial, you might notice that a lot of what was coded is very repetitive and tedious. At liblab, our goal is to automate this process. We have a product that will let you create what I explained in this tutorial in an automatic way for a lot of languages and different platforms.

You can sign up at liblab.com/join to start generating your own SDKs. Join our Discord to chat with us and other liblab users, and follow us on Twitter and connect with us on LinkedIn to keep up to date with the latest news.


What is Technical Debt?

Some people see technical debt as a list of missing features, but it should be a list of the problems you know you have to solve, as opposed to the ones you want to solve. The list can vary, but it should include known bugs and errors in your code, how readable your code is, and any bloat you might carry. Slowness in build and execution time needs consideration as well.

The point of figuring out your technical debt ahead of time is that you need to be honest with yourself. The more problems we have to solve, the less we get to work on problems we want to solve. So we treat technical debt the same as financial debt, by paying it down while avoiding more.

The rule of thumb should always be to pay off your debts first, then start accumulating assets because you don’t want to try to pay for assets when you have debts holding you down.

Why do we get into Technical Debt?

What is debt? We tend to think of financial debt more than other kinds of debt, like social or technical. Debt is a tool used to leverage your ability to do more in less time; it's not a bad thing unless it's abused. It can have destructive consequences if ignored, as it compounds on itself over time. Too much leverage can be a bad thing.

All debt must be paid off by someone in time; when you accumulate technical debt, it may be the developer who comes after you who must pay it off. This perverse incentive is why debt can be dangerous in any setting. As the old Levantine proverb says, “The debtor is slave to the lender”, which means you will lose power over your code.

The mantra "Move fast and break things" is popular in our industry because it helps us move forward. In technical terms, it's more akin to "put it on the credit card", because what you're really saying is "fix it later". In the meantime, every break has a cost that must be paid, so the more you break, the more you pay.

“Procrastination is the soul's attempt to rebel against entrapment” - Nassim Taleb

Wanting to have something done now rather than waiting until later is a big reason people get into debt. They ignore what they know they need to do in favor of what they want, or they procrastinate to feel good. They don't want to feel trapped by their responsibilities.

Humans also have a tendency to overestimate their own abilities; we are not good at estimating risk. The reason we underestimate risks is that otherwise indecision would paralyze us. So putting off problems we know we need to solve makes us assume that we can solve them later. Worse, it makes us assume we can solve them in a much more limited timeframe than if we had tackled them sooner.

So as developers we are always adding technical debt to make things easier on ourselves now, not realizing we are punishing either our future selves or those who come into the project after us.

What is a Technical Asset?

A technical asset does not mean future-proofing or building a feature early, but rather an early problem solved. It's an investment in a known problem that, from experience, you know you, or those who come after you, will have. The only way to know it's not future-proofing is through experience: if you've had the problem before, then it's an asset.

In the same way, experience will tell you whether a financial asset will make you money. It's a bet. Not every investment in an asset will pay off in the way you want it to, or for your own personal benefit. The point is not to stop investing in assets, but to teach people how to tell good investments from bad ones.

The accumulation of financial assets gives you freedom in life. In much the same way technical assets give you the freedom to code without the fear of regression. You get to choose what problems you want to solve when you invest in technical assets upfront.

“A society grows great when old men plant trees in whose shade they shall never sit” - Greek Proverb

You're building technical assets for others more than for yourself and this is why it's hard to do. In software development, we like to get to the root of the answer quickly and so we make life easier for ourselves now. We tend to not think about our future selves or others who will take over the project after us.

Yet many of the greatest stories in technology development come from the opposite.

  • Amazon AWS was an asset that came at the very end of an expensive mono codebase debt payoff.
  • Facebook made investments in React and Open Compute to fix scalability.
  • Google Instant reinvented caching as an investment into what seemed impossible.

They had to first pay off their technical debt before they could build assets. These were not core products or features, but they prioritized a known problem ahead of time.

Investing in an SDK is a Technical Asset

At liblab we are in the asset building business.

If you build APIs as an organization, our job is to help you automate your SDK build process and provide you with technical assets that you can release to your customers.

You do the part of documenting your APIs correctly, using an OpenAPI spec, Postman collections, or a GraphQL schema, and we do the hard work of building long-term technical assets that you don’t have to manage and can instead promote to your customers.

How to find other Technical Assets to Invest in?

Much like picking any investment asset, you have to plan and discuss it like investors. Present a prospectus about what the investment would look like and its tradeoffs. In the software world, we would write a Request for Comments (RFC) to break down the idea in a way that can be peer reviewed.

You want to instill a process that creates forcing functions to limit the scope of work. Don't let your investment go off the rails: enforce tests and coverage from the start. The earlier you enforce linting for code readability, the more proactive you will be. Building out automated CI/CD and managed deploys ensures less pain for your company long term.

Avoid survivor bias in determining a good investment by remembering the Lindy Effect:

“If a book has been in print for forty years, I can expect it to be in print for another forty years. But, and that is the main difference, if it survives another decade, then it will be expected to be in print another fifty years. This, simply, as a rule, tells you why things that have been around for a long time are not ‘aging’ like persons, but ‘aging’ in reverse. Every year that passes without extinction doubles the additional life expectancy. This is an indicator of some robustness. The robustness of an item is proportional to its life!”

Just like any bad real-world investment might force companies and organizations out of the market entirely, a bad technical investment might do the same for tech companies. So although it may be counterintuitive in the software industry to choose things that have been around for a while, the best approach might be to invest in implementations that have proven themselves sturdy over time.

In conclusion, pick the processes that have proven effective over time and across companies. Adopting those successful processes as technical assets lets you accrue them for yourself, but never at the expense of paying off your technical debts first. Paying your debts off early gives you the freedom to work on the code you want to work on instead of the code you know you have to. This discipline helps you avoid the "put it on the credit card" mindset earlier in your process.


Every API has security.¹ This post will look at what security options OpenAPI gives us, what companies big and small are actually doing, and what you should be doing when you build your own API.

¹ Maybe not Hacker News, but they have a special deal with Firebase to give people unrestricted access to the firehose. I put together a quick Hacker News OpenAPI spec for some testing.

Security in the OpenAPI Spec

If you’re like me and you really dig formatting languages, you’re in for a good time because the OpenAPI Specification v3.1.0 is a treat. But if you don’t plan to add this to your nightstand for bed-time reading, this section will walk you through how security is defined.

There are five types of security, and three places to define it.

Five Types of Security in OpenAPI

In Swagger 2.0 you had three types. OpenAPI dropped one and added three.

The five types in OpenAPI are: apiKey, http, oauth2, mutualTLS, and openIdConnect. The first three are the most commonly seen, and we’ll talk about them a lot in this post. The last two are new, and are rarely seen in the wild.

(The original three types in Swagger 2.0 were basic, apiKey, and oauth2. You can still define a basic authentication with OpenAPI, using type: http; scheme: basic, but that is not recommended for reasons we’ll talk about later.)

Where Security is defined in OpenAPI

There are three places to define security in an API spec.

Under #/components/securitySchemes

This is where you define your security options.

(Simpler APIs will normally only have one option.)

The key name is whatever you want; you’ll use it to reference this object from elsewhere in the spec. The only required parameter is type, which should be one of the five types we talked about above. The other parameters change depending on the type.

Here’s an example of an API that has two security schemes.

components:
  securitySchemes:
    myBearerAuth:
      type: http
      scheme: bearer
      bearerFormat: jwt
    myApiKey:
      type: apiKey
      name: key_name
      in: header

Interesting side note: The securitySchemes in components is a bit of an odd duck. Most of the time, components is for reusable objects. If you find yourself writing an object again and again, you can put it in components, and just reference that component. But with security, the securitySchemes in components is the only place to define security; it has to be defined here.

Under #/security

This is the default security requirement to use for the whole API. This should match up to a named security scheme that was (or will be) defined in #/components/securitySchemes.

If this top level security object is missing, or if it’s an empty object, then this API will have no security by default. You can see this in a lot of smaller APIs that have a few open endpoints for everyone, but then define operation-specific security for the rest.

Here’s an example:

security:
  - myApiKey: []

You might be wondering why there’s an empty array here. Good wondering! This array is for scopes, which are strings that let you define fine-grained security permissions. Often they’ll look like “read:foo” and “write:foo”. OAuth2, and its newer cousin openIdConnect, always take scopes; other security types may not.

This is a key/value pair, and the value is always an array. If you don’t have any scopes, then you have to have an empty array.

Under a specific operation

Just like above, we use one of the defined security schemes, but this time only for a specific operation on a specific path.

If there is no security defined for an operation, then the API will use the top-level default under #/security.

In this example, we use myBearerAuth for anyone who wants to get /users.

paths:
  /users:
    get:
      security:
        - myBearerAuth:
            - 'read:users'
            - 'public'

Here we’re using scopes. The user of the SDK has to have either a “read:users” or a “public” scope.

And that’s the last we’ll say about scopes.

Security in Practice

In practice, when hitting an endpoint, the security is sent out with the request using one of three methods. One of these is terrible, see if you can spot it.

  • Bearer — A token in the header, which is how OAuth is used. It looks like Authorization: Bearer tokenValue
  • API Key — A key/value pair that could be in the header, query, or even in a cookie, but the best practice is to put it in the header, and often looks like Authorization: apiKey apiKeyValue
  • Basic — A username/password pair, base64-encoded (not encrypted) in the header. It looks like Authorization: Basic base64(username:password).

Did you spot the terrible method? That’s right, it’s Basic. Don’t send around a username and password with every API call, it’s insecure. We’ll talk about recommendations later.
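From an SDK's point of view, the difference is just which Authorization header gets attached to each request. A quick TypeScript illustration (the values and URL are placeholders):

const token = process.env.TOKEN ?? '';
const apiKey = process.env.API_KEY ?? '';

// Bearer: a token in the header, as used by OAuth2.
await fetch('https://api.example.com/users', {
  headers: { Authorization: `Bearer ${token}` },
});

// API key: best practice is to put it in the header too.
await fetch('https://api.example.com/users', {
  headers: { Authorization: `apiKey ${apiKey}` },
});

// Basic: a base64-encoded username/password on every request.
// Don't do this.
await fetch('https://api.example.com/users', {
  headers: { Authorization: `Basic ${btoa('username:password')}` },
});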

What API security are big companies using?

The big companies rarely have official and easy-to-find OpenAPI specs, so this information was mostly gleaned from their documentation portals.

Facebook uses OAuth2

Dropbox uses OAuth2

Twitter uses OAuth and OAuth2

GitHub uses OAuth2, and sometimes Basic Authentication, which is worrisome.

Microsoft Azure uses OAuth2, and recommends building your own apps similarly.

The results are unsurprisingly unanimous: big companies use OAuth2.

What API security are other companies using?

Getting info on a lot of other companies is more difficult. Either we have to dive deeply into thousands of API documentation portals, or we can run some simple statistics on a collection of specs. This post will go the latter route.

Methodology

To do this, I’m turning to my favorite JSON Query tool: jq! Although in reality many specs are written in YAML, so we’ll have to turn to its YAML wrapper: yq!

Here’s the basic command.

yq '.components.securitySchemes' openapi.yaml

A good source of OpenAPI specs is the OpenAPI Directory. It’s a well-maintained resource filled with thousands of specs. We’ll run some statistics on those.

To get some simple stats, I ran a series of UNIX commands on the specs in the OpenAPI Directory. They’re rough, but they work well enough to build our instincts. You can follow along at home.

# total number of files
find . -name "*.yaml" -print | wc -l

3771

# let's save some output, to look at it twice
find . -name "*.yaml" -print0 | \
  xargs -0 -n24 -P4 yq '.components.securitySchemes' \
  > out 2>/dev/null

# number of APIs without a securityScheme
grep ^null out | wc -l

2452

# ranked list
cat out | grep '"type":' | cut -d\" -f 4 | sort | uniq -c | sort -rn

980 apiKey
853 oauth2
177 http
  2 openIdConnect

Now there’s one problem I know of right off the bat: these statistics are overwhelmed by a small number of companies with a lot of specs. Azure alone has 653 specs, each with multiple versions, for a total of 1,832 files in their subsection of this repo. It’s a classic power law distribution.

We’ll run this all again, but this time excluding the five largest offenders, so we only look at companies with a smaller number of specs. There is still more cleaning up we could do, but this gets us most of the way to what we want.

(The actual Unix commands for this part are left as an exercise to the reader, mine were pretty messy.)

Here are the new results.

# number of APIs without a securityScheme
499

# ranked list
245 apiKey
123 http
 88 oauth2
  2 openIdConnect

Conclusions

Now this is interesting, but not too surprising.

OAuth2 is a strong choice for authentication, but it’s also a difficult and complex framework; the IETF working group lost a key editor over that fact, and even people who spend a lot of time in the OAuth2 world will sometimes recommend against it. Conversely, apiKey can be handled with simple home-brewed solutions.

The http type is a more complex question. Everything depends on what comes next, in the scheme parameter. This parameter is required for http types, and has to be a registered scheme.

I did some quick manual checking of APIs with an http type, and I found a fairly even mix of two schemes:

  • “scheme: bearer” - This is passing a token in the headers. Often this is defined as a JWT token, which is a very modern and secure method.
  • “scheme: basic” - This is passing the username and password in the headers, and is basically insecure. OpenAPI specs that use this were sometimes using it as an option alongside other security schemes, but it was still unfortunate to see it at all.

Conclusions part 2, when security is missing

I did learn one big thing: at the beginning of this post I said, “Every API has security,” and now I know that many APIs do not, in fact, have security. Had I included “no security” in the ranked lists above, it would have been 52% of the total.

To understand this problem more, I did some spot checks of specs without security.

  • Many were small projects, and they possibly were never meant to be used. I’m suspicious anytime I see a spec that was committed to GitHub with version 1.0.0 and then never touched again.
  • Some specs were written by enthusiasts for an existing but un-specced API, and it’s possible they just didn’t write a complete spec.
  • I saw one API that did have token-based security, but was only mentioned in the examples.
  • Amazon has their AWS OpenAPI vendor extensions, and specs that use them may not show up in my methodology.
  • There were a few specs that were autogenerated from other places, like protobufs, and it’s likely that something was lost in translation.

So what should you do?

Use OAuth2 if you can, apiKey if you want.

That’s it. That’s the recommendation.

For more details, keep reading. But you can stop here and be happy with what you’ve learned.

apiKey

In spite of having the word API in it, apiKey is technically no longer the best practice for APIs. Phil Karlton said that naming things is one of the two hardest things in computer science, and this small irony is more proof of that.

But despite what the best practice people say, whoever they are, API keys are still a commonly-used method of security, and perfectly fine for many small APIs.

If you want to keep using your API key security, make sure it’s submitted in the headers and not in the query string; that way it’s less likely to leak. (If you’ve been using the GitHub API, you may have noticed they deprecated query tokens last year, after a series of deliberate “brown-outs” to get people off their butts.)

But even here there are exceptions. Cisco uses apiKey in a cookie that is obtained from a Single-Sign-On assertion.

Basic

Don’t use basic authentication.

Don’t submit the user’s username and password in the clear.

If you still want to keep using your Basic authentication: don’t.

Even the RFC that defined Basic authentication says don’t.

The Basic authentication scheme is not a secure method of user authentication… — RFC 2617: HTTP Authentication: Basic and Digest Access Authentication

OAuth2

OAuth2 is where you probably want to be, but it has some gotchas, and its best practices have evolved over time.

Now what?

If you’ve come this far, then you may have excitedly built something like this in your OpenAPI spec.

{
  "type": "oauth2",
  "flows": {
    "authorizationCode": {
      "authorizationUrl": "https://example.com/api/oauth/dialog",
      "tokenUrl": "https://example.com/api/oauth/token",
      "scopes": {
        "write:pets": "modify pets in your account",
        "read:pets": "read your pets"
      }
    }
  }
}

Your customers are happy to finally get some solid security, but then they ask you how to use it in their code. You spent all this time carefully implementing good security in the server, and now you have to implement the other end of it in an SDK?
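For a taste of what that involves, here is a rough TypeScript sketch of just the token-exchange step of the authorizationCode flow above, using the example URLs from the spec. A real client also has to handle the authorization redirect, state validation, token storage, and refresh:

// Exchange an authorization code for an access token. The client ID,
// secret, and redirect URI are whatever you registered with the server.
async function exchangeCode(code: string): Promise<string> {
  const response = await fetch('https://example.com/api/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code,
      client_id: 'YOUR_CLIENT_ID',
      client_secret: 'YOUR_CLIENT_SECRET',
      redirect_uri: 'https://yourapp.example.com/callback',
    }),
  });
  if (!response.ok) throw new Error(`Token exchange failed: ${response.status}`);
  const { access_token } = await response.json();
  return access_token;
}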

That is a lot of hard, difficult work.

If only there was a better solution.

Use liblab!

Give us your API, and we’ll generate SDKs with best practices built in. We won’t even generate an SDK that would put the user at risk; instead, we’ll fail with friendly error messages and recommendations on how to fix it. We’ll make sure your security choices never become someone else’s postmortem.

We even produce documentation for your API, so your users will know exactly how your security policies work.
