
So you have started a new project. The field is green, the air is fresh, there are no weeds, how exciting!

But where do you start? What is the first thing you do?

Surely you should write some code?

Well, no.

You have probably had shiny new projects before, but they mostly turned sour at some point: they became hard to understand, hard to collaborate on, and very slow to change.

If you are wondering why, we will explore the common causes, and more importantly, solutions that we can adopt to prevent such things from happening.

These causes range from competing naming conventions and contradictory rules to improperly formatted code and missing tests, all of which add up to an environment where making changes is frightening.

Which rules should you follow?

How should you format your code?

How can you be sure that your changes have not broken anything?

It would be good if we knew all the answers to these questions.

It would be even better if you didn't have to concern yourself with these issues and could solely focus on coding.

It would be best if no one on the project had to worry about them and could just focus on coding.

The key resource in software development is time, and the faster we are able to make changes and move forward, the greater advantage we will have in the market.

There is a saying that preparation is half the battle, and in this blog post we will explore various techniques we can apply to our project to help us maintain velocity and quality throughout its lifetime.

Chapter 1: Setup

So you open the project in your favorite IDE and it is basically empty.

You don't know where to start or what to do first.

Ideally, before writing any code, we should invest time into setup. By investing we mean paying a certain price now in order to reap greater benefits in the long run. The main intention behind this time investment is to make it easy and fast to make changes in the existing codebase, no matter how large it grows. It also means that new members who join the project can understand the system and its conventions with as little effort as possible and be confident in the changes they are making.

But what should we invest in?

If we sum up what makes a software project good, it's very simple:

The code should be easy to understand, test and change.

That may sound too simple, but ultimately, it’s the truth.

As programmers, we want to write super powerful and reusable code, but in practice that results in files and functions that span hundreds, if not thousands, of lines, take tens of parameters, and behave in a myriad of ways depending on how we call them. This makes them very hard to understand and test, which means that it takes a lot of time to change them. And if there is one constant in software, it is that it changes. Setting ourselves up correctly will save a lot of time in the long run and make changes less frightening.

Code repository

Even if you are going to be working alone on a project, it is a very good idea to use a VCS (version control system).

So naturally the first thing, even before opening your IDE, should be to set up the code repository of your choice. This means you should pick your main branch and protect it. No one, including yourself, should be allowed to push directly to it; instead, all changes should be made through pull requests.

Yes, if you are working alone, you should be reviewing your own PRs. This additional filter will catch many ill-committed lines before they reach production code.

Linting

[Image: a red sign reading "please stay on the path" in white writing]

Linters are tools that analyze source code for potential logical errors, bugs, and generally bad practices. They can help us enforce rules which helps improve code quality, readability, maintainability, and consistency across a codebase.

There are many linters to choose from:

  1. ESLint
  2. JSLint
  3. JSHint

How they are set up varies widely between providers, but most of them support a common set of rules.

The most popular and recommended one is ESLint. Below are some important rules that every project should have:

  • complexity The number one time consumer in understanding and changing code is complexity. Luckily, we can enforce simplicity using this rule, which analyses and limits the number of independent paths through a function:

    function a(x) {
      if (true) {
        return x; // 1st path
      } else if (false) {
        return x + 1; // 2nd path
      } else {
        return 4; // 3rd path
      }
    } // complexity = 3
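
    To enable it, add an entry like the following to your ESLint rules (the limit of 5 is just an illustrative choice; pick a threshold that suits your team):

    "complexity": ["error", 5]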
  • no explicit any The pseudo type any means that our variable or object can have literally any field or value. This is the equivalent of simply removing typing. There might be times when we are tempted to reach for this unholy type, but more often than not we can avoid it by using other typing mechanisms, such as generics. The following example shows how to resist the temptation and solve a type “problem” with careful thinking:

    // avoid: any discards all type information
    function doSomethingWithFoo(foo: any): any {
      // do something with foo
      return foo;
    }

    // prefer: a generic preserves the type of foo
    function doSomethingWithFoo<T>(foo: T): T {
      // do something with foo
      return foo;
    }

    However, if you don’t have access to a certain type, you can use the built-in helpers such as:

    ReturnType<someLibrary['someFunction']> and Parameters<someLibrary['someFunction']>

    Alternatively, you can use unknown instead of any, which is safer because it requires you to narrow or cast the value to a type before accessing any of its fields.
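
    For example (a minimal sketch), the compiler forces narrowing before use:

    const value: unknown = JSON.parse('"hello"');
    // value.length; // error: 'value' is of type 'unknown'
    if (typeof value === 'string') {
      console.log(value.length); // ok after narrowing
    }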

  • explicit return types Enforces explicit return types on functions. Although it is possible for the compiler to infer the return type of a function, it is recommended to be explicit about it so that we know how a function is intended to be used instead of guessing.
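
    A minimal sketch, assuming the @typescript-eslint/explicit-function-return-type rule is enabled:

    // without the `: number` annotation below, the rule reports an error
    function add(a: number, b: number): number {
      return a + b;
    }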

  • no-undef Disallow usage of undeclared variables.

  • no-unused-vars This rule does not allow us to have unused variables, functions, or function parameters.

    We can do this by adding this rule:

    "@typescript-eslint/no-unused-vars": ["error"]

    Unused code is an unnecessary burden: we need to maintain it, and we fear deleting it once it lands in our main branch, so it's best to prevent it from even being merged. However, there will be cases, such as method overloading or implementing an interface, where we need to match the signature of a method, including its parameters, even though we might not use all of them.

    Imagine we have an interface:

    interface CanSchedule {
      schedule(startTime: Date, endTime: Date);
    }

    Now we want to implement this interface, however, we won’t be using both of the arguments:

    class Scheduler implements CanSchedule {
      // throws an error since endTime is unused!
      schedule(startTime: Date, endTime: Date) {
        console.log(`scheduling something for ${startTime.toDateString()}`);
      }
    }

    In that case we can add an exception so that the rule does not apply to members with a prefix such as _. This can be done in ESLint with the following configuration:

    "@typescript-eslint/no-unused-vars": [
    "error",
    {
    "argsIgnorePattern": "^_",
    "varsIgnorePattern": "^_",
    "caughtErrorsIgnorePattern": "^_"
    }
    ],

    Now we can write something like:

    class Scheduler implements CanSchedule {
      // No longer throws an error
      schedule(startTime: Date, _endTime: Date) {
        console.log(`scheduling something for ${startTime.toDateString()}`);
      }
    }
  • typedef Requires us to declare types for most fields and variables, as in the sketch below.

    No cutting corners!
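
    A minimal configuration sketch (these options are illustrative; enable whichever declarations you want to require types on):

    "@typescript-eslint/typedef": [
      "error",
      {
        "memberVariableDeclaration": true,
        "variableDeclaration": true
      }
    ]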

💡 However, if you find it too time consuming to set up lint rules manually, you can probably find an already configured linter with the rules that best suit your taste.

Here is a useful list of popular linter configurations for TypeScript:

github.com/dustinspecker/awesome-eslint

Prettier

[Image: a red lipstick]

There is a saying in my native language: A hundred people, a hundred preferences.

Now imagine a project where every developer introduced their preference in coding style. Yeah, it’s terrifying for me too.

Now imagine that you can avoid all of that. The good thing is that we don't have to imagine, we can just use Prettier. Prettier enforces a consistent code style, which is more important than any one developer's preference.

It is very simple to set up and use:

# install it
npm install --save-dev --save-exact prettier
# add an empty configuration file
echo {}> .prettierrc.json
# format your code
npx prettier --write .

Configure it however you prefer; no one can tell you which style is good or bad. However, two important JavaScript caveats come to mind:

  • Please use semicolons.

    Why?

    JavaScript engines automatically insert semicolons during parsing (ASI, automatic semicolon insertion). If you leave semicolons out, the engine will guess where they should be inserted, which may result in undesired behavior:

    const a = NaN
    const b = 'a'
    const c = 'Batman'
    (a + b).repeat(3) + ' ' + c

    Now you might think this code will result in 'NaNaNaNaNaNa Batman', but it will actually fail with Uncaught TypeError: "Batman" is not a function.

    Why is that?

    The engine will interpret this as

    const a = NaN;
    const b = 'a';
    const c = 'Batman'(a + b).repeat(3) + ' ' + c;

    due to the lack of explicitness in regards to semicolons.

    Luckily, the semi rule is enabled by default, so please don’t change it;

  • Use trailing commas,

    This is often overlooked and might seem like it makes no difference, but there is one:

    Without a trailing comma, adding a new property means adding a comma AND the property, which is not only more work but also shows up as 2 changed lines in your VCS (git):

    const person = {
      age: 30,
    - height: 180
    + height: 180,
    + pulse: 60,
    }

    instead of

    const person = {
      age: 30,
      height: 180,
    + pulse: 60,
    }
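
Putting these two caveats together, a minimal .prettierrc.json honoring both could look like this (semi is already Prettier's default, and the default for trailingComma varies by Prettier version, so being explicit documents the decision):

{
  "semi": true,
  "trailingComma": "all"
}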

Ok, now what?

Ok, so you have set up types, lint, and formatting.

But you have to fix lint and prettier errors all the time and your productivity is taking a nose dive.

Oh, but wait, there are commands you can run that fix all linting errors and prettify your code. That's really nice, but it would be even nicer if you didn't have to run these commands manually…

Automated ways of linting and prettifying

Now if you’re smart (or lazy like me) you can just configure some tool to do this tedious job for you.

Some of the options are:

  1. Configure your IDE to run this on save
  2. Using onchange
  3. Introduce a pre-commit hook

Ideally, you want to run lint fixes and formatting on every save, but if your IDE or machine does not support this, you can run them automatically before every git commit.
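
One possible pre-commit setup is sketched below, assuming husky and lint-staged (the exact commands vary between versions of these tools):

# install husky and lint-staged as dev dependencies
npm install --save-dev husky lint-staged
# set up husky and make the pre-commit hook run lint-staged
npx husky init
echo "npx lint-staged" > .husky/pre-commit

And in package.json, tell lint-staged which fixes to run on staged files:

"lint-staged": {
  "*.ts": ["eslint --fix", "prettier --write"]
}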


Ok, now you are ready and probably very eager to go write some code, so please do so, but come back for chapter 2, because there are important things to do after writing some code.

Or if you prefer TDD, jump straight to chapter 2.

Chapter 2: Tests

So you have written and committed some nicely linted and formatted code (or you prefer writing tests first).

That is amazing, but is it enough?

Simply put, no.

It might look like a waste of time, and a tedious task, but tests are important, mmkay?

[Image: Mr Mackey from South Park with the caption "Tests are important, mmkay"]

So why is having tests important?

  1. Ensures code quality and correctness: Automated tests serve as a safety net, allowing you to validate the functionality and behavior of your code. By running tests regularly, you can catch bugs, errors, and regressions early in the development process, preferably locally, even before you push them upstream!
  2. Facilitates code maintenance and refactoring: As projects evolve, code often needs to be modified or refactored. Automated tests provide confidence that the existing functionality remains intact even after changes are made. They act as a safeguard, helping you identify any unintended consequences or introduced issues during the refactoring process.
  3. Encourages collaboration and serves as documentation: When multiple developers work on a project, automated tests act as a common language and specification for the expected behavior of the code. They promote collaboration by providing a shared understanding of the system's requirements and functionality. Also, since tests can be named whatever we want, we can use this to our advantage to describe what is expected from some component that might not be that obvious.
  4. Reduces time and effort in the long run: While writing tests requires upfront investment, it ultimately saves time and effort in the long run. Automated tests catch bugs early, reducing the time spent on manual debugging.
  5. Enables continuous integration: Since tests serve as a sort of contract description, we can make changes to functionality while validating whether we have broken that contract towards other components. Tests enable continuous integration by providing a reliable filter for potential bugs and unwanted changes in behavior. Developers can quickly detect any issues introduced by new code changes, allowing for faster iteration and deployment cycles.

Writing code without tests is like walking a tightrope without a safety net. Sure, you may get across, but falling might be catastrophic.

Let’s say that we have some complex and unreadable function but we have a test for it:

function getDisplayName(user: { firstName: string; lastName: string }): string {
  let displayName = '';

  for (let i = 0; i < user.firstName.length; i++) {
    displayName = displayName.concat(user.firstName.charAt(i));
  }

  displayName = displayName.concat(' ');

  for (let i = 0; i < user.lastName.length; i++) {
    displayName = displayName.concat(user.lastName.charAt(i));
  }

  return displayName;
}
describe('getDisplayName', () => {
  // because we can name these tests, we can describe what the code should be doing
  it('should return user\'s full name', () => {
    const user = { firstName: 'John', lastName: 'Doe' };
    const actual = getDisplayName(user);

    expect(actual).toEqual('John Doe');
  });
});

Now we are able to refactor the function, confident that the test will catch any mistake we make:

function getDisplayName(user: { firstName: string; lastName: string }): string {
  // the test will fail since we accidentally added a comma
  return `${user.firstName}, ${user.lastName}`;
}

Now you see how tests not only assert the desired behavior, but they can and should be used as documentation.

There is a choice of test types you could introduce to help you safely get across.

If you are unsure which might be the right ones for you, please check out this blog post by my colleague Sean Ferguson.

Ideally you should be using more than one type of test. It is up to you to weigh and decide which fit your needs best, but once you do, it is very important not to cut corners and to invest in keeping coverage high.

This is the most important investment in our codebase. It will pay the highest dividends and it will keep us safe from failing if we do this part well.

The simplest and fastest tests to write are unit tests, but they are often not enough, because they don't assert that the users of our system are experiencing it as expected.

That is the job of integration or e2e tests. Although they take longer to set up and individual tests take longer to write, they are often the better investment, since we can rely on them to cover our system from the perspective of anyone using it.

You can even use AI tools like ChatGPT to generate unit tests based on production code (although they will not be perfect every time).

Ok, so you are convinced and you add a test suite which you will maintain. You also added a command to run tests, and you do so before committing your code. That is very nice but what if someone in the team doesn’t do the same? What if they commit and merge code without running tests? 😱

If only there were a way to automate this and make it visible to the whole team.

Chapter 3: Tying it all together

Now all these enhancements make sense, and you feel pretty happy about them, but without a forcing function that makes running them mandatory, they don't bring much value, since people will bypass them.

Luckily, most code hosting platforms, like GitHub, provide automated workflows that make it easy to enforce these checks and prevent code from being merged unless it passes them.

Now we can write a workflow that will check all of this for us!

A GitHub workflow that installs dependencies and runs linting, unit, and e2e tests would look something like this:

name: Linting and Testing

on: [pull_request]

jobs:
  linting-and-testing:
    runs-on: ubuntu-latest
    steps:
      - name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@0.11.0 # or the latest version
        with:
          access_token: ${{ github.token }}

      - name: Checkout
        uses: actions/checkout@v3 # or whatever is the highest version at the time

      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: '18.12' # or whatever the latest LTS release is
          cache: 'npm'

      - name: Install dependencies
        run: npm i

      - name: Run ESLint check
        run: npm run lint:ci # npx eslint "{src,tests}/**/*.ts"

      - name: Run unit tests
        run: npm run test

      - name: Run e2e tests
        run: npm run test:e2e

Can we code now?

Yes we can!

But as we said, preparation is half the battle.

The other, longer and harder part is yet to come, and it is paramount to stay disciplined, consistent, and to keep things simple. The easiest way to do so is to practice pragmatism, take pride in and bring maturity to your approach to work, and keep a mindset that helps not only yourself but also the people you work with grow.

This is best explained by a dear and pragmatic colleague of mine, Stevan Kosijer, in a blog post series starting with Pragmatic Engineer’s Philosophy.

Conclusion

Although we might instinctively think that writing code is the most productive way to initially invest our time in a software development project, without proper setup that is almost never the case. Confidence in your changes through automated tests, enforcement of the rules you find useful, and consistent formatting will greatly improve both the velocity and the quality of your work.

If your project is integrating with an API, which it most likely is, my honest advice would be to use an SDK. And if you want a high quality SDK that can be built and updated on demand, along with documentation, tests, and easy integration with your CI/CD pipeline, please check out our product and perhaps even schedule a demo at liblab.com.


TypeScript, a statically typed superset of JavaScript, has become a go-to language for many developers, particularly when building SDKs that interact with web APIs. TypeScript's powerful type system aids in writing cleaner, more reliable code, ultimately making your SDK more maintainable.

In this blog post, we'll provide a focused exploration of how TypeScript's type system can be harnessed to better manage API routes within your SDK. This post is going to stay focused and concise: we'll look solely at routing tips and intentionally eschew other aspects of SDK authoring, such as architecture, data structures, and handling relations. Our SDK will be simple: it just lists a user or users. These tips will help your route definitions be less error prone and easier for other engineers to read.

At the end, we’ll cover the limitations of the tips in this post, what’s missing, and one way in which you can avoid dealing with having to author these types altogether.

Let’s get started.

Alias your types

Type aliasing is important! It can sometimes be overlooked in TypeScript, but aliases are an extremely powerful documentation and code maintenance tool. Type aliases provide additional context as to why something is a string or a number. As an added bonus, if you alias your types and make a different decision (such as shifting from a numeric ID to a GUID) down the road, you can change the underlying type in one place. The compiler will then call out most of the areas in which your code needs to change.

Here are a couple of examples that we’ll build upon later on:

type NoArgs = undefined;
type UserId = number;
type UserName = string;

Note that UserId is a number here. That may not always be the case. If it changes, finding UserId is an easier task than trying to track down which references to number are relevant for your logic change.

Aliasing undefined with NoArgs might seem silly at first, but keep in mind that it’s conveying some extra meaning. It indicates that we specifically do not want arguments when we use it. It’s a way of documenting your code without a comment. Ditto for UserName. It’s unlikely to change types in the future, but using a type alias means that we know what it means, and that’s helpful.

Note: there’s a subtlety here that’s worth calling out. NoArgs is a type here, while undefined is a value. NoArgs is not the value undefined, but is a type whose only acceptable value is undefined. It’s a subtle difference, but it means you can’t do something like const args = NoArgs. Instead, you would have to do something along these lines: const args: NoArgs = undefined.

Statically define your data structures wherever possible

This is similar to the above, and is generally accepted practice. It essentially boils down to avoiding the any keyword and avoiding turning everything into a plain object ({[key: string]: any}). In this simple SDK, this means only the following:

type User = {
  id: UserId;
  name: UserName;
  // other fields could go here
}

When we need a User or an array of Users, our SDK engineers will now have all the context they need at design time. Types such as UserName can be more complex as well (you can use template literal types, for example, as sketched below), allowing you to further constrain your types and make it more difficult to introduce bugs. The intricacies of typing data structures are a much larger subject, so we'll stick to simple types here.
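
A minimal sketch of a template literal type (the user_ prefix is a hypothetical convention, not something our example SDK requires):

type PrefixedUserId = `user_${number}`;

const good: PrefixedUserId = 'user_42'; // compiles
// const bad: PrefixedUserId = '42';    // error: not assignable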

Make your routes and arguments more resistant to typos

You’ve likely done it before: you meant to call the users endpoint and accidentally typed uesrs. You don’t find out until runtime that the route is wrong, and now you’re tracking it down. Or maybe you can’t remember if you’re supposed to be getting name or userName from the response body and you’re either consulting the spec, curling, or opening Postman to get some real data. Keeping your routes defined in one place means you only need to consult the API spec once (or perhaps not at all if you follow the tip at the end of the post) in order to know what your types are. Your SDK maintainers should only need to go to one place to understand the routes and their arguments:

type Routes = {
  'users': NoArgs;
  'users/:userId:': UserId;
};

Note that the pattern :argument: was used here, but you can use whatever is best for the libraries/helper methods that you already have. In addition, this API currently only has GET endpoints with no query parameters, so we're keeping the types on the simple side. Feel free to declare some intermediate types that clearly separate out route, query, and body parameters, as sketched below. Then the function(s) that actually call API endpoints will know what to do with said parameters when it comes time to call an endpoint.
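
One possible shape for such intermediate types (a sketch; the field names are illustrative):

type EndpointParams = {
  route?: Record<string, string | number>; // e.g. the :userId: segment
  query?: Record<string, string | number>; // e.g. ?limit=10
  body?: unknown; // for POST/PUT endpoints
};

This is a good segue into the next point: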

Use generics to make code reuse easy

It’s hard to overstate how powerful generics can be when it comes to maintaining type safety while still allowing code reuse. It’s easy to slap an any on a return value and just cast your data in your calling function, but that’s quite risky, as it prevents TypeScript from verifying that the function call is safe. It also makes code harder to understand, as there’s missing context. Let’s take a look at a couple of types that can help out for our example.

type RouteArgs<T extends keyof Routes> = {
  route: T;
  params: Routes[T];
};

const callEndpoint = <Route extends keyof Routes, ExpectedReturn>(args: RouteArgs<Route>): ExpectedReturn => {
  // your client code goes here (axios, fetch, etc.); include any error handling

  // don't do this; use a type guard to verify that the data is correct!
  return [{ id: 1, name: 'user1' }, { id: 2, name: 'user2' }] as unknown as ExpectedReturn;
};

Note the T extends keyof Routes in our generic parameter for the type RouteArgs. This builds upon the Routes type that we used earlier, making it impossible to use any string that is not already defined as a route when you’re writing a function that includes a parameter of this type. This also enables you to use Routes[T], meaning that you don’t have to know the specific type at design-time. You get type safety for all of your calling functions.

Note that we also do not assign a type alias to the type of callEndpoint, since it is intended to be used only once in this code base. If you are defining multiple callEndpoint functions (for example, if you want to separate out logic for each HTTP verb), aliasing the type to make sure no new errors are introduced would be highly recommended.

Note that type guards are mentioned in the comment. This code lives at the edge of type safety: you can never be 100% sure that the data coming back from your API endpoint has the structure you expect. That's where type guards come in, so make sure you run them against these return types. Type guards are outside the scope of this post, and guarding for concrete types in a generic function can be complex and/or tedious. Depending on your needs, you may choose to use an unsafe type cast similar to the example and put the responsibility of calling the type guard on the calling function (a minimal guard is sketched below). We won't cover strategies for ensuring these types are correct in this post, but this is an area you should study carefully.
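
A minimal sketch of a type guard for our User type (checking only the two fields we defined):

function isUser(value: unknown): value is User {
  const candidate = value as User;
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof candidate.id === 'number' &&
    typeof candidate.name === 'string'
  );
}

// usage: narrow unknown API data before trusting it
// if (Array.isArray(data) && data.every(isUser)) { /* data is User[] */ }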

Tying it all together

What do we get for our work? Let’s take a look at the code that an SDK maintainer might write to use the types that we’ve defined:

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'users', params: undefined})

  return users
}

Hopefully it’s clear that we’ve gotten some value out of this. This call is entirely type safe (shown below), and is quite concise and easy to read.

Note that we also don’t have to specify the generic types here. TypeScript is inferring the types for us. If we make a mistake, the code won’t compile! Here are a couple of examples of bad calls and their corresponding errors:

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'user', params: undefined})
  // Type '"user"' is not assignable to type 'keyof Routes'. Did you mean '"users"'?
  return users
}

Look at that helpful error message! Not only does it tell us we’re wrong, it suggests what might be right.

What if we try to pass an argument to this route? If you remember, we defined it to explicitly accept no arguments.

const getUsers = () => {
  const users: User[] = callEndpoint({route: 'users', params: 'someUserName'})
  // Type 'string' is not assignable to type 'undefined'.(2322)
  // {file and line number go here}: The expected type comes from property 'params' which is declared here on type 'RouteArgs<"users">'
  return users
}

This is also helpful, though there are some limitations. Unfortunately, TypeScript will not pass through the alias that we defined (NoArgs). However, it does tell us exactly where the source of the error is, allowing an engineer to trace exactly why a string won't work. The engineer will then see the NoArgs type and have a clear understanding of what went wrong.

What’s missing/limitations?

The examples here could still be improved upon. Note that ExpectedReturn is part of callEndpoint. This means that an SDK maintainer would need to have some knowledge of which type to pick (if not the specific structure). Why not include this information in our Routes type? That may make a good exercise for the reader.

As previously mentioned, type aliases do not get passed through to compiler errors. There are some workarounds, however.

Depending on how you’re handling various verbs, your type guards/generic functions can get quite complex. This won’t have an impact on those maintaining your SDK, but there can be an up-front cost to defining these types. It’s up to you to decide whether to pay that cost.

What was that about avoiding all this?

Hopefully with the tips in this article, you feel more confident about making maintainable SDKs. However, wouldn’t it be nice if you just didn’t have to develop an SDK at all? After all, you have an API spec; and that should be enough to generate the code, right? Fortunately, the answer is yes, and liblab offers a solution to do just that. If you don’t want to think about challenges like error handling and maintainability for your SDK, liblab’s SDK generation tools may be able to help you.


In the realm of software engineering, there exists a unique breed of engineers known as pragmatists. These individuals possess a distinct approach to their craft, blending technical expertise with a grounded mindset that sets them apart from their peers. But what truly sets a pragmatic engineer apart? What is it about their approach that makes them so effective in navigating the complexities of software development?

To answer this we will dive into the thought-provoking ideas inspired by the books The Clean Coder: A Code of Conduct for Professional Programmers by Robert C. Martin and The Pragmatic Programmer: Your Journey to Mastery, 20th Anniversary Edition (2nd Edition) by David Thomas and Andrew Hunt.

At the core of pragmatism lies a set of behaviours that define the pragmatic engineer. One such behaviour is being an early/fast adopter. Pragmatists eagerly embrace new technologies, swiftly grasping their intricacies and staying ahead of the curve.

Curiosity is another characteristic that fuels the pragmatic mindset. Pragmatists are inquisitive souls, always ready to question and seek understanding when faced with unfamiliar concepts or challenges.

Critical thinking is a cornerstone of pragmatism. Rather than accepting solutions at face value, pragmatic engineers apply their analytical minds to evaluate and challenge proposed solutions, always on the lookout for more elegant or efficient alternatives.

Pragmatists also possess a keen sense of realism. They strive to understand the underlying nature of the problems they encounter, grounding their solutions in practicality and addressing the true essence of the challenges they face.

Embracing a broad spectrum of knowledge is another defining trait of the pragmatic engineer. Rather than limiting themselves to a single technology or domain, they actively seek to expand their skill set, becoming a polymath who can adapt to a wide range of contexts.

By understanding these foundational behaviours, we gain some insight into the pragmatic philosophy. It is a mindset that values adaptability, practicality, and a continuous thirst for knowledge. Now let’s explore the intricacies of the pragmatic engineer’s thinking, unraveling the secrets that make them such effective and versatile engineers in the ever-evolving world of software development.

In this first part of the blog series we will delve into the first three key aspects (Cat ate my source code, Just a broken window and Make a stone soup), providing sufficient time for contemplation and assimilation. Subsequently, in the next part, we will examine the remaining three aspects (Good-enough software, Knowledge portfolio and Talk the talk) and conclude with some final thoughts.

Enjoy the enlightening journey ahead.

Cat ate my source code

[Image: a cat licking its lips]

Can you imagine a cat eating the source code? How does that statement sound to you? Do you find it silly, funny or maybe even stupid? Well, that is the same way your colleagues will feel if you try to make excuses for the mistakes you made. Never make excuses; forget that the word excuse even exists. As a pragmatic engineer you are in complete control of your career, and you take full responsibility for the work you do. There's a saying that if you don't make mistakes, it means that you're either playing it too safe or you're not playing at all. Mistakes are inevitable. They are an integral part of doing the work and growing as an engineer. Your team will forgive you for the mistake you made, but they won't forgive or forget if you don't take responsibility for it. It's how you respond to a mistake that makes all the difference. Accepting that you made a mistake and taking responsibility publicly is not the most pleasant experience, but you should be truthful, direct and never shy away from it.

Here are some ways that pragmatic engineers deal with mistakes and responsibility:

  • Choosing Options over Excuses:
    • Pragmatic engineers prioritise finding options rather than making excuses.
    • They avoid labelling something as impossible and instead approach challenges with a curious and can-do mindset.
  • Asking Questions and Preparing Options:
    • When faced with a difficult situation, pragmatic engineers ask questions to gain a deeper understanding.
    • They prepare options and alternatives to tackle the problem effectively.
  • Refactoring or Removing Project Parts:
    • Pragmatic engineers consider the possibility of refactoring or removing certain project components if necessary.
    • They investigate, create a plan, assess risks, estimate impacts, and present their findings to the team.
  • Assessing Needs and Implementing Improvements:
    • Pragmatic engineers identify the needs for proof of concept, enhanced testing, automation, or cleaning the code base.
    • They proactively address these requirements to prevent errors and optimize processes.
  • Seeking Resources and Collaboration:
    • Pragmatic engineers are not hesitant to request additional resources or seek help when needed.
    • They aren't afraid to admit when they lack the skills to solve a problem and will ask for help instead.
    • They understand the importance of putting in effort to explore options, ask for support, and leverage available resources.

💡 Just for thought: How do you react when someone gives you a lame excuse? Do you start losing trust?

Just a broken window

[Image: windows in an old building, one of which has a broken pane]

Why do some projects fall apart and inevitably increase their technical debt, even when they have talented people and enough time? What is the problem? Have you heard about the broken windows theory? This social criminology research states that if a building has just one broken window left un-repaired for a long period of time, it creates an environment that encourages further window breaking. This is as true in nice neighbourhoods as in rundown ones. Multiple broken windows create a sense of negligence that then results in more civil disorder, such as trash left out or graffiti on the building walls. In a relatively short period the apartment owners get a sense of hopelessness, which spreads negativity and creates a contagious depression. As a consequence, that part of the neighbourhood becomes associated with being unsafe and, ultimately, the building gets abandoned.

Here is a reflection of the theory on the technology world:

  • The broken windows theory applies to software engineering projects as well:
    • Un-repaired broken windows such as bad design, wrong decisions, or unclean code contribute to technical debt and system entropy.
    • Technical debt decreases efficiency and throughput of the team and it increases dissatisfaction and frustration, potentially leading to more technical debt and an unsuccessful project outcome.
  • Psychology and culture play a vital role:
    • In projects filled with technical debt, it's easy to pass blame and follow poor examples (easy to say “Well it’s not my fault, I work with what I have, I will just follow the example”).
    • In well-designed projects, a natural inclination arises to maintain cleanliness and tidiness.
  • Pragmatic engineers resist causing further damage:
    • They don't allow deadlines, high-priority issues, or crises to push them into increasing the project's collateral damage.
    • They adhere to The Boy Scout Rule from the book Clean Code by Robert C. Martin.
    • They diligently track broken windows and create concrete plans to fix them.
    • They don't think “I'll improve this later”, and trust that it will be done. They create tasks for things to be tracked and tackled in the future.
    • They demonstrate care, effort, and a resolve to address known issues.

💡 Just for thought: What did you do the last time you saw a broken window in your project? Did you take action to repair it, or did you look the other way, thinking it's not my fault that it's broken?

Make a stone soup

[Image: a bowl of yellow soup with a sprig of rosemary and a stone in it]

In a well-known European folk story, hungry strangers visit a village seeking food. The villagers refuse to assist, so the strangers, having only an empty cooking pot, light a fire. They place the pot on the fire, filling it with water and a large stone. As the strangers gather around the boiling pot, the curious villagers slowly approach and start asking questions. The strangers tell the villagers that they are preparing a “stone soup”, and even though it's a complete meal on its own, they encourage them to bring some vegetables to make it even better. The first villager, who anticipates enjoying a share of the soup, brings a few carrots from his home. The second and third villagers quickly follow, bringing their share of onions and potatoes. Convinced by the actions of the first three, more and more villagers bring tomatoes, sweetcorn, beef, salt and pepper. Finally, after making sure that the soup is ready, the strangers remove the stone and serve a meal to everyone present.

What is the moral of this story?

Would you say that the villagers got tricked into sharing their food?

Why is it important in the context of software engineering?

The tale teaches a moral lesson about the power of collaboration and generosity. However, in the context of software engineering, the story could be used as an analogy to emphasise the importance of teamwork and resourcefulness in problem-solving. Just as the strangers in the story creatively used their limited resources to provide a solution, software engineers often need to think outside the box and work together to overcome challenges and deliver successful projects.

Here’s how pragmatic engineers make the stone soup:

  • Challenging Others and Communicating Vision:
    • Pragmatic engineers face the challenge of rallying others to contribute and align with their vision.
    • They draw inspiration from the tale of stone soup, emphasising the importance of proactivity and resourcefulness.
  • Acting as Catalysts for Change:
    • Pragmatic engineers take the initiative to create an initial proof of concept and lift ideas off the ground.
    • They actively work towards gaining buy-in from their colleagues.
  • Inspiring Others and Transforming Vision into Reality:
    • By presenting a glimpse of a promising future, pragmatic engineers inspire others to join their cause.
    • They collectively work to transform the shared vision into reality.
  • Creating a Win-Win Situation:
    • Through their efforts, pragmatic engineers create a win-win situation that benefits everyone involved.

💡 Just for thought: Have you ever made a stone soup for your team? How did they react?

End of part one

Are you filled with anticipation to discover the remaining aspects? If so, consider yourself fortunate as the second part of the blog series is just around the corner. Take this time to reflect on the insightful aspects discussed in this blog. Challenge yourself to apply at least one of the ideas you found most intriguing. Remember to hold yourself accountable and engage in self-reflection after a few weeks to assess your progress. Even a slight enhancement can lead to significant growth. Allow these concepts to simmer in your mind, ready to inspire your actions.

“Start by doing what is necessary, then do what is possible, and suddenly you are doing the impossible.”

Saint Francis of Assisi

In the upcoming blog post we will dig deeper into the topics of Good-enough software, Knowledge portfolio and Talk the talk.

Until next time, stay sharp!