Technical Debt and Ageing Systems

When developers take shortcuts or reduce quality to save time, they create technical debt. Doing nothing and letting the system age makes the technical debt grow too. In fact, as soon as we’ve implemented the code, it starts to age.

Code that is tightly coupled to a specific technology ages faster than code that stands purely on its own, e.g. business logic. But sometimes it is the language itself that is ageing. Or rather, it’s impossible to separate the code itself from the infrastructure.

Developers’ view

Developers often use the term technical debt to describe what happens when they are forced to implement a design that decreases code maintainability. Let’s say it takes three weeks to develop something properly, but because of time pressure they have to go for a worse design that takes two weeks instead. Later on, when the feature is in production and may have seen further changes, it would take them three weeks to refactor.

Development                           Good design   Shortcut
Develop feature                       3 weeks       2 weeks
Refactor to reduce technical debt     –             3 weeks
Total                                 3 weeks       5 weeks
Technical debt example

By taking a lot of shortcuts, the technical debt grows and there will be a future cost to make the code maintainable. Changes will take longer and there is a higher risk of defects.

Technical debt is the coding you must do tomorrow because you took a shortcut in order to deliver the software today.

Technical Debt: The Ultimate Guide

Ageing systems

I would like to widen the definition a bit. When building new software there is a lot of focus on features. This is necessary to get more customers and gain market share. There are also technical and architectural decisions about how to fulfil non-functional requirements like scalability and maintainability. All systems have been through this process, even the old ones. Decisions were made based on what was available and popular at the time.

As the world changes, your system will age. It might not keep up with new requirements when it comes to performance, scalability and usability. I consider this technical debt as well, since by doing nothing you push a growing cost into the future.

What problems does technical debt cause in ageing systems?

What will happen when you let your system age:

  • Users are limited when using the system. Your system can’t be used on a wide range of devices or operating systems. The system can’t take advantage of new technology and frameworks for, e.g., browsers, which might leave users with only a very basic experience.
  • The system can’t scale. The technology does not support scaling or performance requirements. Maybe you use technology that can’t be moved to the cloud.
  • Unable to improve business model. The technology does not support the business model you want to implement. For example, you install the system on each customer’s site, but you actually want to offer a SaaS (Software as a Service).
  • Code changes take time and are costly. The code base has grown large without refactoring, and the time to delivery for each change is huge.
  • Lack of development resources. It’s hard to find developers that want to work with old technology.

How should companies handle the technical debt?

What happens if the company doesn’t handle the technical debt? Probably not much, until it’s too late, which might take many years. As long as the competitors have a similar technical debt, the company can be successful. But if you can’t run your system in the cloud, and suddenly a competitor launches a new SaaS solution, you might find yourself in a very difficult situation.

A company must set aside money to handle the technical debt. The debt is a loan from the future that will need to be paid back. An alternative is of course to continuously evolve the business, find other ways to earn money, and then scrap the old solutions. The nature of technical debt is that it can never be fully paid off, just kept at a reasonable level. To find that level, management has to weigh different factors like competitors, risk and finance.

There is no simple answer to the question of how to handle technical debt. But a general piece of advice would be to admit that it exists, and to make informed decisions. It is much less risky to spread the investments over many years, doing stepwise improvements, than to do nothing for twenty years and then perform a complete rewrite of the whole system.

Conclusion

The technical debt is very unlikely to ever be fully paid off. It can be higher or lower, and have greater or lesser implications for the company. Deciding how much technical debt to accept is a strategic decision that has to weigh in knowledge about technology, market and finance.

Knowledge about this is usually spread over different roles in a company, and people tend to have little understanding beyond their own areas of expertise. Awareness and discussion will significantly increase the chances for a company to successfully handle its technical debt.

https://martinfowler.com/bliki/TechnicalDebt.html

Code structure – The House Metaphor

Reading code vs. writing code

In a legacy system, what is it that you spend by far the most time on?

I would say: understanding code and finding out where to make a change. Most of us have been in the situation where we spend hours or days looking for exactly one single line to change in a million-line code base. And of course trying to figure out what effects that little change would have. Expected and unexpected.

In an old Cobol system I came across, even the most experienced developers often spent two weeks analysing what consequences a change would have. Of course that was before the time of unit tests and modularisation of code into microservices, but the disproportion between the time it took to write code and the time it took to change it was very obvious.

What are we looking for in a code base?

Let’s say I’ve got a task at my new job. I’ve got requirements that need a change in the code. I have to find the exact lines to change. Being in this situation, I’m often confused. I can’t immediately see the overall structure and don’t know where to look for the code I’m about to change. I try to figure out certain things, for example:

Where is the code..

  • ..providing the UI or API?
  • ..that calls the infrastructure like the database or file system?
  • ..handling integrations with other systems or services?
  • ..handling the business logic?
  • ..defining configuration?

I’m also looking for the code that handles the business domain in which I want to make my changes. In a monolith there might not be any separation by domain. In a microservice architecture there might be better separation. For example, managing hotels is handled in the Hotel Management Service. Price changes are handled in the Pricing Service. And so on. Preferably I would like some form of graphical overview that maps the business to the system.

The house metaphor

When talking about this, my teammates and I often use the house metaphor. Many of us have owned or lived in a house, we know how they usually are structured, and we get help from craftsmen to do maintenance and changes on our houses. There are a lot of similarities with a system.

A house is divided into separate rooms. Each room has a purpose; cooking, sleeping, storage, bathing and so on. Each room is optimised for the users’ (people who live there) needs.

Craftsmen come to the house with a mission to improve something or fix stuff that has broken. Depending on what they’re going to fix, they go to different rooms. They immediately know which room they are looking for. If they are about to install a new shower, they will go to the bathroom; if they are fixing the dishwasher, they go to the kitchen. Nobody would look for a shower in the bedroom. There might of course occasionally be exceptions; I heard about someone having a TV built into the bathroom floor.

If there isn’t a map, they will take a quick glance in each room to find out what type of room it is, or ask the owner. Within a few minutes they grasp the location of everything in the house.


Before looking for the appropriate room, the craftsmen must of course find the correct house, in a city of millions of houses. First they might ask which district the house belongs to, and after that which address.

Here the craftsmen have an advantage over developers. We rarely get the “address” of the service we’re going to change along with the requirements. Sometimes we can start out in the UI and follow the code to find out in which area the changes should be made, and sometimes we simply have to ask someone in our team. Preferably, we always go through the changes with someone experienced with the system before we start, to find out where to make the change.

What can we learn from managing houses?

If we could significantly decrease the time a developer spends trying to find out what the system does, there would be a huge benefit for companies. Time could be spent on new features, and more features. So how do we do that?

  1. Define what it is you’re building. Is it a storehouse, a bungalow or a skyscraper? Sometimes you start with a bungalow and end up with a skyscraper; in that case it’s pretty obvious that the architecture is not clear and easy to follow. When this happens, refactoring is crucial to build a maintainable system.
  2. Define responsibilities. Define the responsibilities of services (e.g. hotel management), layers (e.g. domain layer) and classes (e.g. controller, repository). Then developers will immediately understand where to look for the code they want to change. Is the change about the contract between the client and the API? Then look in the controller classes in the API layer. Is it about the communication with the database? Then look in the repository classes. The shower should be in the bathroom!
  3. Use commonly used patterns and guidelines. Most houses have a similar set of rooms, with similar purposes. This makes it easy for us to quickly understand the usage of each room. If your system follows a well-known way of structuring the code, it will be easier for new users to start working with it.

And one more thing. Draw a picture of the system and make sure that the system’s file structure reflects that picture. If there is a box named Hotel Management in the picture, make a folder named Hotel Management. If there is a layer named domain layer, make sure the developers can find a folder with that name in the code.
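
To make this concrete, here is a minimal sketch (all names are made up) of a C# solution where the namespaces and folders mirror such a picture:

    // A hypothetical solution layout where namespaces and folders mirror the
    // picture of the system: one per box, one per layer.
    namespace HotelManagement.Api.Controllers
    {
        // The contract between the client and the API lives here.
        public class HotelController { }
    }

    namespace HotelManagement.Domain
    {
        // The business logic for the hotel management domain lives here.
        public class Hotel { }
    }

    namespace HotelManagement.Infrastructure.Repositories
    {
        // The communication with the database lives here - the shower is in the bathroom.
        public class HotelRepository { }
    }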

I think most developers find the above ideas pretty obvious. Despite that, many systems end up with an unclear structure. There are many reasons for that: tight time schedules, bad communication in the team, and problems combining the helicopter perspective with the frog perspective.

Conclusion

Systems often live up to 20 years, and during that time a number of developers pass through, all of them with different experiences from previous systems they have worked with. If they can be up and running quickly, if they know what they are doing when making a change, and if they can easily find a specific place in the code, then time to delivery will be shortened and quality improved. The changes will be more aligned with the existing code and cause less confusion for other developers. The risk of causing bugs with the change will be lower.

Companies have everything to win, and nothing to lose, by encouraging the development team to keep a good code structure!

Same theme, another metaphor: https://medium.com/nick-tune-tech-strategy-blog/the-chocolate-sauce-design-heuristic-3cdc143dd2ea

What a year…

One year ago, we knew little about what 2020 would have in store for us. But on New Year’s Eve, my husband and I made a decision to do something that we had been planning for a long time. We would relocate from Gothenburg, where we had lived for more than 25 years, to London!

One year earlier I had started my own company and was working as a contractor. My plan was to find contracts via my social network, but when moving to London I felt I needed something more to stand out from the competition. So, I started this website. Two months later something very unexpected happened to the whole world: the Coronavirus made its entrance.

Coronavirus arrived like a stone thrown into a still pond.

Tom Whipple, The Times

We decided to fulfil our plans anyway. New schools, new house, new friends, restart career.

Leaving Sweden

As I’m very sentimental, I tried not to make a big deal of leaving our home country. We thought that we would soon meet our friends and family again, and with social media and online meetings distances have shortened. Before leaving Sweden, we spent a few weeks with my parents in their house by a lake deep in the Swedish forest, as always finding one reason or another to arrange a party for our relatives. After that we headed off to London.

Settling in London

It’s impossible to describe in a few words all that we have experienced here in London. We’ve felt so welcomed by the people and the city is so beautiful! We combine small village life in Barnes with being part of one of the biggest cities in Europe. Although we felt caught in some catch-22 situations when it comes to administration, everything has been rather smooth and we’re well settled now.

Every weekend we go into London (as far as allowed by Corona restrictions), walking in different areas and trying to familiarise ourselves with the big city. For me, who had spent all my previous visits shopping on Oxford Street, it’s been an eye-opener!

What would I do for a living?

For the first time in my life I had quit my job without having a clue what to do next. It was a bit scary, of course. At the same time it was a relief to get a few months for reflection, and needless to say, there were a lot of practical things to take care of concerning the relocation. But I also got time to write on my blog, to study new technology and to try out a new programming language.

I soon learned that the freelancing rules were about to change in the UK, which, put shortly, would make freelancing much less attractive. So, I started to look for a permanent job instead. I reached out to recruiters and uploaded my CV to some job sites, I went through tests and online interviews, recorded myself on video and talked to a lot of people. I must admit it was a huge challenge. But, sooner than I thought, I could sign for a remote job at a company named Cegedim, located in Manchester.

My new workplace

Even though I’ve been working in international contexts on and off all my working life, I’ve never been employed in another country before. It has turned out to be a very pleasant experience. I’ve got the chance to work with an architecture and technology that is very interesting and challenging, while at the same time being part of a fantastic company. Everybody is so nice and helpful, we learn together, and psychological safety is important. A lot of the things I’ve been striving for during the last years seem to already be in place here.

I’m also part of a great team with teammates located in both the UK and Egypt. We are of different ages and different nationalities and, as all of us work remotely, most of us have never met in person. But we work very well together to solve the challenges of going live with a new product, writing great code, and delivering new functionality to the business.

Time for writing

After getting time to focus on my writing, and also starting to participate in a few meetups, I was asked to publish some of my blog posts as articles on the London Tech Lead website. This made me delighted, but also more confident that what I’m writing is something that people actually want to read.

I also got a very good response to one of my blog posts on my own site, which was viewed almost 900 times in a few weeks. In total my website has been visited by people in 66 different countries.

It is fascinating that the developer community is so global. This is something I wasn’t really aware of before. Even though I get almost all my knowledge from developer forums and blogs, I have never thought about where the other people are located.

Website experiences

When I started my website my daughter Lisa and I made a presentation that we called, as a bit of a joke, “The programming lady”. We had a lot of crazy ideas about fashion and coding which (luckily) never took off. But in the brainstorming phase you must try out different thoughts, right?

Instead, I started very simply by writing most of the texts and using only one picture of myself, placed on the front page. But I soon realised that it’s hard to make a great website without professional pictures. So, a friend and I headed to Stockholm for styling, makeup and a whole day of taking pictures. Apart from the beautiful pictures, the photoshoot itself is one of the best things I’ve done. So fun and flattering!

But as Bill Gates said: “Content is King”. In the long run it’s the blog that keeps up the interest in a website. It’s been a lot of hard work. In the beginning I had to ask my relatives to visit the website to get any clicks at all. But after a year I have had over 4,000 views by 2,000 visitors, which of course is not much considering there are around 26 million developers in the world. But still, something I never ever dared to dream about!

Take care!

We all suffer from the implications of the Coronavirus in one way or another, and that is shadowing everything we do. I think we will see some positive things coming out of it in the long run, and I hear more and more people discussing what those might be. But for the moment, with Christmas in many ways cancelled here in London, life is a bit dull.

It’s ironic that for me personally this exciting year happened at the same time as the whole world suffered from this virus. I’m thankful for the good things and I also want to thank all of you for your great support during the year! Take care of yourself and your families and let’s hope that next year will be the one when we beat the pandemic.

Unit testing with F#

I recently heard about a project where the code was written in C#, but where they implemented the unit tests in F#. They did so because they wanted to learn and evaluate F# in a protected environment. I thought that sounded like a very good idea and wanted to try it.

I picked the unit tests used in my builder pattern blog post. My idea was that the builder pattern could be replaced by abilities built into the language itself. Let’s see if that is a correct assumption.

From Fluent to Composition

This C# code is taken from my builder pattern blog post. It dynamically builds an order to use in the unit test, and is written with a fluent syntax:

            var order = new OrderBuilder()
                .WithSingle()
                .WithStatusInProgress()
                .Build()
                .First();

But how do we write fluent code in F#? The answer is, we don’t:

Now the concept of “fluent interfaces” and “method chaining” is really only relevant for object-oriented design. In a functional language like F#, the nearest equivalent would be the use of the pipeline operator to chain a set of functions together.

F# for fun and profit

F# is a composable language, using functions as building blocks. Composition can be done in different ways, with function pipelining (|>) or function composition (>>). By defining functions that have the same argument and return types, they can be chained together in a very clean and tidy way. Let’s see how that works.

The C# model

The C# model that we will test from our F# tests is just a simple order class with two methods:

    public class Order
    {
        public OrderStatus Status { get; set; }

        public void Cancel ()
        {
            Status = OrderStatus.Canceled;
        }

        public void StartProcessing()
        {
            Status = OrderStatus.InProgress;
        }
    }

It is stateful and the methods return void. This type of class can be written in many ways, but I think this is rather common, so I’ll use it even though it’s not perfect for the F# code, as we’ll see later.

Create the builder

The builder class in C# is stateful (see the blog post referred to above), because it has a class variable to which each function adds data. Using class variables means that the methods have side effects, and side effects are something we’ve learned to avoid if possible.

The functions in the F# builder are very simple and have no side effects, as they operate on their input data (or create new data). The first one creates the order itself and returns it. The next one takes an order as an argument, changes it and returns it:

let withSingle () = 
    Order()

let withStatusInProgress (order : Order) = 
    order.StartProcessing()
    order

In F# we don’t write “return”; the value of the last line is automatically returned. Order() is the same as “new Order()” in C#. Note that if the method “order.StartProcessing” had returned the order, we could have omitted the last line of the second function. But I wanted to keep the C# code as we usually write it, like it would be in a real case.

So, this is the builder for now. We’ll keep it simple and find out later how to handle lists (which we do in the C# builder).

Function pipelining

This is what a test can look like in F#; FsUnit is used to get the nice assert syntax. Here we use pipelining to combine the different functions that produce and manipulate the order.

[<Fact>]
let ``Cancel order - pipe test`` () =
    
    //Arrange
    let order = 
        OrderBuilder.withSingle()
        |> OrderBuilder.withStatusInProgress

    //Act
    order.Cancel()

    //Assert
    order.Status |> should equal OrderStatus.Canceled

“OrderBuilder.withSingle” is called to get an order that is used as input to the next function, “OrderBuilder.withStatusInProgress”. The order that is sent to, and received from, the functions is not visible in the code. This might be confusing at first, but when you get used to it it’s nice, because it keeps your code cleaner.

..the function parameters can often be ignored when doing function composition, which reduces visual clutter.

F# for fun and profit

Function composition

With function composition it’s possible to combine functions in many different ways and store them in variables. These function variables can then be combined with each other. If we use function composition in our test, it looks like this instead:

[<Fact>]
let ``Cancel order composition test`` () =
    
    //Arrange
    let createOrder = 
        OrderBuilder.withSingle 
        >> OrderBuilder.withStatusInProgress

    let order = createOrder()

    //Act
    order.Cancel()

    //Assert
    order.Status |> should equal OrderStatus.Canceled

In this test there is an extra line to actually run the function. The best-written tests should be both readable and short, so pipelining is probably the better choice in this specific example.

Build lists

How do we build lists in a clean and simple way in our tests? First let’s add another function to our builder, “withStatusCancelled”:

let withSingle () = 
    Order()

let withStatusInProgress (order : Order) = 
    order.StartProcessing()
    order

let withStatusCancelled (order : Order) = 
    order.Cancel()
    order

Let’s say the test should build a list with one order that is in progress and one that is cancelled. By using composition, two new functions are defined by combining two existing ones:

    //Arrange
    let createInProgressOrder = OrderBuilder.withSingle >> OrderBuilder.withStatusInProgress
    let createCancelledOrder = OrderBuilder.withSingle >> OrderBuilder.withStatusCancelled
    
    let orders = [createInProgressOrder(); createCancelledOrder()] 

To make the code even more compact, the following can be done without losing readability:

    //Arrange
    let orders = [
        (OrderBuilder.withSingle >> OrderBuilder.withStatusInProgress)()
        (OrderBuilder.withSingle >> OrderBuilder.withStatusCancelled)()
    ]

What do you think? Which one is the best? Or does anyone have a better suggestion?

Conclusion

Since F# is a language built for composition, it’s great for providing flexible data to tests. The amount of code needed for the OrderBuilder is significantly reduced compared to C#. Handling lists is very easy and can be kept outside the builder, in the test itself.

Using F# for the unit tests is not only a good and safe way to get used to a functional language. It also makes the tests simpler and reduces the boilerplate and the number of lines in your code.

After getting used to it, I also think it makes the code more readable (if written in a good way, but that applies to C# too). I recommend you read more about how to use it as a Domain-Specific Language to take this one step further!


Download the complete code here.

Confused? Want to learn more?
Spend 60 seconds to better understand F#
Want to learn more about building blocks?


Letter to the coding newbie

Dear Newbie,

I know you struggle a bit with your new role and with how to become a successful professional programmer. I think the best thing would be if I just told you, right now at the start, what you need to know!

Many newbies are worried that they are not smart enough, but this is rarely the problem. What I have seen, though, is a lack of communication and humility. So, when you get stuck, don’t wait too long before you ask. Just do it! Asking questions means you’re smart! And even smarter is to ask the one in the team who you suspect might criticise your code later!

One advantage of being a newbie is that you’re not supposed to know that much. I mean, after all, you’re a newbie! So instead of pretending to know stuff, be curious and try to learn all the time, every day. Being a person who wants to learn is always appreciated by others. You’ll show the team that you’re genuine, and you will learn and become more productive.

If the customer is happy, then I’m happy, you may think. And while this is of course true in one sense, it’s not the whole story. Because almost equally important is being a good team member. If the team agrees about something, you need to comply. Make friends with your team members, and if something seems important to them, try to work in that direction too. They might know something that you haven’t understood yet. Assume you work with smart people and try not to be the smartest person in the room.

I’m very glad that you’ve chosen this career path and I promise you a lot of great moments! Even if we old dinosaurs seem stressed sometimes, we love your enthusiasm and drive, and the chance to relive memories from the time when we were newbies ourselves!

Yours sincerely,

Christina

KAFKA – Turning systems inside out and upside down

Inside out upside down: If something such as a system or way of life is turned inside out or upside down, it is changed completely, making people confused or upset.

collinsdictionary.com

While working with domain-driven design and event-driven development, I’ve every now and then stumbled over the event streaming platform named Kafka. When I’ve asked what it is, I’ve never fully understood the answers. I’ve also been warned that it should only be used for very specific cases.

So, I was kind of reluctant to dig into it until I started to read about Azure Event Hubs. The first thing that came up when I started to type it into Google was “Azure Event Hub vs Kafka”! So, being a person that wants to be up to date (well…) with Azure, I’ve now decided that I need to learn more about this.

I downloaded a book with the title Designing Event-Driven Systems from the prominent company Confluent. After reading it, I was really impressed! It confirms so many of the problems we have when building our systems as separate islands trying to communicate with each other. There were definitely a lot of new ideas about how to solve these problems!

What about the warnings?

Event streaming, as well as other event technologies or patterns such as CQRS and event sourcing, can add a lot of extra overhead if not used for an explicit purpose. You should consider the business you’re in and how it makes money, weighing pros and cons. Bringing Kafka into an organisation is a huge step, and it is important that you get a return on the investment.

But with that said, how can we know why we shouldn’t use it, if we don’t know what it is? So let’s get started!

What is Kafka?

Event streaming is the digital equivalent of the human body’s central nervous system. It is the technological foundation for the ‘always-on’ world where businesses are increasingly software-defined and automated, and where the user of software is more software.

Apache

Kafka is an event streaming platform, optimised for stream processing and high throughput. If you want to read the hard facts you can continue at Apache. If you want to know why it turns your view of software development upside down, preferably continue with this post!

Before I read the book, I thought of Kafka as an advanced enterprise service bus (ESB), but there is a main difference. Organisations using an ESB tend to have a centralised approach, with central teams that decide about schemas, transformations and message flows. This slows down the setup of new integrations and changes to existing ones. It also causes systems and services to evolve at a slower pace. With Kafka, the services themselves are encouraged to provide their data to others, who are free to “act, adapt and change” according to their business needs.

Centralize an immutable stream of facts. Decentralize the freedom to act, adapt, and change.

Core mantra of event-driven services

Kafka has two APIs for stream processing, Kafka Streams and KSQL. With these you can filter and join streams and run arbitrary functions on the data. There is also an API for connecting to other systems, for example databases optimised for specific purposes like search. This API can also connect to legacy systems that you are moving away from.

Kafka saves events in streams, which can also be seen as logs. Keeping data as logs means that updates are amazingly fast: just append a new event to the end of the log. Logs also make it possible for Kafka to scale linearly. It is typically installed on at least three machines but can be scaled up to hundreds. When reading and writing to a log, your data is written on all machines, which makes it easy to just add one. It is nearly impossible to hit a scalability wall with Kafka; the book describes this in more detail.

A business consists of events

A lot of things (everything?) that happen in a business can be seen as events: hotel prices are changed, bookings cancelled, refunds performed. Even when a misspelled name is corrected, it is an event. In traditional systems there are few traces of these events, since only the latest state is stored in the database. Data that could be used for historical analysis is gone, or is hard to find. The events are not visible in the system, nor throughout the flow between systems and services.

Two things that are affected by this are:

  • It’s harder to analyse historical data to make business decisions, because a lot of data is not there anymore.
  • It’s hard to find out what is causing a bug and exactly where it happens.

“Work with the system, not against it”

Keynote at #kafkasummit

Instead, we should embrace the fact that the things that happen in a business can be stored as events. These events flow in streams, and are projected onto other events. (The present state is often cached to improve performance.) When data is handled this way, there is transparency about what has happened in the system, and the flow can be followed through the different services. By analysing the data in the streams, valuable information can be collected which can serve as a foundation for important business decisions.
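
As a small, hypothetical sketch (not taken from the book; the topic, key and event payload are made up), this is roughly what publishing such a business event could look like from a .NET service using the Confluent.Kafka client:

    using System;
    using System.Threading.Tasks;
    using Confluent.Kafka;

    public static class PriceChangedPublisher
    {
        public static async Task PublishAsync()
        {
            var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

            using var producer = new ProducerBuilder<string, string>(config).Build();

            // The event is appended to the end of the log of the "hotel-prices" topic.
            // Using the hotel id as key keeps all events for one hotel in order.
            var result = await producer.ProduceAsync("hotel-prices", new Message<string, string>
            {
                Key = "hotel-42",
                Value = "{ \"event\": \"PriceChanged\", \"hotelId\": \"hotel-42\", \"newPrice\": 120 }"
            });

            Console.WriteLine($"Stored at offset {result.Offset} in partition {result.Partition}");
        }
    }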

The outside data is considered the single source of truth

There is a distinction between the data on the inside and the data on the outside of a service or system. The data on the outside is much harder to change, since a lot of other services depend on it. But the fact that many depend on it also makes it much more important than the encapsulated data inside the service. Kafka stores the entire event log outside the service, fully available to other services and systems! This makes the outside data a first-class citizen and the single source of truth.

Make data on the outside a first-class citizen

Ben Stopford

The database is turned inside out

Many of us recognise the problem of synchronising changes over several teams. E.g. one team wants to try out a brilliant idea that the customers would love. To do that they need data from a system that is maintained by another team. The other team is busy for months with changes requested by other parts of the business and can’t help out. This slows the whole organisation down and causes endless priority discussions and a fight for resources.

When the source of truth is placed outside the systems and services, it is available to anyone. It is stored as a stream of events. This makes it possible for anybody to travel in time, choosing exactly the data that is best for their purpose. This is referred to as “turning the database inside out”! To improve performance, data is also cached in Kafka tables, refreshed asynchronously. These tables can be defined for different purposes and are queried using the KSQL language.

This decouples the data in an organisation and keeps it as a shared single source of truth.
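
A hedged sketch of the other side (again with made-up topic and group names): any service can subscribe to the shared log and read the events from the beginning, using the same Confluent.Kafka client:

    using System;
    using Confluent.Kafka;

    public static class PriceChangedReader
    {
        public static void ReadAll()
        {
            var config = new ConsumerConfig
            {
                BootstrapServers = "localhost:9092",
                GroupId = "reporting-service",
                // Start from the beginning of the log - this is the "time travel" part.
                AutoOffsetReset = AutoOffsetReset.Earliest
            };

            using var consumer = new ConsumerBuilder<string, string>(config).Build();
            consumer.Subscribe("hotel-prices");

            while (true)
            {
                var result = consumer.Consume();
                Console.WriteLine($"{result.Message.Key}: {result.Message.Value}");
            }
        }
    }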

Less data needs to be stored inside

When the data is stored in shared logs outside the system or service, there is less need to store data inside. Data can be kept in memory, or read from the log or from views whenever needed. That reduces the storage required when data would otherwise be stored more than once. It also lowers the risk that the data in different systems or services starts to differ.

Conclusion

Is Kafka the answer to my naive thought about streaming data through a system in my blog post Functional Domain Modelling? It can be. Kafka works very well with functional programming. But streaming data through a system can be done in many ways, some of them certainly more lightweight and less complicated than Kafka. There is a lot more to learn in that area!

Increased interest from developers

To develop the best possible services for their customers, many companies need to handle a lot of data, e.g. collected in apps or from IoT devices. Huge amounts of data must be processed with exceptional performance. More and more developers realise that they must find new ways to tackle these requirements. I recently joined the (online) Kafka Summit together with 25,000 other developers. Most of the participants described themselves as beginners, which indicates an increased interest in this technology. Of course, there is a huge step from building traditional systems with REST APIs and a request/reply approach, to doing it with event streaming. It’s a completely different way of thinking.

Kafka is also a very complex platform, and it requires specialised people just to manage the hosting. This suggests that large-scale use is needed to make it profitable. It seems like they are working on that, though, both by improving the product itself and by making it easier to host in the cloud, e.g. by using Azure Event Hubs. Techniques for event streaming and event sourcing in general tend to drive complexity, especially as they are less intuitive and you have to find different solutions than you normally would.

Not mature enough for all of us…yet!

I’m quite impressed with Kafka and find it very interesting. This is mainly because I’ve been working at large companies with hundreds of systems. There it is quite obvious that the data flows through the organisation, rather than being stored in separate databases.

One question that I ask myself is: would we benefit from using Kafka even in applications that do not have enormous amounts of data in combination with extreme performance demands? I think we might…eventually. These use cases might be handled by other tools than Kafka, but my guess is that a transformation to a more common use of event streaming will take place during the next ten years.

I’ve used event sourcing and CQRS in several projects, from quite pragmatic solutions within a single microservice to fully implemented throughout the whole system. There were lessons learned, and we can discuss whether it was the right way to go or not. Nevertheless, it gave me a new dimension to software development that I can’t move away from. The technologies are not mature enough yet, and we as developers are still very much stuck in the existing paradigm. But I think that event streaming is something that we’ll have to learn and get used to, in most applications. Exactly in what forms, we don’t know yet.

But, for sure…

To boil a whole book down to one tiny blog post is of course impossible. There is so much more that I want to squeeze in, so many more smart conclusions and so many quotes. But one thing I’m pretty sure of:

Kafka turns the database inside out and the way we build systems upside down!

What about the connection to Azure Event Hubs? Please read more in the links below!

Links:

Apache’s description of Kafka: Apache

What Azure Event Hubs are: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview
How Azure Event Hubs relates to Kafka: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview
Kafka .Net Client: https://docs.confluent.io/current/clients/dotnet.html

Pat Helland: Data on the Inside and Data on the Outside

I’m now London based

A few weeks ago, my family and I took a huge step in our lives: we relocated from Gothenburg to London!

My everyday life has totally changed; a new house, a new city, not knowing anybody around me. I find myself shopping at the butcher’s, the greengrocer’s, the fishmonger’s and the bakery rather than driving to a supermarket. Amazon is my best friend; I can order almost anything and get it delivered two hours later. I walk in forests that look like they were taken from a Robin Hood movie, instead of the enormous fir forests I’m used to in Sweden. Three weeks into my new life, after some ups and downs during the actual move, I’m realising it has started out very well!

“Real change is difficult at the beginning, but gorgeous at the end. Change begins the moment you get the courage and step outside your comfort zone; change begins at the end of your comfort zone.”

Roy T. Bennett

Relocating to a new city, and country, might also turn things upside down in your career. I’ve usually got my jobs and contracts in Gothenburg via my social network and previous customers. In London I have to join the big job market, competing with a lot of other equally skilled applicants, which is both exciting and challenging. I definitely have to step out of my comfort zone, which will hopefully make me grow and enable further personal development.

The Corona pandemic is not making things easier. Or, maybe it is? These days we’ve all got used to working remotely, which means that more companies than ever have their teams distributed over short and long distances. From that perspective I could still find a contract in Sweden and be part of a Sweden-based team, although I’m London based!

I have a lot of different options for how to proceed from here, and I’m so excited to see where this will take me. If anyone has any input or ideas, I would be curious to hear them!

View from London Bridge on our first visit to the city after moving to our new house in Barnes.

Teamwork – Complement instead of Compete

In an article called “The Secrets of Great Teamwork”, published in HBR (June 2016), Haas and Mortensen claim that an enabling condition for successful teamwork is that the team has a strong structure:

High-performing teams include members with a balance of skills. Every individual doesn’t have to possess superlative technical and social skills, but the team overall needs a healthy dose of both.

Harvard Business Review

This is something that I’ve experienced a lot and learned to appreciate. In many cases, however, the different personalities and skills are perceived in the opposite way, and problems occur accordingly. People get on each other’s nerves and team friction appears.

Agile methodologies are not enough

As you probably already know, there are several methodologies that provide a good foundation for effective teamwork, for example Agile, Lean and Kanban. These methodologies are very good and should be used, adapted to the specific conditions of the team.

My experience is, though, that methodology can never replace soft skills, and believe me, I’ve been in projects where we tried – a lot! I think that the first step to improving the soft skills is to start looking at each other in a different way.

Different personalities

Many conflicts and much lack of cooperation derive from the fact that we simply are different. For example, some people want to move fast, others are slower but thoughtful; some want to take risks while others want to play it safe. And so on.

It is good that we have different personalities, since it makes the team stronger.

Manager

Next time you get annoyed by a colleague who doesn’t think like you, try to appreciate the other person’s perspective. If you’re fast and tend to take risks, it might be good for you to think everything over once more. If you’re slow and always want to take the safe approach, you might need someone to help you move forward.

Everybody has both good and bad sides – even you and me! If you’re very good at details, you might not have the best overview of things. Your good sides often come with bad ones.

Good qualities come with bad ones.

Manager
Different skills

The same is true for different skills in a team. One teammate might be very experienced with the technologies the system is built with. Another has worked a lot with newer technologies. Excellent, they complement each other! Let the one who knows the new technologies evaluate whether they can be useful, but let the other balance it out and make sure it really applies to the business problem that the system solves.

Make the differences an advantage

So, we should see differences as an advantage instead of getting competitive with each other. It makes the team stronger. Everybody is needed and should be encouraged to contribute as much as they can, bringing out their good sides.

Strive for a complementary team!

As 4D (dynamic, diverse, dispersed, and digital) teams become more and more common, I think that one key to success will be to change the view of each other to complementary instead of competitive!

Read the Harvard article here: https://hbr.org/2016/06/the-secrets-of-great-teamwork

Unit testing – Provide flexible data

It’s rather easy to find information about how to write a good unit test in terms of the test itself. But how should we handle the data that we need to test? How can we create test data structures in a flexible way?

As a DDD (Domain-Driven Design) person, I like to be able to read the code like I read a book. This is of course very hard, but something that I strive for. Therefore I often use the builder pattern for unit tests.

Builder pattern

Generally, a good approach is to create the data within the test itself. This makes it easy to see exactly what the test does. But after a few tests you often find the same code repeated many times, so you start looking for a generic way of generating the data suitable for each little thing you want to test. The builder pattern comes in handy here.

Let’s say you want to test the following test case:

* When you cancel an order, it gets the status cancelled. *

The test

You need to create an order with the status InProgress. Your test will then change the order to Canceled and verify that it worked. In a real system, you would probably have a lot of tests around status changes. Therefore it would be suitable to have code that creates an order in different statuses.

With the builder pattern to generate the order the test might look something like this:

        [Fact]
        public void CancelOrderTest()
        {
            //Arrange
            var order = new OrderBuilder()
                .WithSingle()
                .WithStatusInProgress()
                .Build()
                .First();

            //Act
            order.Cancel();
            
            //Assert
            order.Status.Should().Be(OrderStatus.Canceled);
        }

Let’s start with the “Arrange” part. A new OrderBuilder is created and asked to create a single order with the status InProgress. When looking into the builder later on you’ll see why the .Build() and .First() methods are needed.

The “Act” part here is intended to cancel the order, and the “Assert” checks whether the order has got the status Canceled. The FluentAssertions package is used to get a nice and fluent syntax for the assertions.

The builder

The OrderBuilder is a class with methods that return the class itself. This is what makes it possible to use the fluent syntax in the test. Only the Build() method returns the actual list of orders:

    public class OrderBuilder
    {
        private List<Order> _orders = new List<Order>();

        public OrderBuilder WithSingle()
        {
            _orders.Add(new Order());
            return this;
        }

        public OrderBuilder WithStatusInProgress()
        {
            _orders.Last().StartProcessing();
            return this;
        }

        public List<Order> Build()
        {
            return _orders;
        }
    }

I chose to prepare the builder to handle many orders, by adding a private list variable at the top of it. Of course that could also be a single order.

The WithSingle and WithStatusInProgress methods both operate on the private class variable and step by step build up the order that the caller of the builder wants to have. More methods can be added and freely combined and called after each other to e.g. build up a list of orders:

        //Arrange
        var orders = new OrderBuilder()
            .WithSingle().WithStatusInProgress()
            .WithSingle().WithStatusNew()
            .Build();

The Build method returns the orders and breaks the chain by not returning the OrderBuilder class.
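
The WithStatusNew method used above is not part of the builder shown earlier; a minimal sketch, assuming the Order class exposes a settable Status property and an OrderStatus.New value, could look like this:

    // Hypothetical builder method for the example above: puts the latest order in status New.
    public OrderBuilder WithStatusNew()
    {
        _orders.Last().Status = OrderStatus.New;
        return this;
    }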

A functional approach

Another approach to the problem is to use a functional way of building up the test data. Sometimes you have a lot of properties and want to change them in a controlled way to get exactly the data you want to test. You can add a method to the builder that lets you create an order this way:

    [Fact]
    public void CancelOrderTest2()
    {
        //Arrange
        var order = new OrderBuilder()
            .WithSingle(o => o.Status = OrderStatus.InProgress)
            .Build()
            .First();

        //Act
        order.Cancel();

        //Assert
        order.Status.Should().Be(OrderStatus.Canceled);
    }

From the test you can set which properties you want to change on the order, and combine this with other builder methods if you want. The WithSingle builder method that takes a function parameter looks like this:

    internal OrderBuilder WithSingle(Action<Order> action)
    {
        _orders.Add(new Order());

        action?.Invoke(_orders.Last());

        return this;
    }

It takes a function as an argument, creates an order and applies the function to it.

Conclusion

The builder pattern has helped me a lot and it is always fun to use! I show it here as I learned it from the start, but in every project I use it in a flavour suited to that specific environment.

How to work with software architecture?

Are there any good reasons not to continuously work with a system’s architecture? I don’t think so. In my opinion it should be part of every development team’s work to know in what direction they’re heading, and also to try to influence that direction. I’ll try to help by describing how it can be done.

In most teams I’ve been working with there are a lot of complaints about the current code, but there is very little money for refactoring. The system is slowly degrading, changes become expensive, and random refactorings (if the team can for once persuade their management) are done without achieving any real improvements.

I usually follow the same steps to get control of a system’s architecture and find a way forward:

  1. Describe the current architecture
  2. Define the target architecture
  3. Sell in to management
  4. Implement with team

It’s absolutely fundamental that the team starts to cooperate during this process, because to be successful you have to get the whole team running in the same direction. Quite often there are different opinions that have to be discussed and considered, to eventually agree on a way forward.

Describe current architecture

It is very common that developers can’t describe their system, or draw it on a whiteboard. They know exactly where they need to make their changes, and they read the code efficiently, but they can’t give an overview of the system. This makes it very hard to describe it to new developers, and to discuss it with other team members. Which part of the system are we actually talking about? What are the responsibilities and purpose of that part?

So the first step to improving the system is to describe the current architecture, to draw a picture of it as-is. When describing the system, don’t go into too much detail. One page is usually enough. Of course this depends on the size of your system, but be rather too brief than too detailed. I would recommend something like this:

As-is architecture

The different clients and servers are described, along with which technology/tool they are developed with. There is also an overview of the other systems this system integrates with, and how.

When you’ve finished this description, put it on the wall and use it when you talk about the system, or when there are developers who are new to the team. Even project leaders and product owners can understand this kind of picture.

Define a target architecture

The next step, now that you know what you have, is to define the target architecture: how you want the system to be.

Now, if not done earlier, and even if you’re an appointed architect, you need to start discussions within the team about the future. Because it is now that you have the best chance to involve the team members, listen to them and make them part of the decisions.

Involved team members will be dedicated when the architecture is to be implemented later on!

This takes time. It is also a phase full of learning, discussions and compromises. It is important that you understand as much as possible about the business:

  • What does the business make money from? (The vacuum cleaners or the dust bags? The cars or the parts?)
  • What part of the system is core, i.e. supports the business domains that make the most money?
  • Which parts of the system are most in need of improvement, from the business perspective?
  • How would the business like to see the system evolve?

If your system is not very small, you need to do some domain modeling here to divide the system into smaller parts and structure the business logic into domains where it can be found, changed and reused. Also try to define non-functional requirements like availability, performance and usability. These might already be known, but it’s good to write them down and agree on them.

Preferably, do proofs of concept for the parts of the architecture that you feel uncertain about. But since you have not convinced your management about this new architecture yet, you might not have time or resources to do technical investigations at this point. Remember that you can define the target architecture based on what you know at this moment, and adjust it later as you learn more.

It can also be a good idea to learn more general things about architecture and coding conventions together with the team: read books, watch videos, have study groups. This makes it easier to get out of “deadlocks” when it comes to different opinions.

When you’ve gone through this phase you should hopefully end up with a target architecture more or less different from the current architecture:

To-be architecture

Try not to get hung up on the technical choices made in the target architecture above. It’s hard, I know, but this is just an example!

Sell in to management

Now you know your current architecture and you have defined the target architecture. It’s time to decide how to reach the target architecture.

Whether to implement the target architecture in one big bang release, or to rewrite the system part by part, must be decided case by case. I always recommend doing it one part at a time, because then you get payback from your investment sooner. That also significantly lowers the risk.

What should you start with? From my experience, the best is if you can find something that benefits both the business and the system itself. You can argue that rewriting something without improving the business functionality is also good for the business, because it will save money in maintenance, improve automated testing etc., which is probably true. But it can be hard to convince management to make that kind of investment.

In short, when moving towards the target architecture:

Start with the parts where the business wants a lot of functionality improvement.

Use the architecture pictures that you’ve made when discussing with your management, but leave technical details out of the discussion as much as possible. Try to see it from their perspective.

Implement with team

Hopefully, the work with the target architecture has resulted in a good team spirit, because that’s what you need when you start to implement it. The way forward is now clear to everyone, but there are still a lot of details that must be solved and maybe a lot of new technology to be dealt with.

It is now important to keep focus on the benefits that you sold in to the management, and make sure that you deliver something that fulfils the requirements. Keep the risks and the scope down. What you deliver now will hopefully build confidence and open up new possibilities for the future. If you fail, which we all do sometimes, be clear and honest about what failed and what you’ve learned from it.

There is a lot to write about the implementation of the architecture, too much for this blog post. I would recommend you to read more about hexagonal architecture or onion architecture, business logic and layering. It is also a good idea to read about business events, and event-driven development. Some of these I might have happened to write about in other blog posts!
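
As a tiny, hypothetical illustration of the hexagonal/onion idea (all names are made up): the domain layer owns the port (an interface), and the infrastructure layer provides the adapter that implements it:

    namespace HotelBooking.Domain
    {
        public class Booking { public int Id { get; set; } }

        // The port: defined by the domain, with no knowledge of any database.
        public interface IBookingRepository
        {
            void Save(Booking booking);
        }
    }

    namespace HotelBooking.Infrastructure
    {
        using HotelBooking.Domain;

        // The adapter: implements the port with database-specific code,
        // keeping the infrastructure dependency outside the domain layer.
        public class SqlBookingRepository : IBookingRepository
        {
            public void Save(Booking booking)
            {
                // Database access code goes here.
            }
        }
    }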

Read my blog posts about:
Functional Domain Modeling
Why Micro services
What is Business logic

This is a very interesting article about legacy architecture modernisation that I wish I had written myself:
Legacy Architecture Modernisation with Strategic Domain-Driven Design

Also, read more about architecture and modeling from these famous writers:
Domain-driven design by Martin Fowler
Clean architecture by Uncle Bob