
Adventures in Kubernetes (Part 4)

These aren't the teams you are looking for

In Part 3 of our series, we mentioned that one benefit of Kubernetes (K8S) many teams like is the ability to give each team an isolated namespace for its collection of microservices. We also mentioned that this feature was not particularly useful for us. That isn't to say we don't take advantage of namespaces; we do. But when we started our K8S journey, the cultural assumptions about how microservices are deployed did not match how we actually developed them.
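For readers who haven't used the feature: giving a team its own namespace takes only a short manifest. A minimal sketch, where the team name and label are hypothetical, for illustration only:

```yaml
# A namespace isolating one team's microservices.
# "checkout-team" is a made-up team name for this example.
apiVersion: v1
kind: Namespace
metadata:
  name: checkout-team
  labels:
    team: checkout
```

Deployments, Services, and ConfigMaps created with `--namespace checkout-team` are then scoped to that team's corner of the cluster.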

First, let's spell out the cultural assumptions you will find across the internet behind the ideal methods of creating microservices.

  1. Each microservice is created by a single team, and only that team maintains the microservice.
  2. Teams have full control over what gets deployed to production.
  3. Teams should not depend on other teams.
  4. Teams have isolated responsibilities and do not work on the same services.
  5. Each service has only a single responsibility. 
  6. Each service has a fixed interface so other services can use it without changes.

At OnceHub, we have a different culture from the above points, though as with all good companies, the culture is always changing and adapting as we learn more about ourselves and scale. Here are the key points of our culture, in relation to the above, when we started using Kubernetes and our culture now.

| # | At the start of our Kubernetes journey | In the middle of our Kubernetes journey |
|---|----------------------------------------|------------------------------------------|
| 1 | All services are worked on by all teams, divided by feature, not responsibility. | Services are worked on by teams within the specified business unit. |
| 2 | Teams update services and then "throw them over the wall" awaiting a deployment/release. | Teams contribute to regular deployments at the end of each sprint, and coordinate releases with marketing or product. |
| 3 | Teams get regular knowledge transfers from other teams, and coordinate dependencies. | Teams coordinate with other teams in their business unit. |
| 4 | Teams frequently change what they work on, unless they have specific experts. | Teams work on specific business units, unless they are in rotation for cross-cutting issues like triage or security. |
| 5 | New services are created based on existing database structures. | Services grow until they need to be split. |
| 6 | Services change with each other: multiple "distributed monoliths". | Some services change with each other; others are fully cohesive and isolated. |

As you can see above, our company did not fit the "best practices" or assumptions that most books and articles make when talking about Kubernetes and microservices. For a long time we thought we were the only ones with this mismatch, but as we have engaged more with the community, we are learning that most companies don't have the "ideal" team structure or culture. This tends to be true in general: the more people are talking at conferences or writing books about how to do something, the less likely it is that your company is already doing it. And that's OK. The reason they write the books and give the talks is precisely that most people aren't doing it yet.

What we have learned from our journey is that sometimes the "best practices" evolve to match the tools and systems you are using, and sometimes the tools and systems are a good fit for an existing practice. Regarding Kubernetes: it's a tool developed by and for teams that were already heavily invested in microservices. They learned their pain points from experience, and they already had a culture that matched the original six points. So when people write about Kubernetes, they correctly use the cultural assumptions that surrounded the people who created the software to solve a specific problem. However, if Kubernetes were only useful in that single cultural environment, it would not have become the successful and hyped buzzword that it is.

Normally, a company whose culture and structure do not match the underlying software can suffer from a kind of company-wide cognitive dissonance, as the teams building the software run up against "Conway's Law". The law states that "Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization's communication structure". Often the inverse of this law is also true: if a team's communication structure does not match the structure of the system, one or the other will change, and if that change is not properly planned it can cause disruptions and a loss of productivity. Because of this, many people advocate that a tool like Kubernetes should not be used unless the teams building the microservices have a similar underlying structure. However, we have found that the gap between what K8S assumes and how your teams actually work does not prevent you from gaining the benefits of Kubernetes.

Your company can have the "worst" command-and-control environment, teams larger than "two pizzas", and none of the highly evolved agile techniques used to build the right software at the right pace, and still come out gaining the benefits we delineated in Part 3 of this series, or even benefits that other teams aren't aware of yet. Kubernetes is akin to an operating system, and people who use Linux, Windows, or macOS often forget what the OS is even doing for them. As you grow in your Kubernetes journey, YAML manifests and Kubernetes APIs will fade into the background in the same way. Your highly distributed, scalable, and possibly global application will benefit from the solid underlying "operating system", even if your company culture or team structure doesn't match the assumptions behind it.

In our view, Kubernetes is successful because it works well even for teams that don't share the ecosystem of the people who created it. In fact, Kubernetes can be a motivating factor in improving team and company culture, especially around continuous deployment, immutable infrastructure, and operations maintenance. Because so much of K8S is defined and operated with arcane YAML files, it lends itself to automation, and the built-in security rules can push teams towards working in isolated environments, where services can communicate but the blast radius of a bad deployment is limited.
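As one sketch of what "limited blast radius" can look like in those YAML files: a default-deny NetworkPolicy stops traffic from reaching a namespace's pods unless a later policy explicitly allows it. The namespace name below is hypothetical, and this is a minimal illustration rather than a recommended production policy.

```yaml
# Deny all ingress traffic to every pod in the namespace by default.
# Later NetworkPolicies can selectively re-allow specific traffic.
# "checkout-team" is a made-up namespace name for this example.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: checkout-team
spec:
  podSelector: {}        # an empty selector matches all pods in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied
```

Note that NetworkPolicies only take effect on clusters running a network plugin that enforces them, such as Calico or Cilium.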

While you may not have the team structure that Kubernetes expects, or you may be in the process of making that transition, Kubernetes can still work for you, assuming all the other context and scale issues we discussed in Part 1 and Part 2 are in alignment. In our next installment, we will write about how Conway's Law was in effect during our Kubernetes journey, and how it has helped us create the culture we wanted but weren't certain how to reach.


Avi Kessner, Software Architect

Avi started as a Flash Animator in 1999, moving into programming as Action Script evolved. He’s worked as a Software Architect at OnceHub since September 2019, where he focuses on Cloud Native Automated workflows and improving team practices. In his free time, he enjoys playing games, Dungeons & Dragons, Brazilian jujitsu, and historical European sword fighting.