Cultural Warning Signs You Are Not Ready For Microservices

Paul Seymour • Mar 12, 2019
The original plan for this post was an overview into the technologies and patterns used in microservices, in particular I wanted to look at the bits we often see missing or poorly implemented. A colleague of mine, Dan Dekel, recently gave a talk at a local meetup on a similar topic, and it generated more debate than he had expected. He covered the importance of CICD, running multiple instances of each microservice, consistency across the environments, patterns such as circuit breaker, general transient fault handling, correlation and analytics on logging, API versioning and consumer driven contract testing. It seems that opinions (and emotions) on microservices still run deep, and what ensued was a debate on synchronous vs asynchronous architectures.

When I speak with developers who have had a poor technical experience with microservices, there seems to be a common denominator. The team was often well aware of the problems, and they had ideas on how to fix them, but they were essentially stuck with a blueprint handed down from above.

So in this post I’m instead going to look at some of the cultural warning signs you might want to consider before making a decision to go with microservices.

Moving to microservices will fix our scalability issues

There is no question that well-designed microservices can scale in ways that are difficult for a monolith. But seldom do I see a monolith that has gotten anywhere near the limits to which it could scale. Monoliths can and do operate at global-scale loads. If your monolith is having trouble supporting a few hundred users, then it's probably not because it's a monolith, or because it was written in some (now) less fashionable framework. Scale issues will exist regardless of the architecture, and if you can't fix these problems in your existing application, microservices aren't necessarily going to improve the situation.

Microservices will improve the throughput of our development teams

This is an interesting one, because the ability to decouple teams and have them work on separate groups of microservices is a significant productivity gain.

One observation is that a monolith can create contention at the source code level. As the number of developers increases, they can easily trip over each other as changes are merged. At a certain size and level of complexity, a few key individuals become the only people with the knowledge necessary to make significant changes to the monolith, and it becomes difficult to scale around these people. In contrast, microservices will generally be divided into much smaller self-contained repositories, and this makes it easier and safer to make changes. However, this isn't an intrinsic benefit of microservices; it is really just an argument for good code structure and separation of concerns. You could achieve something similar in a monolith by organising the code differently.

The most significant productivity improvements are realised inside an organisational culture that understands and grants autonomy to the teams. And that remains true independent of whether you are building monoliths or microservices. Further, if you don’t already have that sort of culture, then a pivot to microservices is likely to create a really big mess.


Here are a few warning signs that your culture might not be ready to support microservices:
  1. You prefer to make technology decisions at the middle or senior management level
  2. You believe your teams lack the maturity to make good design and technology choices
  3. Your teams are focused on technology and not on business value
  4. You don’t have (or want) clear specialisation and domain boundaries between teams
  5. You are trying to implement a scaled Agile process and hope microservices will help
  6. You have backlogs shared between teams
  7. You have decision making layers between the Product Owner / Development team and the end user
  8. You think your current problems are the result of an incorrect technology choice
  9. You have a CAB (Change Advisory Board)
In one way or another, all of these items shift control away from the development team. Why is this particularly problematic with microservices? The productivity gains of microservices come from the fact that they are decoupled from each other (and by extension, from other teams). This empowers teams to operate with a minimum of external dependencies, each at close to its sustainable capacity.

Anything that gets in the way of this autonomy has the potential to render the productivity gains of microservices irrelevant, and you'll likely be running significantly slower than you were with a monolith or an older tech stack. If you are considering microservices, or have already started using them, I'd highly recommend reading Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim. It provides a comprehensive overview of research into the capabilities of high-performing teams and how they can be measured. Don't head down the microservices path without a culture that can support autonomous teams.
