Microservices and Organisational Culture

Paul Seymour • Oct 18, 2018
In our first post we considered the CI/CD maturity needed to manage microservices at scale. But DevOps extends beyond tooling and speaks equally to development and organisational culture. One of the great benefits of a microservice architecture is its ability to scale work across multiple teams. It facilitates ‘decoupling’ by allowing teams to operate autonomously within the domain context and microservices they own. Microservices fit very nicely into a team-based autonomous delivery model, and it’s a widely held industry belief that this is one of the most compelling reasons to build with them.
 
But even the most capable teams, with excellent DevOps skills and practices, will struggle with microservices in an organisation that lacks a high degree of Agile maturity. We've seen organisations regress to Waterfall practices under the technical and business demands of a troublesome microservices platform.
 
So how can organisations avoid these challenges and deliver a successful microservices project?
 
First and foremost, you need a management culture that understands Agile development processes and is able to embrace and adapt to change. A good example is the way microservices need to be tested and released. Traditional monolithic approaches, such as releasing everything into an ‘integration’ environment and having tests confirm that all the bits play nicely, just don't scale when you have dozens of interdependent APIs released multiple times a day. Traditional release management processes quickly become a quagmire when large numbers of microservices need to be tested, versioned and moved monolithically through different environments.
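One pattern teams commonly reach for instead of big-bang integration testing is consumer-driven contract testing: each consuming team publishes the shape of the responses it actually relies on, and the providing team verifies against that contract in its own pipeline, so neither side needs a shared ‘integration’ environment to release. The sketch below is illustrative only; the `/users` endpoint, its fields, and the stubbed response are assumptions, and real projects typically use a dedicated tool such as Pact rather than hand-rolled checks.

```python
# A minimal, self-contained sketch of a consumer-driven contract check.
# All names (the user fields, the stubbed response) are hypothetical.

# The contract the consumer publishes: only the fields it depends on.
USER_CONTRACT = {
    "id": int,
    "name": str,
    "email": str,
}

def verify_contract(response: dict, contract: dict) -> list:
    """Return a list of contract violations; an empty list means the provider passes."""
    problems = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}"
            )
    return problems

# In the provider's pipeline this response would come from the real service;
# here it is stubbed so the example runs on its own.
provider_response = {"id": 42, "name": "Ada", "email": "ada@example.com", "role": "admin"}

assert verify_contract(provider_response, USER_CONTRACT) == []
# Extra fields (like "role") are fine: consumers pin only what they depend on,
# so the provider can evolve its API without a monolithic integration cycle.
```

Because each provider runs these checks in its own pipeline, a breaking change is caught before release rather than discovered in a shared environment days later.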
 
If your organisation has a middle management layer that is more ‘command and control’ than ‘servant leadership’, then a microservice architecture will present them with significant challenges. That may be a good thing if you are in upper management and looking to drive Agile change within an organisation. But it can be disastrous for a business that needs to release software quickly in order to achieve its quality and customer goals.
 
There are some warning signs that your organisation might not be ready for microservices, and these problems can be difficult to fix once development is underway. Addressing them early will significantly increase the chances of a successful outcome.
 
Potential pitfalls to be aware of:
  • The project, DevOps, testing and architecture managers are focused on controlling rather than mentoring, facilitating and unblocking.
  • There is an expectation that teams should not own microservices; that all teams should be able to work from a shared backlog and codebase.
  • There is a reluctance to release software frequently and automatically.
  • There is a poor understanding of Agile processes and principles (e.g. all teams are expected to have a shared measure of velocity).
  • There is a resistance to cross-functional teams (e.g. a belief that it's better to have separate frontend, backend and test teams).
  • There is a tendency toward a ‘hero developer’ culture, where certain key individuals are elevated above the teams that support them.
 
Microservices are a great fit for an organisational culture that thrives on rapid change, has a customer focus, and understands how to inspire and empower teams. They allow software development to operate at a scale and efficiency that is simply not possible with most other architectures. So good luck to all organisations embarking on a microservices journey, and as always, call the Team at Patient Zero – Empowered Teams, Exceptional Results!
 
In our final post we’ll look at technical Do’s and Don’ts with Microservices.
