The question is simple… “How agile do we need to be?”… The path to the answer is complex and the software architecture is a crucial part of this puzzle.

Your software architecture will determine whether it is possible to release on demand, every few seconds, every few weeks or every few months. For example, if your goal is to rapidly respond to market changes and release once a day, but your architecture doesn’t support automated testing, Continuous Integration (CI) or Continuous Delivery (CD), you will never realise that goal. You may achieve some success by implementing additional tooling or processes, but all you are actually doing is adding complexity and reducing your ROI. No amount of money or tooling can compensate for an architecture that does not support your goals.

If you are not familiar with the architecture of the software that your company is writing, make the effort to become familiar today, because tomorrow may be too late to discover that the architecture doesn’t allow you to be as agile as you need to be.

Why agility matters:

Thanks to DevOps and the cloud, infrastructure is no longer a hindrance to a secure global presence and rapid expansion.

Cloud service providers enable anyone to launch cloud-native solutions in a matter of minutes, and to some extent company size and billions of dollars are no longer market differentiators. A competitor with similar data, tools and knowledge who is able to sell a similar product or provide a similar service can launch at a multinational level from a home office with ease.

A cloud infrastructure experiment:

I wanted to see how easy it would be to replicate a multinational Software as a Service (SaaS) company structure in the cloud. The results led me down a path of Continuous Delivery discovery.

Using an Azure credit and Azure Resource Manager (ARM) templates, I was able, in just five hours, to create a simple company network structure with a secure and recoverable cloud production environment facilitating a web presence on four continents.

The deployment consisted of the following three elements:

  • IaaS: A cloud-based “on premises” private network consisting of several virtual machines, backed by Azure AD for secure access from anywhere, with a VPN Gateway for connectivity to the cloud production environment.
  • IaaS: A cloud-based production environment consisting of several auto-scaling VMs dedicated to various workloads; a load balancer to distribute requests across the VMs; a backup strategy ensuring point-in-time recovery of both data and VMs; a failover strategy to meet an always-available SLA; and a VPN Gateway for secure connectivity to the “on premises” network. All secured by Azure AD and role-based access control.
  • PaaS: Auto-scaling ASP.NET web apps deployed from a GitHub repository and hosted in the US, Europe, Asia and Australia, backed by a traffic manager and a content delivery network to ensure that my users were consistently given the best possible experience.
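For anyone who hasn’t worked with ARM templates, a minimal sketch of one follows. It declares an App Service plan and a web app like the PaaS element above; the parameter name, SKU and apiVersion are illustrative assumptions, not the templates I actually used.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Web/serverfarms",
      "apiVersion": "2022-03-01",
      "name": "[concat(parameters('siteName'), '-plan')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "S1" }
    },
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-03-01",
      "name": "[parameters('siteName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', concat(parameters('siteName'), '-plan'))]"
      ],
      "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', concat(parameters('siteName'), '-plan'))]"
      }
    }
  ]
}
```

Because the whole environment is declared in files like this, the entire deployment is repeatable: tear it down and redeploy it as often as you like.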

I concede that the above is a simple solution that omits some of the finer details of a corporate solution, such as a message service bus, content and data caching, containers, etc., but it is enough for a startup to get going. And without the cloud, none of it would have been possible on my measly budget of Azure credit.

Just to be clear, the projected monthly costs for this setup were way above the allocated credit but I was still able to deploy and run it for several days using nothing but the credit.

The surprise, for me, was not that I was able to do it with no budget to speak of but rather how easy it was to do. In total, it took roughly 4 hours to create the templates and a further hour for Azure to provision everything. As this was done with templates, I could reliably redeploy this exact structure again and again and again… and all it would “cost” is the 1 hour for the instances to be provisioned.

So if infrastructure is no longer a hindrance, does this mean that simply implementing an agile culture and development methodology will enable us to produce rainbows every time we snap our fingers?… Not if our architecture doesn’t support it!

Architecture and Continuous Delivery:

On my path to Continuous Delivery discovery, my experiences and research (see the experience summary and a few research links at the end of this post) have led me to realise that not every project needs to be able to deliver on demand, and therefore not every project needs the architecture or systems required to enable it. But every project should have a clear definition of, and requirement for, how often delivery is needed.

The question is simple… “How agile do we need to be?”… The path to the answer often leads to other questions like:

  • Do our customers need us to respond to their needs in a matter of seconds or minutes or is it okay to respond to the needs every few weeks?
  • Does the competitiveness of the market require us to be able to respond to needs in seconds or minutes or is it okay to respond to needs every few weeks?
  • Is the product / solution / offering going to revolutionise the market and result in others implementing Continuous Delivery in order to compete?
  • What if a startup enters the market and implements a cloud-native, Continuous Delivery model?
  • What if an existing competitor switches to a cloud-native, Continuous Delivery model?
  • Etc.

Answering these questions should result in a delivery goal definition similar to:

We must be able to build and deliver working software solutions in response to market changes rapidly and we must be able to release these solutions …

  • On demand.
  • Once a day.
  • Once a week.
  • Every two weeks.
  • Once a month.
  • Every six months.
  • Etc.

If you are conceptualising a software solution, or if you are a software architect designing a system, you need to ensure that the architecture is designed to meet the company’s or client’s delivery (agility) goals.

Let’s take the following as an example…

We must be able to build and deliver working software solutions in response to market changes rapidly and we must be able to release these solutions on the same day that they are delivered.

If we deconstruct that statement into its core parts, we are presented with the following needs:

  • We must be able to build working software solutions rapidly.
  • We must be able to deliver working software solutions rapidly.
  • We must be able to release the working software solutions on the same day that they are delivered.

From the above, it is clear that the solution must implement Test-Driven Development (TDD), which enables automated testing, which in turn enables Continuous Integration (CI). And that’s the easy part.
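To make the automated-testing point concrete, here is a minimal sketch of the kind of unit test a CI server runs on every push. The function and its rules are hypothetical examples, not taken from any real system.

```python
# Hypothetical production function under test.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# The automated test the CI pipeline executes; a failure blocks the build.
def test_apply_discount() -> None:
    assert apply_discount(100.0, 10.0) == 90.0
    assert apply_discount(50.0, 0.0) == 50.0


test_apply_discount()
print("all tests passed")
```

A CI server simply runs the whole test suite on every commit; a green run is the evidence that lets the pipeline proceed.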

In order to release on the same day, we need to implement Continuous Delivery (CD). True Continuous Delivery requires that changes be branched from, and pushed and merged directly back to, trunk/master. We therefore need to have confidence in, and leverage, the CI process in order to ensure confidence in the CD process.
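The trunk-based flow described above can be sketched with plain git commands; the repository, file and branch names here are throwaway examples.

```shell
# Trunk-based development: a short-lived branch cut from trunk and
# merged straight back, rather than a long-lived feature branch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git checkout -q -b main
git config user.email ci@example.com && git config user.name CI
echo "v1" > app.txt && git add . && git commit -qm "initial"

git checkout -q -b feature/small-change        # branch from trunk
echo "v2" > app.txt && git commit -qam "small change"

git checkout -q main
git merge -q --no-ff feature/small-change -m "merge back to trunk"
git branch -d feature/small-change             # branch lives hours, not weeks
```

Keeping the branch this small is only safe when the CI suite gives you confidence that the merge hasn’t broken anything else.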

Both of the above are great, but we haven’t mentioned the architecture once. So why am I harping on about the architecture?

In order to be able to build rapidly and to confidently be able to branch from and push back to trunk we need to be 100% confident that changes will have minimal to no impact on any other part of the system.

So how do we achieve this?

With the appropriate architecture, that’s how!

Something along the lines of a completely decoupled front end that connects to a highly decoupled, domain-driven design microservice back end should do the trick. In fact, if we can design a completely decoupled domain-driven microservice back end, we’ll be able to achieve so much more. What’s wrong with building a microservice on demand and then scrapping it when it is no longer needed or relevant? If it serves a purpose, ship it! If it no longer serves a purpose, scrap it!
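The decoupling idea can be sketched in a few lines: a consumer depends only on a narrow contract, never on a concrete service, so the implementation behind the contract can be rewritten or scrapped without touching the consumer. All the names here (pricing, checkout) are invented for illustration.

```python
from typing import Protocol


class PricingService(Protocol):
    """The contract: the only thing other services may depend on."""
    def quote(self, sku: str) -> float: ...


class SimplePricing:
    """One disposable implementation of the contract."""
    def quote(self, sku: str) -> float:
        return 10.0  # stand-in price


class CheckoutService:
    # Depends on the PricingService contract, not a concrete class, so the
    # pricing microservice can be replaced or scrapped without changing checkout.
    def __init__(self, pricing: PricingService) -> None:
        self.pricing = pricing

    def total(self, skus: list[str]) -> float:
        return sum(self.pricing.quote(s) for s in skus)


checkout = CheckoutService(SimplePricing())
print(checkout.total(["sku-a", "sku-b"]))  # → 20.0
```

In a real microservice system the contract is an HTTP or messaging API rather than a Python protocol, but the discipline is the same: change behind the contract is invisible to everyone else.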

If we are 100% confident that changes will have minimal to no impact on other parts of the system and 100% confident that the CI process will identify any breaking issues then we can be 100% confident to be able to release as soon as a solution has been delivered.

But what if we don’t have a decoupled, domain-driven microservice design?

What can we do if our design is tightly coupled resulting in a manual testing cycle?
What can we do if we have to use a feature branching strategy?

The good news is that we can achieve relative success by implementing a release-train style solution, where release dates are fixed based on what our architecture, infrastructure and processes allow. Think of features being developed, tested and parked in a release backlog, ready to hop on board the next production train. The reality, though, is that this will never enable true on-demand Continuous Delivery. The interesting thing is that this may actually be an acceptable level of agility: if we don’t intend to release on demand, why architect for it?
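The train metaphor boils down to a simple scheduling rule: a finished feature ships on the first fixed date after it is parked in the backlog. A minimal sketch, with made-up dates and feature names:

```python
from datetime import date


def next_train(today: date, train_dates: list[date]) -> date:
    """Return the first scheduled release date on or after today."""
    return min(d for d in train_dates if d >= today)


# Fixed departure dates, set by what our architecture and processes allow.
trains = [date(2024, 1, 10), date(2024, 1, 24), date(2024, 2, 7)]

# Features developed, tested and parked, waiting to board.
backlog = ["feature-a", "feature-b"]

departure = next_train(date(2024, 1, 15), trains)
print(f"{backlog} ship on {departure}")  # → ship on 2024-01-24
```

The worst-case wait is the full gap between trains, which is exactly why this model can never be on-demand delivery.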

Another possible solution to the above scenario is containerisation. Containerisation allows us to take our existing web application and, with little to no change, separate the application back end into self-contained units that run seamlessly together. Think of this as our own Platform as a Service (PaaS) running on our existing, but simplified, Infrastructure as a Service (IaaS). CI, CD and a microservice structure then all become possible.
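As a rough illustration, a single back-end unit of an existing ASP.NET app can be containerised with a Dockerfile along these lines; the base image tag, port and assembly name are placeholder assumptions.

```dockerfile
# Hypothetical container for one self-contained unit of an existing web app.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
# Copy the output of `dotnet publish` into the image.
COPY ./publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```

Repeat per unit, and each piece of the back end becomes an independently buildable, deployable artefact.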


I hope this rather long post resonates with developers, architects, project leads and C-levels. Was software architecture a consideration during your agility planning? Does your architecture support your agility goals? How has your architecture affected your ability to implement CI and CD? Are you currently implementing a release train strategy or containerisation as a means of achieving your agility goals? Are these working for you?




I had the pleasure of working on a solution that implements a microservice back end and processes hundreds of thousands of transactions per minute with a ridiculously low error rate. This system was architected and built in 2007, which, according to Wikipedia, is four years before microservices became a thing. The beauty of this is that it is still going strong and easily outperforms anything that I have worked on since. Where this becomes even more interesting is that the SDLC was, and still is, based on waterfall. However, due to the architecture, we were able to crank out complex changes and features in a relatively short time, which would then sit and wait for two to six-plus months before going into a lengthy test and release cycle. The decoupled nature of the architecture meant that any other changes released had little to no impact on anything sitting in the release queue, so, in most cases, little to no rework was required. I salute the genius behind this architecture.

I cannot help but wonder how amazing this solution would be within an agile, CI and CD environment.


The webcast by Samir Penkar and the post by Brad Murphy reminded me of the awesomeness of microservices and led me on a path of Continuous Delivery discovery, which ultimately got me asking “How agile do we need to be?”.

Two interesting reads on Continuous Delivery.
The Case for Continuous Delivery by Jez Humble
Agile-DevOps: Continuous Delivery Implemented by Victor Hugo