Making a Technical Decision

Every technology-focused business needs the right technical and architectural experience to make technical decisions effectively, balancing the trade-offs. 

Decision-making is the cognitive process of selecting a course of action among several possible alternative options. Technical decision-making is less about the tool to pick and more about the problem to solve. Once the problem space is well defined, then evaluating the possible solutions becomes a lot easier. Technical decision-making cannot be outsourced.

Engineering organizations of all sizes make technical decisions every day. Every person in a technical role makes decisions, though the scope may vary across roles. Most of the time, these decisions are on a relatively small scale: an individual engineer or a small team solving a problem in the way that makes the most sense to them.

There is proportionality to each decision. Some decisions have a much broader impact, and it’s essential to have a process for capturing those decisions. Such decisions contribute to a deeper vision of how your stack or toolchain is evolving in response to a changing technology and business landscape.

Scope the problem

First, what are you deciding (and why)? Determining the actual decision and scope is the first step toward having a rational plan to execute once the decision is made. Think about how you can frame the decision as a concrete question. 

Failing to do this results in people discussing from different premises, causing misalignment about what the decision or project actually entails. When exploring the problem area, consider the following:

  • Specificity – an actionable problem statement takes aim at a particular issue. Avoid boiling the ocean.
  • Decision-maker criteria – defines what information is needed by the decision-maker(s) to select a path forward.
  • Contextual constraints – this is the landscape the decision is made in; for example, capabilities of existing infrastructure, the business landscape, and laws of physics.
  • Timeframe and level of accuracy required – for example, a go/no-go decision must be made by the end of the quarter, or the projected cost savings must be accurate +/- 20%.
  • Plan of action – clearly identifies what outcomes are achievable once the problem is solved and communicates the immediate next step.

A problem that is well understood is already half-solved and generally leads to better solutions. Avoid defining any solution at this stage. 

“If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.” ― Albert Einstein

Avoid scope creep

The impact of scope creep is almost always negative because the work increases, but the timeframe does not. Scope creep is notorious for taking focus away from the actual problem and blowing out timelines. Increasing the problem’s scope will only result in choosing an over-engineered solution or solving challenges you don’t currently have. 

There are commonly three adjustable levers: scope (i.e., what is included as part of a problem to be solved, or what factors to consider in a decision), resourcing (i.e., availability and quantity of personnel that can work as a team on the problem), and timing (i.e., when the work needs to be delivered, or by when the decision must be made). Often, the scope is increased without adjusting the other two levers – inevitably causing problems. 

Ensuring that scope additions are relevant to the problem being addressed is crucial. If the scope increases, then reevaluate the resourcing and timing for the decision or project. 

Prioritize the problem

Prioritizing which problem to solve is a particularly challenging part of decision-making. There’s usually a laundry list of problems to solve or things to do at any given time. The Eisenhower Matrix is an effective way to help organize tasks by urgency and importance. 

In more detail, the quadrants are:

  • Do it now – a task or decision that requires immediate attention. When something is urgent, it must be done now, and there are clear consequences if the decision is not made within a specific timeline. 
  • Schedule it – may not require immediate attention, but these tasks or decisions help achieve long-term goals. Just because these tasks are less urgent doesn’t mean they don’t matter – instead, they need to be thoughtfully planned to ensure resources are efficiently used. 
  • Delegate it – tasks or decisions that are urgent but not important. These tasks must be completed sooner but don’t affect long-term goals. Because there is no personal attachment to these tasks or decisions and they don’t require a specific skill set, these can be delegated to other team members. 
  • Don’t do it – these are unimportant, non-urgent distractions that get in the way of accomplishing goals. 

Be wary of the “tip of the iceberg” approach. It’s easy to solve the most visible problem, but often the real issue lurks beneath the surface. Identifying the real problems requires having a deep understanding of the system (and other systems it interacts with). However, there may be limits to what one can know or what data one can collect. 

Data-driven decision-making works best when buttressed with good intuition and instincts, particularly when making decisions while missing bits of critical data. I learned the 75% method while in the Marine Corps, which prescribes collecting information until you have about 75% of the data needed. Then use your expertise, experience, and gut to fill out the other 25%. 

Weigh the factors

Making the right decisions, especially ones that have a broad impact, can be challenging. It’s never as simple as writing down a list of pros and cons. Numerous aspects and their varying importance have to be considered. This evaluation begins by defining the decision criteria. 

Decision criteria are principles, factors, or requirements used to make a decision. These criteria can be as rigid as detailed specifications and criteria scoring, or as flexible as a rule of thumb. Some examples include monetary cost, opportunity cost, ROI, risk, strategic fit, and sustainability. 

Some decision criteria are discrete; others are continuous. It’s possible to devise a scorecard-like approach or weighted matrix against which to measure each option. Remember that some criteria may be more subjective, so this is an imperfect science. However, a clear view of the criteria can demonstrate how a range of decision options compare.
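
As a simple illustration (weights and scores invented for the example): weight monetary cost at 0.5, strategic fit at 0.3, and risk at 0.2, then score each option from 1 to 5 against each criterion. An option scoring 4, 3, and 2 respectively totals (0.5 × 4) + (0.3 × 3) + (0.2 × 2) = 3.3, yielding a single comparable number per option.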

Manage risk

Effective decision-making is about using the best judgment. Best judgment means having an opinion, going out on a limb, leveraging experience, and taking risks. Part of the decision-making process is managing those risks and minimizing the risk factors. 

Intuitively, when considering risk, we focus on the probability of adverse events while neglecting their impact. Risk, however, is the possibility of something terrible happening, compounded by its consequences.

The NASA Risk Management Handbook defines risk in terms of the following elements: 

  • Scenario – a sequence of credible events that specify the evolution of a system from a given state to an undesired condition. 
  • Likelihood – the measure of how likely the scenario is to occur; it can be qualitative or quantitative. 
  • Consequences – the impact should any of the scenarios occur; the severity can be qualitative (e.g., reputational damage) or quantitative (e.g., money lost). 
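
To make the compounding concrete (an invented illustration, not from the handbook): a scenario with a 1% likelihood of a $1,000,000 loss carries the same expected impact as one with a 10% likelihood of a $100,000 loss, yet judging by likelihood alone would rank them very differently.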

Consider the reversibility of the decision. One-way door decisions are decisions that you cannot easily reverse. These decisions need to be made carefully. Two-way door decisions can be reversed. You can walk through the door, see if you like it, and if not, go back. These decisions can be made fast or even automated.

When making a critical, one-way door decision, try to reduce risk by transforming it into a two-way door decision or reducing exposure. Keep in mind that this changes the risk profile but does not change the importance or urgency of the decision. 

Declare dependencies

Decisions don’t live in isolation. Decisions are constrained by and depend on the context in which they are being made. Managing dependencies is an integral part of decision-making and executing the decision once made. Unknown dependencies introduce more risk to a decision. 

There are several types of dependencies to be aware of:

  • Technical dependency – a relationship in which one decision affects the technical outcome of another.
  • Schedule dependency – the relationship between two decisions where the timing of one impacts the other.
  • Resource dependency – a shared critical resource between two decisions. 
  • Information dependency – a relationship where information shared between decisions would impact the scope or consequence of a decision. 

The directional relationship of these dependencies may be upstream or downstream. 

Upstream dependencies are things that must happen before the decision can be made. These things happen upstream and flow down into the decision. 

If upstream dependencies have to happen before a decision can be made, downstream dependencies occur afterward. Any delay in making the decision will impact the downstream dependency. 

Make the decision

There is the possibility of making a bad decision. Decisions can only be evaluated after the fact, and reverting one can be costly because it is often already embedded into systems and processes. So what does a good decision look like? 

  • It’s a decision. Deciding not to decide is still a decision; simply failing to decide is not. 
  • The problem and desired outcome are clearly stated.
  • Context is considered. This may include the business or economic environment, team constraints, and other factors. 
  • The decision is informed by feedback from people with different perspectives.
  • Trade-offs are weighed, as well as how to manage them.
  • It considers what could go wrong, and risks are documented.
  • Assumptions are stated as well as a way to validate them.

There is no perfect decision without trade-offs, and it’s possible to go back and forth about a decision forever. Make the final call and provide clarity to anyone affected by the decision. At this point, the decision maker should be clear about what is being done and the next steps. Teams must be aligned on why this direction is important, the desired outcome, and how it aligns with the strategy and vision. 

Foster discussion and gather feedback

Decisions do not occur in a vacuum. The decision driver should meet consistently with stakeholders and foster discussion. This is where people can ask questions, gain a shared understanding, and reach buy-in. There may be multiple rounds of discussions during the decision-making process. It may begin with a conversation about the problem area, and more specific meetings occur later on particular topics. 

It’s easy to say, “I’m open to feedback,” but creating space for feedback is vital. It can be helpful to frame the question to gather specific feedback (e.g., “is there any context missing?”). Making space for feedback can include:

  • Space out the different constructs of the decision (e.g., just the scope of the problem for one conversation, then discuss trade-offs the next) to give stakeholders time to digest, ask questions, form opinions, and comment. 
  • Facilitate discussion to hear from a broad range of opinions, including those who dissent and others who are less inclined to share. 
  • Make people comfortable to share feedback with the group, but also work behind the scenes to collect feedback 1:1 that they may not want to share publicly. 

Feedback is often captured in the form of trade-offs (i.e., what we give up to invest in a particular option). This demonstrates that the feedback was heard and weighed, but other factors are being prioritized.

A bad decision is always possible, and reverting a decision already embedded into systems and processes is costly. Thus, gathering feedback early and often helps identify issues or overlaps sooner and allows you to gain acceptance of your technical decision. 

Document the decision

By now, the data has been collected, a clear problem is being solved, and the hypothesis on the correct direction and a way to measure the outcome have been defined. Feedback has been heard and incorporated into the decision. Now the decision must be documented. Similar to feedback, this is something that should be happening throughout the process. 

The purpose of documenting a decision is to capture all the contextual information in one place. The act of writing solidifies the decision, solicits broader feedback, and allows for widespread communication. Documenting the decision also helps in other ways, such as: 

  • Alignment – teams need to understand the historical landscape and context of the decision; a decision’s impact may outlast the decision maker’s tenure. 
  • Reduce duplicate effort – avoid wasting time and effort by repeating past discussions. 
  • Retrospective – context may change; understand why the decision was made at that time and why it was the right/wrong decision. 

It is helpful to have a decision-making model, such as DACI, RACI, or ADR. No matter which you choose, it should clearly capture the decision, trade-offs, and the roles and responsibilities of those involved. 

Communicate the decision

Once the decision is made, socialize it with the stakeholders. A common anti-pattern is reaching a decision and not sharing it; eventually, this causes distrust and disharmony among those affected by the decision. Instead, bring them on the journey and make them feel part of the decision-making process. 

Ultimately, create a document of the decision to ensure future members understand its rationale. Share widely in the correct communication channels and then focus on aligning people with the next steps and why a decision has been made.

Useful Frameworks

This section provides a few different frameworks that can be adapted for driving important decisions. Frameworks are like shoes; some will fit more comfortably than others. If a framework doesn’t feel natural to you, then either find one that does or adapt the framework into terms that work for you. 

The SPADE Framework

Created by Gokul Rajaram while working at Square, the SPADE framework is intended to help build the decision-making muscle to enable faster decision-making. The framework is a quick assessment of the problem and the decision being made and is then used to bring others up to speed. The elements are:

  • Setting – contextualizes the decision by defining the “what” and parsing the objective to explain the “why.” 
  • People – defines the roles and responsibilities of those involved in the decision-making process. 
  • Alternatives – outline the realistic options available, the impact of each choice, and its evaluation against the decision’s setting.
  • Decide – weigh the information, consider people’s feedback, and then make the decision; choose an alternative and detail why it was chosen. 
  • Explain – articulate why the alternative was selected, the impact of the decision, and any associated risks; communicate with stakeholders. 

The decision-making framework outlines a step-by-step process to synchronize and speed up collaborative decisions. 

The Cynefin Framework

The Cynefin Framework supports decision-making by providing context and guiding a response. It is a sensemaking framework that helps to think through the details of a situation, classify it, and understand the appropriate response. The five domains of the Cynefin Framework are:

  • Obvious – the relationship between cause and effect is already well-known; respond according to established practices. 
  • Complicated – the relationship between cause and effect is knowable; the situation can be analyzed to form a hypothesis of what should be done.
  • Complex – the relationship between cause and effect is unknowable; experiments or spikes should be conducted to understand the situation. 
  • Chaotic – the situation is very unstable; one must act quickly, and there is no time to conduct experiments or analyze the situation. 
  • Disorder – not determined; anything whose domain has not been identified falls into this domain. 

These domains help decision-makers create an awareness of what is really complex and what is not. From there, the decision maker can respond accordingly so that no energy is wasted in overthinking the problem and to ensure that they aren’t trying to make the complex fit into standard solutions or vice versa.

The OODA Loop

The Boyd Cycle, also known as “the OODA Loop,” is a concept used to describe a recurring decision-making cycle. The goal is to process the cycle quickly and continuously, allowing the decision-maker to react quickly to the changing environment. The four elements are: 

  • Observe – continued awareness of the situation, context, and any changes. The first step is to identify the problem and gain an overall understanding of the environment. 
  • Orient – reflect upon what was observed, and consider what should be done next. This step recognizes, diagnoses, and analyzes the observed situation. 
  • Decide – suggest a course of action, considering the trade-offs and acceptable degree of risk. 
  • Act – turn the decision into action, and measure the outcome. 

Once a decision is made, the situation changes. Thus, the OODA Loop turns on itself and repeats again and again.


Balancing the past, present, and future is challenging when making a technical decision. This process requires patience, understanding, and problem-solving. Technical leaders are stewards of this process and should focus on the outcomes of a decision (rather than implementation or execution details) and aligning teams around why a decision was made. 

Technical decision-making is complex; even the best decisions sometimes don’t turn out as you’d hoped. You can improve your decision-making skills by revisiting old decisions. Was there missed context? Trade-offs that turned out differently than expected? A good decision-making process can help confidently deliver technical decisions consistent with long-term strategy and architectural principles.

Hello, Terraform

At work, my team owns and maintains a large lab environment for the development and testing of Rubrik Build projects. It was built in a hurry, causing some of our original design principles to be compromised. My team and I have decided to use this no-travel period as an opportunity to redesign and redeploy our lab environment. I will review our design in a later post. 

One of our design goals is to leverage infrastructure as code principles (where possible). The team’s primary tool of choice for provisioning is Terraform.

Terraform allows us to define what resources we need in a declarative manner, where we simply define the end state needed for our infrastructure. Here are a few reasons why we like using Terraform:

  • Multi-platform, similar operations across a number of providers
  • Easy provisioning and deprovisioning of resources
  • Idempotent, saves current state as a file
  • Detects diffs from current state when applying changes

This post will dive into Terraform syntax, architecture, and operations.

Terraform Syntax

The low-level syntax of Terraform is defined in HashiCorp Configuration Language (HCL). The following example shows a generic configuration code block for Terraform:

command_type "provider_resource_label" "resource_label" {
  argument_name = "argument_value"
  argument_name = "argument_value"
}

Let’s dig into the syntax:

  • Command Type — the command type, such as resource, tells Terraform you want to create a resource, such as an S3 bucket or an EC2 instance.
  • Provider Resource Label — this is the type of resource you want to create. The resource name is specified by the provider. For example, you may use aws_instance to provision an EC2 instance using the AWS provider.
  • Resource Label — this is what you want to colloquially label the resource within your Terraform configuration. This label should be unique within this configuration file as it is used later when referencing the resource.
  • Arguments — allows you to specify configuration details for the resource being provisioned. These are defined as an argument name and an argument value. As an example, when provisioning an EC2 instance, you may want to specify which AMI is used. 

Note that comments using #, //, or /* and */ are supported. 
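
For example, each comment style in one illustrative snippet (the resource values here are placeholders):

# a single-line comment
// another single-line comment style
/* a block comment
   spanning multiple lines */
resource "aws_instance" "commented-instance" {
  ami           = "ami-008c6427c8facbe08" # inline comments are also valid
  instance_type = "t2.micro"
}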

To put these concepts together, an example configuration code block may resemble:

resource "aws_instance" "my-first-instance" {
  ami = "ami-008c6427c8facbe08"
  instance_type = "t2.micro"
  availability_zone = "us-west-2c"
  tags = {
    Name = "my-first-instance"
    Environment = "test"

This example will provision a single EC2 instance in the US-West-2C availability zone, using the AMI specified, along with assigning the two tags. 

Most of your Terraform configuration is written in these code blocks. Once you master this, then you’ll be able to quickly write and provision more resources.

Terraform Architecture

A typical Terraform module may have the following structure:

├── terraform-module-example01
│   ├──
│   ├──
│   ├──
│   └── terraform.tfvars
└── terraform-module-example02
    ├──
    ├──
    ├──
    ├──
    └── terraform.tfvars

The names of the files are not important. Terraform will load all configuration files within the directory.

Providers

A provider is the core construct that allows Terraform to interact with the APIs across various platforms (PaaS, IaaS, SaaS). Think of this as the translator between the platform API and the HCL syntax. Before you can begin provisioning resources, you must first define which platform to use by specifying the provider:

provider "aws" {
  region = "us-west-2"

Place the provider block in your file or create a separate file.

Resources

I previously covered how to structure resource code blocks in the Terraform Syntax section. 

This example defines the creation of an instance based off the defined AMI, sized as t2.micro, and properly tagged:

resource "aws_instance" "my-first-instance" {
  ami = "ami-008c6427c8facbe08"
  instance_type = "t2.micro"
  availability_zone = "us-west-2c"
  tags = {
    Name = "my-first-instance"
    Environment = "test"

Define the desired outcome for your resources in the file.

Data Sources

Data sources enable you to reference resources that already exist outside of Terraform or are defined by a separate Terraform configuration. This allows you to extract information that can then be fed into a new resource. First, define the data source and then reference it as an argument value:

data "aws_ami" "ubuntu" {
  most_recent = true
  owners = ["aws-marketplace"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]

  filter {
    name   = "virtualization-type"
    values = ["hvm"]

resource "aws_instance" "my-first-instance" {
  ami = "${}"
  instance_type = "t2.micro"
  availability_zone = "us-west-2c"
  tags = {
    Name = "my-first-instance"
    Environment = "test"

In this example, I am again creating a new EC2 instance. However, this time I am gathering AMI information using a data source to find and use the latest Ubuntu version instead of manually defining that AMI value. This allows my configuration to be more flexible because I no longer need to manually find and input the appropriate AMI value.

A data source is declared similarly to a resource, except that the information provided is used by Terraform to discover existing resources rather than to provision new ones. Once defined, data sources can be referenced repeatedly to pass information to new resources. 

Place the data source blocks in your file or create a separate file. 

Variables

To make your code more modular, you can choose to use variables instead of hard-coding values. Once defined, variables can be referenced:

provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region = var.aws_region

I typically declare my essential variables in a separate file. This may resemble:

variable "aws_access_key" {
  description = "AWS access key for authorization"
  type = "string"

variable "aws_secret_key" {
  description = "AWS secret key for authorization"
  type = "string"

variable "aws_region" {
  description = "AWS region in which resources will be provisioned"
  type = "string"
  default = "us-west-2"

In this example, I have declared the variables used by the provider, including a default value for the AWS region to be reused when provisioning the infrastructure. The descriptions are optional, and for the developer’s benefit only, but I always recommend being kind to the next person using your code. The possible variable types are string (the default type), list, and map. Variables can also be declared but left blank, setting their values through environment variables or a .tfvars file. 

Sometimes the variable definition may be specified as a default in the file. Otherwise, the value should be defined by creating a file named terraform.tfvars, which allows variable values to persist across multiple executions. This is especially valuable for sensitive information such as secret keys. 

For example, the contents of the terraform.tfvars file may resemble the following variable definition:

aws_access_key = "ABC0101010101CBA"
aws_secret_key = "abc87654321zyxw"
aws_region = "us-west-2"

Terraform automatically loads all files in the current directory with the exact name of terraform.tfvars or any variation of *.auto.tfvars. If the file is named something else, you can use the -var-file flag to specify a file name.
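
For example, assuming a hypothetical file named staging.tfvars:

terraform apply -var-file="staging.tfvars"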

However, keep in mind that these persistent variable definitions often contain sensitive information, such as passwords or API tokens, and should be treated with care. Consider adding this to your .gitignore file.
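
A minimal .gitignore for a Terraform directory might resemble the following (a common convention; adjust to your workflow):

# local state files and sensitive variable definitions
terraform.tfstate
terraform.tfstate.backup
*.tfvars

# local plugin and module cache
.terraform/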

Outputs

Outputs can be used to display or export information after Terraform completes a terraform apply command. An example output may resemble:

output "instance_id" {
  value = "${}"

You can save the outputs in a specific file called
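
Once terraform apply completes, an output’s value can be read back at any time with the output subcommand. For example:

terraform output instance_id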

State

When you use Terraform to build resources, a state file gets created that contains configuration information for the resources provisioned. The state file is what allows Terraform to determine which parts of the configuration have changed; it is ultimately what provides idempotency, because Terraform can determine that a resource is already present and will not create it again. 

After the terraform apply command is executed, the affiliated directory will contain two new files:

  • terraform.tfstate
  • terraform.tfstate.backup

Note: any manual changes made to Terraform-provisioned infrastructure will be overwritten by terraform apply.
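
To inspect what Terraform is currently tracking, you can list the resources recorded in the state or display their attributes (both are standard subcommands):

terraform state list
terraform show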

Modules

Terraform configuration files can be packaged as modules and used as building blocks to create new infrastructure resources without having to put forth much effort. Modules are available publicly in the Terraform registry, and can be directly added to configuration files for quickly provisioning resources.

If I were to use a pre-packaged module to provision an AWS S3 bucket, the code may resemble:

module "s3_bucket" { 
  source = "terraform-aws-modules/s3-bucket/aws" 
  bucket = "my-s3-bucket" 
  acl = "private" 
  versioning = { 
    enabled = true 

In this case, you are reusing the configurations specified by the module. All you need to input are the configuration values.

Terraform Operations

Terraform is managed through a simple CLI. Terraform is a single command-line application, terraform, and you specify the action through a subcommand such as plan or apply.

To view a list of the available commands at any time, just run terraform with no arguments.

In order to get started, you will need to run terraform init to initialize a number of settings for Terraform, creating the required environment to proceed. It will also download the necessary plugins for the selected provider.

Before provisioning, you may want to generate an execution plan, otherwise known as a dry run of your changes. Generate one by running terraform plan. Terraform outputs a delta, showing you which resources will be destroyed (marked with a -), which will be added (marked with a +), and which will be updated in place (marked with a ~).

Once you have reviewed the execution plan and are ready to begin provisioning, run terraform apply for the changes to be executed. If at any point you need to remove the resources, simply use the command terraform destroy. If there are multiple resources in the module, you can specifically name which resource(s) to destroy with the -target flag. For example: 

terraform destroy -target=aws_instance.my-first-instance

In general, once you have defined the infrastructure in the .tf files, working with Terraform is pretty much just running terraform plan and terraform apply repeatedly (unless you use CI).
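
Put together, a minimal loop may resemble:

terraform init   # once per new configuration directory
terraform plan   # preview the delta
terraform apply  # execute the changes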


In this post, I described Terraform syntax, architecture, and common operations. Throughout the article I used the example of creating an AWS EC2 instance; however, these principles apply to all resource types across providers. I hope this helps you get started on your infrastructure as code journey. 

Happy Terraforming!

Visualizing the Conceptual Model for Technical Architecture

I have previously written about putting together the conceptual model with logical and physical design; however, I want to dig a little deeper into the conceptual model. The conceptual model categorizes the assessment findings into requirements, constraints, assumptions, and risks:

  • Business requirements are provided by key stakeholders and the goal of every solution is to achieve each of these requirements.
  • Constraints are conditions that provide boundaries to the design.
    • These often get confused with requirements, but remember that a requirement should allow the architect to evaluate multiple options and make a design decision whereas a constraint dictates the answers and removes the ability for the architect to decide.
  • Assumptions list the conditions that are believed to be true, but are not confirmed:
    • By the time of deployment, all assumptions should be validated.
  • Risks are factors that might negatively affect the design.
    • All risks should be mitigated, if possible.


Requirements

A requirement describes what should be achieved in the project and what the solution will look like.

  • Example: The organization should comply with Sarbanes-Oxley regulations.
  • Example: The underlying infrastructure for any service defined as strategic should support a minimum of four 9s of uptime (99.99%).
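
(For reference, four 9s of availability allows for roughly 52 minutes of downtime per year, or about one minute per week.)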

The part that tends to trip people up is functional versus non-functional requirements.

Functional Requirements

A functional requirement specifies a function that a system or component should perform. These may include:

  • Business Rules
  • Authentication
  • Audit Tracking
  • Certification Requirements
  • Reporting Requirements
  • Historical Data
  • Legal or Regulatory Requirements

Non-Functional Requirements

A non-functional requirement is a statement of how a system should behave. These may include:

  • Performance – Response Time, Throughput, Utilization, Static Volumetric
  • Scalability
  • Capacity
  • Availability
  • Recoverability
  • Security
  • Manageability
  • Interoperability

Oftentimes, non-functional requirements will be laid out as constraints, which is the part that makes this concept murkier. In the context of a VCDX design, these should typically be defined as constraints, whereas requirements are more typically functional requirements. Be careful how you word a non-functional requirement: if it’s stated as a must and there is no room for the architect to make a decision, then it’s a constraint. But if it is a should statement that gives more than one choice for a design decision, then leave it as a requirement.

Constraints

A constraint is anything that limits the design choices available to the architect. If multiple options are not available when making a design decision, then it’s a constraint.

  • Example: Due to a pre-existing vendor relationship, host hardware has already been selected.

If this is a bit difficult to grasp, don’t worry, you are in good company. This is a question that appears often.


In this example, because the business dictates that HP ProLiant blade servers must be used, it is a constraint. This leaves no room for me, as the architect, to make a design decision — it has already been made for me.

Assumptions

Assumptions are design components that are assumed to be valid without proof. Documented assumptions should be validated during the design process. This means by the time the design is implemented, there should be no assumptions.

  • Example: The datacenter uses shared (core) networking hardware across production and nonproduction infrastructures.
  • Example: The organization has sufficient network bandwidth between sites to facilitate replication.
  • Example: Security policies dictate server hardware separation between DMZ servers and internal servers.

These examples are a bit of low-hanging fruit. Don’t be afraid to dig a little bit deeper. If there’s anything documented or stated without empirical proof, then it is an assumption and needs to be validated.

Risks

A risk is anything that may prevent achieving the project goals. All risks should be mitigated with clear SOPs.

  • Example: The organization’s main datacenter contains only a single core router, which is a single point of failure.
  • Example: The proposed infrastructure leverages NFS storage, with which the storage administrators have no experience.

No design is perfect and it is important to document as many risks as you can identify. This will give you the opportunity to be prepared and craft mitigation plans. Not paying close attention here may leave the design in a vulnerable state.

Additional Examples

Can you specify which conceptual model category is correct for each example?



  • Requirement – The design should provide a centralized management console to manage both data centers.
  • Assumption – The customer provides sufficient storage capacity for building the environment.
  • Constraint – The storage infrastructure must use existing EMC storage arrays for this project.
  • Requirement – The platform should be able to function with project growth of 20% per year.
  • Assumption – Active Directory is available in both sites.
  • Requirement – Solution should leverage and integrate with existing directory services.
  • Risk – Both server racks are subject to the same environmental hazards.
  • Assumption – BC/DR plans will be updated to include new hardware and workloads.
  • Requirement – The SLA is 99% uptime.
  • Constraint – External access must be through the standard corporate VPN client.
  • Risk – Having vMotion traffic and VM data traffic on the same physical network can lead to security vulnerability because vMotion is clear text by default.


To learn more about the enterprise architecture or the VCDX program, please join me, Brett Guarino, Paul McSharry, and Chris McCain at VMworld on Wednesday, 29 August 2018 from 11:00-11:45 to discuss “Preparing for Your VCDX Defense”.

Problem Solving with the Cynefin Framework

Effective leaders know that problem solving is not “one-size-fits-all”. The action taken depends on the situation and, because circumstances change, better decisions can be made by using an adaptive approach. I have previously written about the 75% method that I learned in the military, but there’s another framework that I have consistently used with success.

Cynefin, pronounced “kih-neh-vihn” (don’t worry, I mispronounced it for longer than I’d like to admit), is a Welsh word that means “place”. The Cynefin framework was coined in 1999 by Dave Snowden. Simply put, the Cynefin framework is used to help realize that not all situations are equal and that successfully navigating different situations requires different responses.


The 5 Domains

Problems are categorized into five domains using the Cynefin framework (yes, five, don’t forget disorder!).

Ordered Systems

The domains on the right (obvious and complicated) are “ordered” because cause-and-effect are known or can be discovered.

Obvious (fka “Simple”)

This is the domain of best practice.

In this context, problems have apparent cause-and-effect relationships that are well understood.

The methodology is to “sense – categorize – respond” to obvious problems. This means that the situation should be assessed, categorized by type, and then respond based on an existing process or procedure. These tend to be repeating patterns and/or consistent events…or “known knowns”.

For example, these are problems faced at a helpdesk or call center – often predictable and there are established processes in place to handle the vast majority.

Be careful – some obvious contexts may be oversimplified. This happens when leaders (or organizations, for that matter) experience success and become complacent as a result. Ensure that there are feedback loops in place so that any situations that don’t exactly fit with an established category can be reported.

Another risk with complacency is that leaders may not be receptive to new ideas. Endeavor to stay willing to pursue a new or innovative suggestion.

Complicated

This is the domain of good practice. Sometimes referred to as the “domain of experts.”

Complicated problems may have multiple correct solutions. There is a relationship between cause and effect, but it may not be obvious to everyone because the problem is…well…complicated. There may be several symptoms but you are not sure how to fix them.

The methodology here is to “sense – analyze – respond”. Effectively you should assess the situation, analyze what is known (using the help of experts), and decide what the best response is using good practices. This is generally where we experience “known unknowns” where we know the questions that need to be answered, but may not know the actual answer. It is at this point that we consult the expert. With enough time, you could reasonably identify the known risk and develop a plan. Think evolutionary, not revolutionary.

The danger here is that a leader may lean too heavily on experts while ignoring good solutions from others. In tech, we tend to experience this when we rely on the experts and ignore the generalists – even though the generalist may have the winning answer. Additionally, the leader may experience analysis paralysis. This is where I recommend using the 75% method detailed earlier.

Unordered Systems

The domains on the left (complex and chaotic) are “unordered” because cause and effect can be deduced only with hindsight or potentially not at all.

Complex

This is the domain of emergent practice.

Sometimes it is impossible to identify a single correct solution or to spot the cause-and-effect relationship. You are likely in a complex context.

This context is typically unpredictable, making the best approach “probe – sense – respond”. Think “unknown unknowns”. You may not know the correct questions to be asking. Regardless of how much time is spent in analysis, it may not be possible to accurately identify the risks, predict the solution, or estimate the effort needed to solve the problem.

In this situation, it is best to wait patiently, look for patterns, and develop experiments to gain more knowledge. As more knowledge is gained, determine the next steps. Repeat as needed. The goal is to move into the “complicated” domain.

A potential risk is that leaders may fall back into habitual command-and-control modes which are futile in this context. Leaders lacking patience may try to force facts instead of waiting for patterns. It is imperative to have a feedback loop so that open discussion can occur to develop experiments for observing patterns. Think “what if we tried…” Use creativity to solve the problem.

Complicated and complex situations are similar in some ways, and are sometimes confused. If a decision based on incomplete data is being made, you are likely to be in a complex situation.

Chaotic

This is the domain of novel practice.

There is no discernible relationship between cause and effect. This means that the primary goal here is to establish order and stability. This is likely a crisis or emergency situation.

The methodology is to “act – sense – respond”. It is necessary to be decisive in order to address the burning issues, determine where there is and isn’t stability, and then work to move the situation from chaos to complexity. Basically, shit has hit the fan – triage time: stop the bleeding and start the breathing… then determine what the real solution should be.

It may feel like in tech we live in this domain (hopefully not!). As an example, there may be an issue in production, say a bad patch that has been installed data center wide. Initially the focus will be on containing the issue and correcting it quickly. The initial solution may not be great, but it gets the job done. Once the bleeding has stopped then you can determine the better long-term solution.

In this situation, the leader must provide clear and direct communication while taking immediate action to re-establish order. A risk is an indecisive leader. This is the time to find “good enough” instead of the perfect answer.

Disorder

Disorder is the space in the middle.

There is no clarity here – decompose the situation and move to another context. Basically, if you have no idea where you are, then you’re likely in “disorder”. The immediate goal is to gather information in order to move to a known domain.

In this situation, I tend to try to break the massive disorder into smaller problems and then tackle each one individually. Apply each problem to a domain and work on a solution.

Chaotic problems are dangerous, especially when left unaddressed, because there is no process to fix them. This is why it is important to move into a known category.

Final Thoughts

The Cynefin Framework is an excellent model to assist in approaching different situations. Once the situation is defined, then work to solve the problem.

The goal is to adequately lead your team through any of these five domains. Many leaders can only lead effectively in one or two domains (not in all of them) and few, if any, prepare their organizations for diverse contexts. The only way to successfully get through all five domains is to keep an open mind to new and creative solutions, build a feedback loop, and not get stuck in analysis paralysis.


Additional Resources:

Cognitive Edge: The Cynefin Framework (explained by Snowden himself!)

Everyday Kanban: Understanding the Cynefin framework – a basic intro

Sherrieg: The Cynefin Framework

Harvard Business Review: A Leader’s Framework for Decision Making

Ch-ch-change Management

Change management has never been easy for the dev or the ops side of the house. Let’s face it; it’s usually a checklist item and a tool to CYA. However, we are moving to a world where change is a part of the culture and a frequent process. There is no excuse to not improve.

The ultimate goal of change management is to drive organizational results and outcomes by engaging the staff to encourage the adoption of a new way to work. Whether it is a process, system, job role, or organizational structure change (potentially all of the above), a project can only be successful if the individual changes daily behaviors and begins doing the job in a new way. This is the nature of change management.

Therefore, staffing a change management board with a crew of change-averse individuals will get you nowhere.

Change Management

Often we look at change management as a way to spot problems after they happen. Thus it becomes a tool for responding to change, instead of leveraging change. In this world of DevOps that embraces change as a mechanism to iteratively improve on processes, change management is usually viewed as a blocker to avoid. But in most enterprises and verticals, it cannot be avoided.

Tooling and implementation can be detached from governance. This decoupling can result in lost communication and a reactive philosophy. Instead, consider funneling all changes through the same channel so that nothing gets lost and the change advisory board (CAB) considers all changes. Begin by consolidating change, problem, and incident management into a modular platform, part of your DevOps tooling, that can streamline everything into one pipeline.


This may seem outlandish at first, but integrating change into pipelines automates the capture of change records along with a set of artifacts. The goal is to ultimately improve collaboration and to build an auditable history.

Companies often establish different modes of change to balance speed, quality, and risk. Consider automating the approval gate for some modes of change. This speeds change processing and increases adoption. It also shifts the responsibility for making change happen effectively back onto the individuals who conduct the implementations.

Change management should be a priority and used as a single source of truth of all changes. Doing so will increase visibility for risk and compliance management.

We can distill this down to three key ideas to assist in implementing efficient change management:

  • Do not decide a new direction and then dump it on your team. Involve them in the decision-making process.
  • Make work visible to all.
  • Embrace value stream mapping to find new ways to increase efficiency.

The bottom line is to be proactive about how change is managed.

Get Mapped: Value Stream Mapping


Value stream mapping (VSM) is a DevOps framework (“borrowed” from manufacturing) that provides a structured way for cross-functional teams to collectively see where we are today (long release cycles, silos, damage control afterwards, etc.) and where we want to be in the future (short release cycles, infrastructure as code, iterative development, continuous delivery, etc.).

A VSM is a way of getting people to collaborate and see what is really happening. These exercises are often amazing “aha!” moment workshops that turn three objectives (flow, feedback, and continual learning) into a sustainable engine of improvement.

Who should participate in a VSM?

  • Service Stakeholders and Customers
  • Executors of the Process Tasks
  • Management

…but not all at the same time.

The VSM process assembles everyone involved with a workflow in the same room to clarify their roles in the product delivery process and identify bottlenecks, friction points, and handoff concerns. Realistically, if we include everyone at the same time, the likelihood of honesty decreases. Let’s be real – if upper management were in the room with you, would you be 100% honest as to where the bodies are buried or exactly what processes each step entails? VSM reveals steps in development, test, release, and operations support that waste time or are needlessly complicated, and this requires complete transparency.

Lead Time versus Time on Task

If you can’t measure it, you can’t improve it. Why do companies go for Continuous Delivery (CD)? Why do people care about DevOps? The main reason I hear is cycle time. This is the time it takes to get from an idea to a product or feature that customers can use. Measurement is one of the core foundations of DevOps, and the VSM is the measurement phase. If you do it right, it’s the sharing phase as well – share the measurements and proposed changes with the entire group. Doing that well allows you to start to change culture simultaneously.


With a solid foundation in place, it becomes easier to capture more sophisticated metrics around feature usage, customer journeys, and ensuring that service level agreements (SLAs) are met. The information received becomes handy when it’s time for road mapping and spec’ing out the next big project.

“Lead time” is a term borrowed from manufacturing, but in the software domain, lead time can be described more abstractly as the time elapsed between the identification of a requirement and its fulfillment.

The goal of VSM development is to measure how time is spent on each task and identify the processes required for each task. It becomes easier to see which processes are inefficient and create bottlenecks. In turn, this will reduce the lead time to deliver the finished release.
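
As an invented illustration: if a review step takes 30 minutes of actual work but each change waits two days in a queue before review, the queue, not the reviewer, dominates the lead time; eliminating the wait cuts that step’s lead time by roughly 99% without anyone working faster.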

Current State

The following VSM demonstrates a current state analysis of a software release process. The main thing to note in this example is how linear it is – there are only two feedback loops: at the very beginning and towards the end at new feature testing.

[Figure: current state value stream map]

The apparent lack of feedback loops presents a potential problem area – there are 8 steps between the two feedback loops. Imagine getting all the way to the end before realizing there’s an issue and providing feedback. How far will the software release be set back if the problem is not detected and communicated until the new release testing phase?

Future State

Once you have the current state VSM mapped, the next step is to figure out a way to make the mapping more efficient. This is typically driven by the following:

  • How can we significantly increase the percent complete and accurate work for each step in our current state VSM?
  • How can we dramatically reduce, or even eliminate the non-productive time in the lead time of each current state step?
  • How can we improve the performance of the value added time in each current state step?

[Figure: future state value stream map]

Realistically, no VSM is perfect. However, the future state that we see above demonstrates a set of processes that create a mostly ongoing feedback loop. This allows for continuous communication about the processes and release as it moves forward towards a qualified build.

Demonstrating Business Value

In a manufacturing plant, there would be one pipeline: one production line at a time. As we know, the modern software development world is not like that.

A VSM is about more than just dissecting the software delivery lifecycle to find bottlenecks and pain points, although it is certainly helpful in that area. Analyzing value streams gives management confidence that the business is focusing on the right projects and initiatives. By taking a clearer look at the KPIs and metrics across the tooling and the entire organization, these leaders can make informed decisions the way most business leaders prefer to: with data to back them up.

Architecting a vSphere Upgrade

At the time of writing, there are 197 days left before vSphere 5.5 is end of life and no longer supported. I am currently in the middle of an architecture project at work and was reminded of the importance of upgrading — not just for the coolest new features, but for the business value in doing so.


Last year at VMworld, I had the pleasure of presenting a session with the indomitable Melissa Palmer entitled “Upgrading to vSphere 6.5 – the VCDX Way.” We approached the question of upgrading by using architectural principles rather than clicking ‘next’ all willy-nilly.

Planning Your Upgrade

When it comes to business justification, simply saying “it’s awesome” or “latest and greatest” does not cut it.

Better justifications include:

  • Extended lifecycle
  • Compatibility (must upgrade to ESXi 6.5 for VSAN 6.5+)
  • vCenter Server HA to ensure RTO is met for all infrastructure components
  • VM encryption to meet XYZ compliance

It is important to approach the challenge of a large-scale upgrade using a distinct methodology. Every architect has their own take on methodology; it is unique and personal to the individual, but it should be repeatable. I recommend planning the upgrade project end-to-end before beginning the implementation. That includes an initial assessment (to determine new business requirements and compliance with existing requirements) as well as a post-upgrade validation (to ensure functionality and that all requirements are being met).

There are many ways to achieve a current state analysis, such as using vRealize Operations Manager, the vSphere Optimization Assessment, VMware {code} vCheck for vSphere, etc.

I tend to work through any design by walking through the conceptual model, logical design, and then physical. If you are unfamiliar with these concepts, please take a look at this post.

An example to demonstrate:

  • Conceptual –
    • Requirement: All virtual infrastructure components should be highly available.
  • Logical –
    • Design Decision: Management should be separate from production workloads.
  • Physical –
    • Design Decision: vCenter Server HA will be used and exist within the Management cluster.

However, keep in mind that this is not a journey that you may embark on solo. It is important to include members of various teams, such as networking, storage, security, etc.

Future State Design

It is important to use the current state analysis to identify flaws in the current design or improvements that may be made. How can upgrading allow you to solve these problems? Consider the design and use of new features or products. Not every new feature will be applicable to your current infrastructure. Keep in mind that everything is a trade-off – improving security may lead to a decrease in availability or manageability.

When is it time to re-architect the infrastructure versus re-hosting?

  • Re-host – to move from one infrastructure platform to another
  • Re-architect – to redesign, make fundamental design changes

Re-hosting is effectively “lifting and shifting” your VMs to a newer vSphere version. I tend to lean toward re-architecting, as I view upgrades as an opportunity to revisit the architecture and make improvements. I have often found myself working in a data center and wondering “why the hell did someone design and implement storage/networking/etc. that way?” Upgrades can be the time to fix it. This option may prove to be more expensive, but it can also be the most beneficial. Now is a good time to examine the operational cost of continuing with old architectures.

Be sure to determine key success criteria before beginning the upgrade process. Doing a proof of concept for new features can demonstrate business value. For example, if you have a test or dev cluster, upgrade it to the newest version and demo a new feature to determine its relevance and functionality.

Example Upgrade Plans

Rather than rehashing examples of upgrading, embedded is a copy of our slides from VMworld which contain two examples of upgrading:

  • Upgrading from vSphere 5.5 to vSphere 6.5 with NSX, vRA, and vROPs
  • Upgrading from vSphere 6.0 to vSphere 6.5 with VSAN and Horizon

These are intended to be examples to guide you through a methodology rather than something that should be copied exactly.

Happy upgrading!

Delving into Immutable Infrastructure

Before getting too far into the topic, let’s first take a quick look at the difference between mutability and immutability.

  • Mutable – Continually updated, patched, and tuned to meet the ongoing needs of the purpose it serves.
  • Immutable – State does not change or deviate once constructed. Any changes result in the deployment of a new version rather than modifying the existing one.


Understanding Erasure Coding with Rubrik

It is imperative for any file system to be highly scalable, performant, and fault tolerant. Otherwise…why would you even bother to store data there? But realistically, achieving fault tolerance is done through data redundancy. On the flipside, the cost of redundancy is increased storage overhead. There are two possible encoding schemes for fault tolerance: triple mirroring (RF3) and erasure coding. To ensure the Scale Data Distributed Filesystem (SDFS, codenamed “Atlas”) is fault tolerant while increasing capacity and maintaining higher performance, Rubrik uses a scheme called erasure coding.
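
As a generic illustration of the trade-off (not specific to SDFS): a 4+2 erasure coding scheme encodes every 4 data chunks into 6 total chunks, tolerating any 2 failures at 1.5x storage overhead, whereas triple mirroring tolerates the same 2 failures at 3x overhead.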


#VirtualDesignMaster Wrap-Up

Part of me feels like it flew by, but then I remember the hours spent reviewing all the designs (*ahem* Adam) and then it feels like it took an eternity to get through. Admittedly, Virtual Design Master was probably one of the coolest community-driven events in which I have participated. If you are unfamiliar with Virtual Design Master, I strongly encourage you to check out the site and catch up with the five seasons.