A GitOps workflow won't benefit every IT organization
A move to GitOps is a major transition for an IT team and offers a wealth of perks, but it also carries significant challenges and a steep learning curve.
GitOps refers to an IT development and operations model in which a repository -- Git, for example -- stores all the code, parameters, configuration data and other related operations information associated with an application. Known as the single source of truth, this repository aligns development and IT operations teams during application changes, which creates stability and minimizes the risk of failure.
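The model can be sketched with plain Git: a single repository holds both the application code and the declarative operations data, so every change to either is a commit in one shared history. The layout, file names and values below are hypothetical.

```shell
#!/bin/sh
# Minimal sketch of a GitOps repository as the single source of truth:
# application code and its operations/configuration data are committed
# together. All names and values here are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

mkdir -p app deploy
# Application code
echo 'print("hello")' > app/main.py
# Declarative deployment configuration, versioned alongside the code
cat > deploy/deployment.yaml <<'EOF'
replicas: 3
image: registry.example.com/app:1.0.0
EOF

git add -A
git commit -qm "app 1.0.0: code and configuration in one history"
```

In a real GitOps setup, an agent watches this repository and reconciles the running environment to match whatever the latest commit declares.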
But a GitOps workflow demands a massive shift in IT practices and tools -- so is it worth it?
GitOps and CI/CD
GitOps grew out of rapid development practices. It aims to make software more responsive to business needs by decreasing the time required to make and validate application changes. This rapid-development approach is known as CI/CD -- continuous integration and continuous delivery -- and despite its benefits, it increases the risk of mistakes that can break an application completely.
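The "make and validate changes quickly" loop can be illustrated with a toy gate: a change is validated before it becomes eligible to deploy. The file names and gate below are hypothetical; a real pipeline delegates this step to a tool such as Jenkins, which runs it automatically on every commit.

```shell
#!/bin/sh
# Hypothetical CI gate: validate a change before it can proceed toward
# deployment. Real pipelines run this on every commit automatically.
set -e
work=$(mktemp -d)
cd "$work"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo 'def add(a, b): return a + b' > app.py
cat > test_app.py <<'EOF'
from app import add
assert add(2, 2) == 4
print("tests passed")
EOF

git add -A
git commit -qm "feature plus its test"
# The gate: the commit is only eligible to deploy if its tests pass
python3 test_app.py && echo "gate: PASS"
```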
GitOps is a profound change for both development and operations. Both teams must adopt practices to keep an application's code and parametric data accurate and in sync, even as changes enter production on different timelines and must be reconciled with other committed changes. And while teams can often keep their existing programming and operations tools for development and deployment, those tools become subordinate to CI/CD orchestrators such as Jenkins, which enforce the GitOps workflow.
But not every shop has adopted CI/CD, and even those that have don't apply CI/CD principles to every application in use. Before it jumps into GitOps, an organization should have, or plan to have, a CI/CD-driven strategy -- and it must determine how broadly to apply CI/CD to its applications. Because GitOps' value proposition is linked so closely with CI/CD, enterprises with little or no CI/CD in their plans would benefit more from traditional deployments via Kubernetes.
When GitOps isn't -- and is -- a good fit
GitOps has value for nearly every organization, but it also carries costs. IT shops must determine if those costs exceed the value, and if they really require a GitOps workflow.
Obviously, organizations that rely on third-party software instead of developing internally are unlikely to benefit from GitOps: Third-party tools rarely change often enough for change management and version control to be an issue. And if your organization is happy with its current practices, leave them alone.
For organizations that do have their own development team but don't use or plan to adopt CI/CD practices, two factors make GitOps a poor choice. First, GitOps is designed to manage complex software changes made at a rapid pace. The fewer software changes an organization makes over a given period, the lower the chance that GitOps will pay off.
Second, without regular positive experiences to validate the effort, it's difficult to keep IT staff committed to maintaining an accurate repository -- and a neglected repository becomes a single source of untruth.
As the rate of application changes increases, the time available to transition code from test to deployment decreases, and the complexity of development branches grows, the potential for disorder rises -- which makes management via a central, single repository more necessary. Organizations that face nearly perpetual changes to applications because of fast-paced business changes, such as those dependent on online and social-media engagements, lead the GitOps charge.
GitOps aligns IT operations information -- for example, parameterization and configuration data -- with software. That means that each version of code is related to different versions of operations information, and all the various branches of change management in the development process have associated operations and configuration data. If there are few changes, there are fewer branches and versions of operations and deployment information to manage. As the volume of information to align decreases, so does the value of GitOps.
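This alignment is visible in a toy repository: because code and configuration share one history, a tag pins matching versions of both, and a rollback restores them together. Everything below is a hypothetical sketch.

```shell
#!/bin/sh
# Sketch: one shared history aligns each code version with its
# operations/configuration data. Names and values are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo 'app v1' > main.py
echo 'replicas: 3' > config.yaml
git add -A && git commit -qm "release 1.0: code and config"
git tag v1.0.0                      # pins BOTH files at matching versions

echo 'replicas: 5' > config.yaml    # an operations-only change
git commit -qam "scale out"

# Rolling back to the tag restores the configuration that matches the code
git checkout -q v1.0.0 -- config.yaml
cat config.yaml
```

With many branches and frequent releases, this pairing of code and configuration versions is what the single repository keeps manageable.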
GitOps workflow and cloud operations
Consider the extent to which your business plans to transform its IT platform and strategy for cloud adoption. Don't bombard development and operations staff with concurrent process overhauls, such as cloud adoption and GitOps. Cloud-native development is still young and evolving into a cohesive set of development tools and practices, and operations teams' experience with it is limited. This makes the integration of cloud-native operations with a repository a moving target.
In cloud-native deployments, it's increasingly common to use a service mesh -- Istio or Linkerd, for example -- as well as serverless middleware such as Knative. Neither changes the GitOps workflow or its benefits, but operations tools of any kind must integrate with the repository.
Both service mesh and serverless adoption are slow, so GitOps tools for these cloud-native approaches -- at time of publication -- are new. That doesn't mean that nobody understands the application of a GitOps workflow to cloud-native production environments, but it does mean it's more difficult for a company to find examples of applications -- or experienced practitioners in the labor force. If your organization is transitioning to cloud platforms and cloud-native applications, wait until the transition is complete and stable before implementing a GitOps strategy.
The single source of truth in GitOps is not just a repository; it also depends on the development and operations teams that build and deploy production applications. The premise of DevOps was to formalize the understanding that an IT organization's development and IT operations teams must work more closely to efficiently deploy new applications -- and new components of existing applications. The GitOps model demands even further cooperation: IT ops and development teams must define both development and deployment changes together, so that the entire IT department can identify and connect critical information. Without this, there's no single truth to distribute.