Blog article category for articles on this site covering DevOps, Cloud Infrastructure, Site Reliability, Technical Writing, Project Management and Commercial Writing, along with Event Management and associated areas.

CICD and Jenkins


Jenkins as a CICD tool...

I am currently doing some refresher training around Jenkins, which, to be honest, is a lot of fun. Jenkins is rightly regarded as the industry godfather of CICD pipelines, having evolved from a continuous integration server into a modular CICD tool over time. Its abstraction level is notably different from that of other CICD tools I have used, such as Azure DevOps and AWS CodeStar. This has led me to reflect on its use case when compared to the CICD tools from the leading cloud providers.

In working through my thoughts, the lower level of abstraction was the first notable difference between Jenkins and the CICD tools from cloud providers. The concept of organizing CICD into stages for large actions, such as deploying to a staging server for quality assurance testing, with encapsulated steps defined within them, is common to all CICD tools. What sets Jenkins apart is the very wide range of cloud-agnostic plugins available to cover specific use cases that other tools may not. Let’s now do a high-level comparison of Jenkins against the cloud vendors’ CICD tools.
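The stage-and-step model described above can be sketched in a minimal declarative Jenkinsfile. The stage names and shell commands here are illustrative assumptions rather than from any specific project:

```groovy
// Minimal declarative pipeline: stages group the large actions,
// and steps encapsulate the individual commands inside them.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'      // illustrative build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'
            }
        }
        stage('Deploy to Staging') {
            steps {
                // e.g. push the artifact to a staging server for QA testing
                sh './deploy.sh staging'
            }
        }
    }
}
```

Plugins then slot into this structure as extra step types, which is where the cloud-agnostic flexibility mentioned above comes from.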


Jenkins:

  • A lower level of abstraction and more system administration required

  • Cloud agnostic

  • Open source

  • Train once and, despite its relative complexity, use it on many platforms

  • A CI server VM or Docker container is required, incurring an infrastructure cost, but the Jenkins software itself is free

  • More overall control, as plugin administration is managed by users

Cloud vendor CICD tools:

  • A higher level of abstraction, more managed services available and less system administration required

  • Cloud-specific, noting that service connections to other platforms are now available but provide lesser integration features than their home cloud services

  • Train for one setup approach on the native platform and another for competitor platforms; usage is, however, more streamlined with a better user experience

  • Most vendor tools provide the CI server as a managed service with a free tier

  • Less overall control, as large chunks of the underlying technology are a managed service

You can see that the lower level of abstraction and the wider range of free plugins available drive the continuing success story of Jenkins, but cloud vendors like Azure and even AWS are working to catch up. The future may see the likes of Azure DevOps dominate the DevOps world as a tool of choice, especially when you consider Microsoft’s ownership of GitHub. However, noting the agnostic nature of Jenkins, its free price tag and its wide range of integration features, I cannot see it being dethroned as the king of DevOps tools anytime soon. Stay tuned for more on DevOps in this blog along with articles on other areas of interest in the Writing and Cloud Infrastructure arenas. To not miss out on any updates on my availability, tips on related areas or anything of interest to all, sign up for one of my newsletters in the footer of any page on Maolte. I look forward to us becoming pen pals!

Monitoring Solutions and Digital Business


Why monitoring solution design is key to digital success

As digital transformation continues at pace thanks to the COVID pandemic, many companies are having issues with their digital transformation, including what approach to take to reach a successful outcome. Answering that question in full holds far too much content for one article, so I would like to focus on monitoring and why it should be a prominent feature like no other in a successful digital company. A well-designed and implemented monitoring solution, in the cloud or on-premises, has primary value as an early warning system. With the correct monitoring automation in place for the infrastructure fleet, the value of the solution can extend to the surrounding processes that underpin site reliability.

Think of a good monitoring solution with integrated runbook automation that alerts the on-call engineer to a CPU spike on an instance that has failed a load balancer health check. This notifies the engineer of a single-node issue via an automated ticket that requires investigation. If sticky sessions are not the cause, then what is? What could it be, and does it present a larger danger to our digital product’s SLA? In this example, the load balancer’s runbook automation around health checks has taken one node out of the active pool and, in a separate action, notified the on-call engineer of the incident via automated ticket generation. The automation can even be extended, if desired, to restart the node, relieving the CPU pressure and restoring it automatically to the load balancer’s active node pool. None of this process automation is possible without a well-designed monitoring solution, which triggers automated and even manual process workflows.
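That runbook flow can be sketched in a few lines of Python. This is a hedged sketch only: the threshold, node structure, ticket list and restart step are all hypothetical stand-ins, not a real monitoring or load balancer API.

```python
# Sketch of runbook automation triggered by a failed health check:
# remove the node from the pool, raise a ticket, attempt remediation.
CPU_THRESHOLD = 90.0  # percent; illustrative threshold

def handle_health_check_failure(node, active_pool, tickets):
    """Remove the unhealthy node, notify on-call, and try a restart."""
    active_pool.discard(node["name"])      # take the node out of the active pool
    tickets.append({                       # automated ticket for the on-call engineer
        "node": node["name"],
        "summary": f"CPU at {node['cpu']}% - failed load balancer health check",
    })
    if node["cpu"] > CPU_THRESHOLD:
        node["restarted"] = True           # optional automated remediation
        node["cpu"] = 5.0                  # restart relieves the CPU pressure
    if node["cpu"] <= CPU_THRESHOLD:
        active_pool.add(node["name"])      # restore the now-healthy node

active_pool = {"web-1", "web-2"}
tickets = []
node = {"name": "web-2", "cpu": 97.0}
handle_health_check_failure(node, active_pool, tickets)
```

In a real fleet the same sequence would be driven by the monitoring solution's alarm actions rather than a direct function call; the point is that each step is only possible because the metric was detected in the first place.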

To ensure the availability and maximize the performance of your digital fleet, I would recommend the following:

  • Choose your infrastructure platform carefully. On-premises holds less attraction for most industries than cloud-based alternatives, given the latter’s wide range of managed services, shorter time to deploy and very high SLA commitments on key infrastructure resources.

  • Model your infrastructure management processes to support site reliability, setting service level objectives for key infrastructure resources along with the service level indicators that can be automated via your monitoring solution.

  • Design a monitoring solution noting that time to detect, time to mitigate and auditability should feature strongly in the design and the subsequent management metrics.

  • Automated monitoring agents (e.g. AWS SSM) on nodes provide application-level metrics. This reduces time to detect in a material way when compared to a monitoring solution built on lag indicators such as logs. This metrics-based saving can take up to an hour off the time-to-detect metric for your incident. In the case of major incidents impacting customers, it can mitigate the risk of severe damage to your digital products and your company’s brand reputation.

  • Time to mitigate, especially in the case of major incidents, can be reduced when your early warning monitoring solution alerts you more quickly, automates lower-level remediation (without making it worse) and parses relevant log data for audit by the on-call engineer investigating the incident.

  • Centralized logging off the node not only increases node health via a lighter storage burden, it also streamlines log review during an incident via centralized query tools. It also creates a better auditability structure and investigative path for technical root cause analysis after the incident.
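The service level objective and indicator points above can be illustrated with a small error-budget calculation. The 99.9% target and the request counts are illustrative assumptions, not figures from any real fleet:

```python
# Compute an availability SLI from request counts and compare it to an SLO.
def availability_sli(successful: int, total: int) -> float:
    """Fraction of requests served successfully (the indicator)."""
    return successful / total

def error_budget_remaining(sli: float, slo: float) -> float:
    """Share of the error budget still unspent (1.0 = untouched, < 0 = blown)."""
    allowed_errors = 1.0 - slo   # e.g. 0.001 for a 99.9% objective
    actual_errors = 1.0 - sli
    return 1.0 - actual_errors / allowed_errors

slo = 0.999                                   # 99.9% availability objective
sli = availability_sli(999_500, 1_000_000)    # 0.9995 over the window
budget = error_budget_remaining(sli, slo)     # roughly 0.5: half the budget spent
```

A monitoring solution that computes the indicator automatically turns this from a quarterly report into a live early warning signal for the on-call team.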

There is no doubt that all of this process infrastructure and automation would not succeed if the end-user were not considered in designing the monitoring solution. When its importance to the digital business is recognized, monitoring can act as an internal productivity lever in operations and a competitive advantage over a competitor who overlooks it as a key infrastructure resource. Stay tuned for more on Cloud Infrastructure in this blog along with articles on other areas of interest in the Writing and DevOps arenas. To not miss out on any updates on my availability, tips on related areas or anything of interest to all, sign up for one of my newsletters in the footer of any page on Maolte. I look forward to us becoming pen pals!

Effective Technical Documentation


Why Technical Teams need a Technical Writer

There is no doubt in my mind that company culture in technology companies is writing-averse when it comes to technical writing and effective documentation. I have seen it many times in the past, where it is treated as an annoying add-on job for developers, project managers and engineers. There are many reasons why this is, and many reasons why digital products require good documentation to be effective as enterprise-level products.

Good documentation drafting practices require technique, training and skill to develop. Many management solutions focus on a centralized technical writing team, where a writer gets a ticket to document a piece of software for a customer-facing documentation repository, or perhaps to write a technical runbook for one or more process executions. Whatever the task, the purpose of the document can often get lost if the technical writer simply does not have the process knowledge to be effective in drafting it. This is where team writers become very useful. If your use case requires a large repository of documents to be created and curated, say for a technical operations team, the technical writer as a full-time teammate will bridge the gap between the technology and its reader. This efficacy in delivering know-how via a writer with operational-level process knowledge is well proven for those who implement an effective solution. That means giving the writer an assignment pool of teams and the paid time to get trained in each team’s modus operandi, their existing process infrastructure and what they require, or will require, new documentation for. The smaller the organization, the bigger the spread of teams a writer can be assigned to for this type of full-time engagement in the development and curation of technical documentation.

As for projects, the impulse to onboard a technical writer late in the project lifecycle, just before or after a digital product goes live, is a mistake. Project managers should get a technical writer on the team at the end of the development phase and get them trained in the project workflows for the new product. This allows the technical writer to build process understanding by documenting each iteration of the product. This revision-based approach solidifies technical understanding for the writer, who in turn delivers that understanding to the reader via a consistent, reusable and version-controlled technical writing architecture.

Whilst there are open standards for technical writing such as DITA, good operational practice can often be a matter of management opinion and company culture. Those who engage in professional technical writing practices will always seek to bring in the technical writer early in the project lifecycle. The reason for this is the recognition of the qualitative value of effective technical documentation as part of their digital product. Stay tuned for more on Cloud Infrastructure in this blog along with articles on other areas of interest in the Writing and DevOps arenas. To not miss out on any updates on my availability, tips on related areas or anything of interest to all, sign up for one of my newsletters in the footer of any page on Maolte. I look forward to us becoming pen pals!