If you think you can’t afford to implement a high-quality software development process, think again.
For over thirty years, we’ve known that certain quality control practices reduce errors without increasing costs1. What’s more, software maintained with these quality processes can cost up to 90% less over its lifetime than software without them2. One reason is issue detection and resolution: it’s the most expensive and time-consuming form of work, often consuming up to 50% of a programmer’s development time3. And the cost climbs steeply once a bug reaches production; IBM estimates that such a bug is 100x more expensive to repair4.
When software fails, the business and the team start taking the punches. The damage can spiral: customer support tickets pile up, customers churn, revenue is lost, the brand takes a hit, response management consumes the team, and stress and anxiety mount (leading to burnout). Meanwhile, the opportunity cost compounds, because the team is fixing bugs rather than building new customer value. Those are not good days.
To get ahead of this, we’re exploring four quality control practices that have been shown to be not just affordable but also hallmarks of exceptional programming. The goal is to make the best possible software at the lowest possible cost in the least amount of time.
The four practices are:
- A Culture of Organizational Learning
- Version Control and Code Reviews
- Agile, Iterative QA Workflows
- Automated Ephemeral Infrastructure
There is a mantra to keep in mind when trying to ship higher-quality software faster, and it is woven into each practice we are discussing: Minimize Reinvention, Minimize Rework. The greatest impacts to quality, and even productivity, will be establishing systems that strive to minimize reinvention and rework5.
Facilitate Organizational Learning
The biggest and perhaps most critical factor in performance will always be people. Burnout, fear, and unrealistic expectations will break us, no matter how good a system is. Making it safe to learn enables genuine digital transformation, and learning unlocks the core DevOps tenets of fostering shared ownership and creating rapid feedback loops.
Learning is the pathway to working smarter, and building software is knowledge work. Without learning, teams are unlikely to sustain performance, creativity, or growth6, and that begins with establishing psychological safety to build trust within an organization. In her TED talk, Professor Amy Edmondson outlines the benefits of feeling psychologically safe in the workplace and why building this trust is essential. If an organization builds trust and treats mistakes as opportunities to learn, the team gains the freedom, and the benefit, of the space to make them.
Version Control and Code Reviews for Organizational Learning
As a learning organization, a fundamental way to reduce knowledge silos is through code reviews. When a developer finishes working on a feature or issue, another developer is asked to look over the code for possible errors or inconsistencies before testing. This process has been shown to remove 70% of software defects and boost productivity by at least 20%7. Remember, the earlier we catch bugs, the less costly they are to resolve. Code reviews help improve software estimation, spark innovation, and minimize reinvention. They used to be a cumbersome process, but with version control and infrastructure automation (also known as infrastructure as code), code reviews can be done within a clean, fully isolated, and on-demand environment.
Using a version control system like Git is how you enable multiple teams to work on a project without adversely impacting one another’s work. Git’s support for multiple, isolated work streams (branches) means code can be built, tested, integrated, or even scrapped in a controllable, transparent, and maintainable manner. Git is free and open source, and in many cases it delivers even more value when combined with a hosting provider such as GitHub or GitLab.
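The branch-and-review workflow described above can be sketched in a few Git commands. This is a throwaway demonstration: the repository path, branch name, and file are all illustrative.

```shell
set -e
# Create a temporary repo so the demo is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Initial commit"

# Work happens on an isolated branch; main is untouched until review passes.
git switch -q -c feature/login-form
echo "login form" > login.txt
git add login.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add login form"

# After a teammate reviews and approves, the branch is merged back.
git switch -q main
git merge -q --no-edit feature/login-form
git log --oneline
```

Because the feature lived on its own branch, it could just as easily have been scrapped with `git branch -D feature/login-form`, leaving main untouched.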
Agile QA for Faster Testing Cycles
Traditional project management styles, like waterfall, proceed through linear, sequential phases. Each phase depends on the deliverables of the previous one, which in turn creates a critical path toward a release. Progress is more easily measured since the full scope of work is known in advance. Waterfall works best on stable products built with clearly understood technology.
When it comes to developing brand new software or using technology that’s not well understood, a waterfall approach can be problematic. If it’s never been built, you’re bound to discover additional complexities along the way. Agile software development focuses on rapidly delivering working software. Work is broken down into smaller increments and time-boxed into phases called “sprints,” usually lasting a few weeks. Sprints have a running list of deliverables prioritized by the stakeholders, though it may take several sprints to release new features. Unlike waterfall, a successful agile approach requires a high level of stakeholder participation. The project team and stakeholders are continually reviewing work through daily builds and end-of-sprint demos.
The iterative and transparent nature of an agile workflow brings new possibilities. Imagine a visible and interactive development process for your team. Phases like testing, QA, and accessibility can happen in lockstep with development. This isn’t to suggest that a designated QA stage should not occur, but that the team is catching regressions sooner, when they’re more manageable and more affordable to fix (minimizing rework). Agile testing leverages continuous integration to frequently run automated tests with every code merge, ensuring that every change, no matter how small, is checked for quality. Test automation is an ever-changing landscape, and there are many different kinds of tests, which can play various roles in your project lifecycle.
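As a concrete sketch of running automated tests on every merge, here is a minimal continuous integration configuration in GitHub Actions syntax. The `npm ci` and `npm test` commands are placeholders for your project’s own install and test steps:

```yaml
# .github/workflows/ci.yml — run the test suite on every pull request
# and every push to main, so no change lands unchecked.
name: CI
on:
  pull_request:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: npm ci
      - name: Run automated tests
        run: npm test
```

Equivalent configurations exist for GitLab CI, CircleCI, and other providers; the essential idea is the same trigger-on-merge loop.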
CI tools like Tugboat ship with automated and configurable accessibility, SEO, and performance testing. Visual regression tests are used with our Visual Diffs feature to compare the UI of new code against production to make sure new visual bugs are not introduced. A build can be flagged as “failed” if the variance exceeds an amount specified in the test. The ability to set regression thresholds ensures not every build needs to be reviewed, so project velocity stays a priority, and the spirit of iterative and transparent collaboration remains intact.
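The pass/fail decision behind a regression threshold can be illustrated in a few lines of Python. This is not Tugboat’s implementation, just a sketch of the idea: compare two renderings pixel by pixel and flag the build when the fraction of changed pixels exceeds a configured threshold.

```python
def visual_diff_fails(baseline, candidate, threshold=0.02):
    """Return True if the fraction of differing pixels exceeds `threshold`.

    `baseline` and `candidate` are same-length sequences of pixel values
    (e.g. flattened RGB tuples). A real tool would decode screenshots;
    this sketch shows only the thresholding decision.
    """
    if len(baseline) != len(candidate):
        return True  # different dimensions always count as a regression
    changed = sum(1 for a, b in zip(baseline, candidate) if a != b)
    return changed / len(baseline) > threshold

# Example: 3 of 100 pixels changed, a 3% variance.
prod = [0] * 100
new = [0] * 97 + [1] * 3
print(visual_diff_fails(prod, new))        # True: above the 2% default, build flagged
print(visual_diff_fails(prod, new, 0.05))  # False: within a looser 5% threshold
```

Tuning the threshold is the trade-off: too tight and reviewers drown in false positives from anti-aliasing noise; too loose and real regressions slip through.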
Automated, Ephemeral Infrastructure for CI/CD
We’ve covered the people and the processes, but what about infrastructure? Infrastructure has become elastic: it is now much easier to scale resources up or down to meet demand. Testing and staging environments were traditionally costly physical servers left running to perform transient, ad-hoc tasks, and the thought of on-demand, unlimited staging infrastructure was a cost-prohibitive pipe dream. These days, cloud infrastructure and container-based systems are built for scaling and for spinning up dynamic environments that may only need to last for minutes or weeks at a time, eliminating that bottleneck.
Operations teams have started to adopt the playbook of developers, bringing the concept of application source code to infrastructure. Infrastructure as Code (IaC) automates the provisioning of infrastructure through a human-readable configuration file that often lives in the root of your project’s Git repository. The environment your application runs in becomes codified and version controlled, giving you a testable and reliable way of deploying infrastructure the moment you need it while also eliminating “configuration drift.”
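Here is a minimal illustration of what such a file can look like, using Docker Compose syntax as one common example (service names, ports, and images are placeholders; Tugboat and other IaC tools use their own config formats built on the same idea):

```yaml
# docker-compose.yml — lives in the repo root, version controlled
# alongside the application code it describes.
services:
  app:
    build: .            # build the app image from the project's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15  # pinned image version prevents configuration drift
    environment:
      POSTGRES_PASSWORD: example
```

Because this file is reviewed and merged like any other code, an environment change goes through the same code review and CI checks as an application change.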
IaC allows you to build on-demand environments for open pull/merge requests so your entire team can preview the change, run test automation, and ensure the code works as intended in a production-like environment before going live. It’s a critical component of adopting an agile workflow and creating a transparent and collaborative development process. Designers, product managers, and client stakeholders can all provide feedback with no technical barriers to get in the way.
A lot has changed in thirty years, and a new era of quality is in front of us. Now more than ever, there are better ways to standardize tooling and processes across the business. It’s healthy to understand that building trust and focusing on being a learning organization can still pay the largest dividends no matter the era.
Infrastructure has become more like software, allowing disparate teams to begin working together earlier in a project’s lifecycle. And there is a near-endless array of tooling options and methodology opinions. At Tugboat, we simply couldn’t build our deployment previews without an IaC philosophy. At the risk of adding more noise, we are doubling down on our efforts to keep it simple: extracting the merits of the past to propel us into a more scalable, collaborative future, one rooted in minimizing reinvention and rework.
McGarry, F., Page, G., & Card, D. (1987). Evaluating Software Engineering Technologies. IEEE Transactions on Software Engineering, 13(7), 845–851. doi: 10.1109/TSE.1987.233495 ↩
Jones, C. (2000). Software Assessments, Benchmarks, and Best Practices. Addison-Wesley Longman Publishing Co., Inc. ↩
Dawson, M., Burrell, D., Rahim, E., & Brewster, S. (2010). Integrating Software Assurance into the Software Development Life Cycle (SDLC). Journal of Information Systems Technology and Planning, 3, 49–53. ↩
Nembhard, I. M., & Edmondson, A. (2012). Psychological Safety: A Foundation for Speaking Up, Collaboration, and Experimentation in Organizations. doi: 10.1093/oxfordhb/9780199734610.013.0037 ↩