Don’t fear chaos engineering if you are a DevOps tester
Source – techtarget.com
I’m a DevOps tester on a newly formed team. The engineers and operations people want to implement chaos engineering on our product. Why am I the only one upset at this?
Chaos engineering is the practice of looking for weaknesses in an application by deliberately subjecting it to randomized failures. A team sabotages its own application in production to evaluate how robust it is, and the process reveals how the application endures unanticipated disruptions. The underlying theory is that an application must tolerate the failure of any software or hardware component beneath it. For a couple of reasons, this concept concerns experienced testers.
Intentionally trying to make an application fail is the top concern. For example, chaos engineering prompts a DevOps tester to bring down instances in a cluster, simulate a denial-of-service attack or create a network outage. However, deliberately breaking an application in this way has long been a staple of traditional testing as well.
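The core loop of such an experiment is simple: state a steady-state hypothesis, inject a fault and verify the hypothesis still holds. The sketch below models that loop in plain Python with a toy in-process "cluster"; the class and names are hypothetical and stand in for real infrastructure tooling, not any particular chaos framework.

```python
import random

# Hypothetical in-process model of a replicated service: each "instance"
# can serve requests as long as it is up.
class Cluster:
    def __init__(self, replicas):
        self.instances = {f"instance-{i}": True for i in range(replicas)}

    def kill_random_instance(self):
        # Fault injection: take one randomly chosen live instance down.
        victim = random.choice([n for n, up in self.instances.items() if up])
        self.instances[victim] = False
        return victim

    def serve(self):
        # The service is "healthy" if at least one replica is still up.
        return any(self.instances.values())

# Steady-state hypothesis: with three replicas, losing any single
# instance must not make the service unavailable.
cluster = Cluster(replicas=3)
assert cluster.serve()  # baseline: healthy before the experiment
killed = cluster.kill_random_instance()
assert cluster.serve(), f"service went down after losing {killed}"
```

In a real environment, the fault injection would terminate an actual VM or container and the hypothesis check would hit a live health endpoint, but the hypothesis-inject-verify shape stays the same.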
The second concern is the idea of testing in production. For years, testers have worked in a staging environment, never on a production application. But with today's cloud-hosted applications, it's impossible to replicate the production environment anywhere except in production itself. If you control your chaos process, keeping experiments small and reversible, then diagnosis and recovery should be rapid.
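"Controlling" the chaos process usually means limiting the blast radius: the experiment runs only while an agreed health metric stays within bounds, and it rolls back the moment that metric degrades. A minimal sketch of such an abort guard, with hypothetical function names and an assumed threshold, might look like this:

```python
# Hypothetical abort guard for a production chaos experiment: keep the
# fault active only while a health metric stays inside the agreed limit.
ERROR_RATE_ABORT_THRESHOLD = 0.05  # assumed SLO-derived limit

def run_experiment(inject_fault, rollback, read_error_rate, steps=10):
    inject_fault()
    for _ in range(steps):
        if read_error_rate() > ERROR_RATE_ABORT_THRESHOLD:
            rollback()  # abort: restore normal operation immediately
            return "aborted"
    rollback()          # experiment finished; clean up regardless
    return "completed"

# Toy usage with simulated metrics: the error rate spikes after the fault,
# so the guard aborts the experiment and undoes the injected fault.
readings = iter([0.01, 0.02, 0.09])
state = {"fault": False}
result = run_experiment(
    inject_fault=lambda: state.update(fault=True),
    rollback=lambda: state.update(fault=False),
    read_error_rate=lambda: next(readings, 0.0),
)
# result == "aborted" and state["fault"] is False
```

The design point is that the abort path is part of the experiment itself, not an operator's afterthought, which is what makes diagnosis and recovery rapid.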
Chaos engineering matters in cloud computing and in DevOps practices. The cloud is a far more complex environment than the traditional data center, and testing outside of production alone doesn't give full insight into the quality of an application.
Chaos engineering means more than testing application quality; it engenders a more proactive development mindset. It also examines the entire production environment, including data center issues, OS instances, hardware, load balancing, network outages and DNS. If you're a DevOps tester, embracing it is one way to remain relevant at a time when many aspects of the profession are being called into question.