False positives in security automation
Recently I came across the question "What if automating security testing gives false positives?". This applies to the DevSecOps concept as well. A false positive is a result where a tool flags something as a vulnerability when it is not actually one. Any vulnerability is just a software bug while it sits in the development stage, but it becomes a big problem if it is deployed into production. We use security automation to catch and fix bugs in the development stage. A security automation framework performs security checks at every stage, from the repository to production: the moment a developer pushes code to the repository, it undergoes static code analysis, then application security testing in the test environment, followed by automated penetration testing before the production deployment.
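The staged gate described above can be sketched in a few lines. This is a toy illustration, not a real scanner: the stage functions, the `password=` rule, and the `run_pipeline` helper are all hypothetical names standing in for actual tools (e.g. a SAST scanner, a DAST suite, a pentest harness).

```python
# Hypothetical sketch of a staged security gate: each stage returns a list of
# findings, and the pipeline stops before production if any stage reports one.
def static_analysis(code):
    # Toy rule standing in for a real SAST tool: flag hard-coded credentials.
    return ["hard-coded secret"] if "password=" in code else []

def app_security_test(code):
    return []  # placeholder for application security testing in the test env

def automated_pentest(code):
    return []  # placeholder for an automated penetration test stage

def run_pipeline(code):
    for stage in (static_analysis, app_security_test, automated_pentest):
        findings = stage(code)
        if findings:
            return ("blocked", findings)  # caught before production
    return ("deployed", [])

print(run_pipeline('db.connect(password="hunter2")'))   # blocked by static analysis
print(run_pipeline("db.connect(creds_from_vault())"))   # passes all stages
```

The point of the sketch is the ordering: the cheapest check (static analysis) runs first on every push, and the code only reaches production after every stage returns no findings.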
Now, if we do not automate, hundreds of bugs will be deployed to production systems, and if exploited they could cause a big loss to the organization. Automation does produce some false positives at various stages, but the ratio is roughly 5:100: of 100 findings, about 95 are real bugs and the remaining 5 are marked as false positives. Without automation we would end up shipping all 100 bugs to production; with automation, the worst case is some latency in the development life-cycle while false positives are handled. False positives in security automation can be reduced with a little manual intervention. As far as security is concerned, a few false positives are acceptable compared to the loss a real vulnerability could cause.
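The triage step with manual intervention can be made concrete with a toy example using the 5:100 ratio from the text. The data and the `manual_review` function are purely illustrative; in practice the review would be a human analyst confirming or dismissing each finding.

```python
# Toy triage pass: 100 automated findings, of which 5 are false positives
# (the 5:100 ratio). A manual review clears the false positives so that
# only real bugs block the release. All names here are illustrative.
findings = [{"id": i, "real": i % 20 != 0} for i in range(100)]  # 5 are not real

def manual_review(finding):
    # Stand-in for a human analyst confirming or dismissing the finding.
    return finding["real"]

confirmed = [f for f in findings if manual_review(f)]
dismissed = [f for f in findings if not manual_review(f)]
print(len(confirmed), len(dismissed))  # 95 real bugs fixed, 5 dismissed
```

Even in this worst case, the team spends review effort on only 5 findings instead of letting 100 real bugs reach production, which is the trade-off the paragraph above argues for.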