Johannesburg, 04 June 2024
Responsible AI boosts software security
While the prevalence of high-severity security flaws in applications has dropped slightly in recent years, the risks posed by software vulnerabilities remain high, and remediating these vulnerabilities can stand in the way of new application development. Responsible AI offers a way to balance risk mitigation with software development.
This is according to John Smith, Chief Technology Officer for EMEA at Veracode, who was speaking ahead of the 2024 ITWeb Security Summit in Johannesburg.
Veracode’s State of Software Security 2024 report finds the prevalence of high-severity security flaws in applications is half of what it was in 2016; however, the situation is far from ideal. Around 63% of applications have flaws in first-party code and 70% contain flaws in third-party code. Worryingly, these flaws can take seven to 11 months to fix and 46% of organisations have persistent, high-severity flaws that constitute critical security debt.
Smith says South Africa’s software security environment is no different from the situation in the rest of the world. “We find the same challenge everywhere, in that in any programming problem you attempt to solve, there are many ways that will introduce weakness. Mistakes will happen unless you put security at the heart of development. The only way to mitigate this is by testing early and often, and prioritising remediation,” he says. “However, prioritising is difficult. Only around 10% of organisations can efficiently prioritise risk.”
He says there is an inevitable trade-off: developer time spent fixing weaknesses in software is time not spent creating new features, yet underinvesting in remediation leaves a business exposed if it is hacked.
AI offers significant opportunities to support prioritisation and remediation, but Smith cautions against placing too much faith in generative AI at this stage. Because generative AI sources its data from the internet, it may draw on inaccurate or biased information. He notes that organisations may trust its answers too implicitly without having the proper checks in place.
Smith says the key to effective use of AI to mitigate risks lies in the data it uses. “The approach we have taken with Veracode Fix is to narrow it down to focus on fixing vulnerabilities in code. Instead of using a whole mass of data from outside, we focus on patches designed by our security researchers – using human knowledge and encoding that into the AI. This gets past the challenge of generative AI generating everything itself. Applying a human-generated patch is a more responsible approach and removes poor-quality data and AI hallucinations. It also means we have control over the IP, eliminating the risk of the model reproducing licensed code it sourced from the internet.”