There’s a good reason why Tesla CEO Elon Musk believes Artificial Intelligence represents a “fundamental risk to the existence of civilization.” It was demonstrated at the Def Con hacking convention in Las Vegas last week.
Using Musk’s OpenAI framework, a team of researchers has created malware that can trick antivirus software – even AV that incorporates machine learning.
Hyrum Anderson, technical director of data science at Endgame, showed in a keynote presentation how the system can change its binaries and avoid detection using OpenAI as the learning framework.
For comparison, Anderson mentioned research done by Google where swapping only a handful of pixels in an image can fool image recognition software into mistaking a bus for an ostrich.
The artificially intelligent malware was trained for 15 hours over 100,000 iterations. Some 16% of the custom malware samples slipped past AV engines undetected and infected the target machine.
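The training setup Anderson describes – an agent that repeatedly mutates a malware binary and is rewarded when the result evades detection – follows the standard reinforcement-learning loop of the OpenAI Gym toolkit. The sketch below is purely illustrative: the action names, suspicion score, and detector threshold are invented stand-ins, not Endgame’s actual feature set or model.

```python
import random

random.seed(0)

# Hypothetical functionality-preserving mutations (illustrative names only).
ACTIONS = ["append_bytes", "add_section", "pack_unpack", "rename_sections"]

class ToyEvasionEnv:
    """Gym-style environment sketch: state is a made-up 'suspicion' score,
    reward is 1.0 once a stand-in detector stops flagging the sample."""

    def reset(self):
        self.score = 5.0  # pretend suspicion score of the original binary
        return self.score

    def step(self, action):
        # Each mutation perturbs the detector's score by a random amount.
        self.score -= random.uniform(0.0, 1.0)
        evaded = self.score < 3.0  # invented detector threshold
        reward = 1.0 if evaded else 0.0
        return self.score, reward, evaded

env = ToyEvasionEnv()
obs = env.reset()
for t in range(100):
    action = random.choice(ACTIONS)  # a trained agent would follow a policy
    obs, reward, done = env.step(action)
    if done:
        print(f"evaded after {t + 1} mutations")
        break
```

In the real system the agent learns, over many such episodes, which mutation sequences are most likely to slip a sample past the classifier; here the agent simply picks actions at random to show the environment loop.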
“All machine learning models have blind spots,” he said. “Depending on how much knowledge a hacker has they can be convenient to exploit.”
Those eager to test out the software-generation tool can find it on GitHub. Anderson encourages tinkerers to take it for a spin and report their findings.
A not-for-profit AI research group, OpenAI is dedicated to “discovering and enacting the path to safe artificial general intelligence.” Based on the belief that Artificial General Intelligence (AGI) is set to become the most significant technology ever created, OpenAI’s larger scope is to build “safe” AGI, while also ensuring that the benefits of artificial intelligence are distributed widely and evenly.
OpenAI was co-founded by Elon Musk and Sam Altman, both sharing the belief that artificial general intelligence – if done wrong – could pose an existential risk to humanity.