Beneficiaries:
• Software industry
Existing research:
The preservation of the privacy and confidentiality of information flows and of the designed solutions is an issue that is rarely considered.
AI/ML-based penetration testing
Type: AI for cybersecurity
Description:
AI-powered penetration testing
Objectives:
1. Use AI/ML to test a system for security vulnerabilities that an attacker could exploit, and to anticipate what an attacker would do next.
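As an illustration of objective 1, here is a minimal sketch (not taken from the study) of the ranking step an AI-assisted penetration-testing tool might perform: a hand-weighted logistic score stands in for a model learned from past engagements, and every service name, feature and weight below is hypothetical.

import math

# Hypothetical per-service features gathered during reconnaissance; a real
# tool would extract these automatically and learn the weights from
# labelled penetration-test data.
services = [
    {"name": "legacy-ftp",  "outdated_version": 1, "exposed_admin": 0, "default_creds": 1},
    {"name": "web-portal",  "outdated_version": 0, "exposed_admin": 1, "default_creds": 0},
    {"name": "patched-api", "outdated_version": 0, "exposed_admin": 0, "default_creds": 0},
]
WEIGHTS = {"outdated_version": 2.0, "exposed_admin": 1.5, "default_creds": 2.5}
BIAS = -2.0

def exploit_likelihood(svc):
    """Logistic score: a rough probability that the service is exploitable."""
    z = BIAS + sum(WEIGHTS[f] * svc[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Probe the most promising targets first, then observe what access each
# successful probe would give an attacker.
for svc in sorted(services, key=exploit_likelihood, reverse=True):
    print(f"{svc['name']}: estimated exploitability {exploit_likelihood(svc):.2f}")

Ranking candidates this way lets the tester spend effort where the model expects the weakest defences, which is the "find what an attacker could exploit" half of the objective.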
Entities:
• Security researchers
• Application developers
Beneficiaries:
• Cybersecurity practitioners
• Cybersecurity industry
Existing research:
Threat actors can take advantage of training data by planting a backdoor, and they can use AI to identify the vulnerabilities most likely to be exploitable. Penetration testing, in turn, can uncover vulnerabilities that give outsiders access to a model's training data. Many automated tools now complement manual penetration testing. These automated solutions have some basic AI capabilities, and these capabilities are gradually increasing thanks to ongoing research and open competitions. For example, the 2016 Cyber Grand Challenge, a DARPA-sponsored competition, challenged participants to build hacking bots that competed against each other. These artificially intelligent bots performed penetration tests to find security vulnerabilities and close them before competing teams could exploit them. Mayhem, for example, was able to find, fix and monitor for intrusions on its host system while simultaneously finding and exploiting vulnerabilities on rival systems.
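To make the Cyber Grand Challenge description concrete, the following sketch shows the mutate-run-observe loop at the core of such bots, reduced to a toy: the target function, its planted bug and the mutation strategy are hypothetical stand-ins, whereas real systems such as Mayhem fuzz compiled binaries and triage crashes with techniques like symbolic execution.

import random

def target(data: bytes) -> None:
    """Toy program under test with a planted, hypothetical bug."""
    if len(data) > 4 and data[:4] == b"FUZZ":
        raise RuntimeError("crash: unchecked header length")

def mutate(seed: bytes) -> bytes:
    """Randomly overwrite one byte of the seed input."""
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

random.seed(0)  # reproducible run
seed = b"FUZA-2024"
for i in range(10_000):
    candidate = mutate(seed)
    try:
        target(candidate)
    except RuntimeError as err:
        # A crashing input is a candidate vulnerability: a CGC-style bot
        # would now generate an exploit for rivals and a patch for itself.
        print(f"iteration {i}: {err!r} triggered by {candidate!r}")
        break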
As we write this study, Generative Pre-trained Transformer software is emerging, first through OpenAI's ChatGPT and then with the promise of a handful of competitors. Research