Preparing for AI-enabled cyberattacks | MIT Technology Review
MIT Technology Review Insights, in association with AI cybersecurity company Darktrace, surveyed more than 300 C-level executives, directors, and managers worldwide to understand how they are addressing the cyberthreats they face, and how to use AI to help fight against them.
Sixty percent of respondents report that human-driven responses to cyberattacks are failing to keep up with automated attacks, and as organizations gear up for a greater challenge, more sophisticated technologies are critical. In fact, a vast majority of respondents (96%) report they have already begun to guard against AI-powered attacks, with some enabling AI defenses.
Offensive AI cyberattacks are daunting, and the technology is fast and smart. Consider deepfakes, one type of weaponized AI tool: fabricated images or videos depicting scenes or people that never existed, or never took place.
In January 2020, the FBI warned that deepfake technology had already reached the point where artificial personas could be created that pass biometric tests. At the rate AI neural networks are evolving, an FBI official said at the time, high-definition fake videos mimicking public figures, manipulated to make them appear to say whatever the videos' creators choose, could undermine national security.
It is just one example of the technology being used for nefarious purposes. AI could, at some point, conduct cyberattacks autonomously, disguising its operations and blending in with routine activity. And the technology is out there for anyone to use, including threat actors.
Offensive AI risks and an evolving cyber-risk landscape are redefining enterprise security, as humans already struggle to keep pace with advanced attacks. In particular, survey respondents said email and phishing attacks cause them the most concern, with nearly three-quarters finding email threats the most worrisome: 40% of respondents described email and phishing attacks as "very concerning," and another 34% called them "concerning." That is not surprising, given that 94% of detected malware still arrives by email. Traditional methods of stopping email-borne threats rely on historical indicators (previously seen attacks) and on the recipient's ability to spot the signs, both of which can be bypassed by sophisticated phishing incursions.
When offensive AI is thrown into the mix, "fake email" becomes almost indistinguishable from genuine communications from trusted contacts.
How attackers exploit headlines
The coronavirus pandemic presented a lucrative opportunity for cybercriminals. Email attackers in particular followed a long-established pattern of taking advantage of the day's headlines, along with the fear, uncertainty, greed, and curiosity they create, in so-called "fearware" attacks designed to lure victims. With employees working remotely, without office security protocols in place, organizations saw an increase in successful phishing attempts. Max Heinemeyer, director of threat hunting at Darktrace, says his team saw phishing emails evolve immediately when the pandemic hit. "We saw a lot of emails saying, 'Click here to see if people around you are infected,'" he says. When offices and universities began to reopen last year, new scams emerged, with emails offering free covid-19 cleaning programs and tests, Heinemeyer says.
A rise in ransomware has also coincided with the shift to remote and hybrid work environments. "The bad guys know that everyone relies on remote work now. If you get hit and you can't give your employees remote access anymore, it's over," he says. "A year ago, maybe people could come into the office and work offline, but it hurts a lot more now. And we see that criminals have started to exploit that."
What is the common theme? Change: rapid change, and, in the case of the global shift to working from home, complexity. It exposes the shortcomings of traditional cybersecurity, which relies on static, signature-based approaches: static defenses are not very good at adapting to change. Those approaches extrapolate from yesterday's attacks to determine what tomorrow's will look like. "How do you predict tomorrow's phishing wave? It just doesn't work," Heinemeyer says.
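The limitation described above can be sketched in a few lines. The toy filter below, with entirely invented signatures and messages, flags only emails matching previously seen attack patterns, so a novel headline-driven lure like the pandemic phishes Heinemeyer describes passes straight through:

```python
# Hypothetical sketch of a signature-based email filter. The signatures and
# messages are invented for illustration; no real product works from this list.

KNOWN_BAD_SIGNATURES = {
    "verify your account immediately",  # pattern from previously seen attacks
    "your invoice is attached",
}

def is_flagged(email_body: str) -> bool:
    """Flag an email only if it matches a previously seen attack pattern."""
    body = email_body.lower()
    return any(sig in body for sig in KNOWN_BAD_SIGNATURES)

# An old-style phish matches a known signature and is caught...
print(is_flagged("Please verify your account immediately."))           # True

# ...but a novel, headline-driven lure has no matching signature.
print(is_flagged("Click here to see if people near you are infected"))  # False
```

Extrapolating from yesterday's attacks means the signature set is always one campaign behind, which is exactly the gap adaptive, behavior-based defenses aim to close.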
Download the full report.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.