AI Security Trends To Look Out For in 2022
By Laura Cowan
Laura K. Cowan is a tech editor and journalist whose work has focused on promoting sustainability initiatives for automotive, green tech, and conscious living media outlets.
This article is a guest post from David Balaban, a computer security researcher who has written for a number of high-profile tech and cybersecurity websites. Cronicle accepts guest posts from professionals in the tech and media space on news and best practice topics relevant to Midwest tech industries. If you would like to write a guest post for Cronicle Press Tech News, please email the editor with your pitch and bio.
David Balaban is a computer security researcher with over 17 years of experience in malware analysis and antivirus software evaluation. David runs MacSecurity.net and Privacy-PC.com projects that present expert opinions on contemporary information security matters, including social engineering, malware, penetration testing, threat intelligence, online privacy, and white hat hacking. David has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
The applications of artificial intelligence (AI) go far beyond the realm of sci-fi these days. It has revolutionized numerous industries, including robotics, healthcare, finance and banking, sports, software development, surveillance, entertainment, and education. It also underpins cutting-edge cybersecurity solutions that outperform traditional tools in both speed and accuracy.
The benefits of leveraging AI to thwart cybercrime are multi-pronged. Its fastest-evolving branches, machine learning and deep learning, help identify zero-day threats with unmatched precision through predictive modeling and advanced heuristics. Security platforms that use these technologies continuously fine-tune their models of baseline network activity and get better at spotting dodgy deviations over time.
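The baseline-and-deviation idea behind these platforms can be illustrated with a minimal sketch. This is a toy stand-in, not any vendor's actual algorithm: it models "normal" activity with a mean and standard deviation over hypothetical per-minute login counts, then flags values that stray too far from that baseline.

```python
import statistics

def build_baseline(samples):
    """Summarize 'normal' activity as a mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical per-minute login counts observed during normal operation.
normal_logins = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
baseline = build_baseline(normal_logins)

print(is_anomalous(14, baseline))   # typical volume -> False
print(is_anomalous(250, baseline))  # sudden burst   -> True
```

Production systems continuously refit such models on fresh telemetry, which is the "fine-tuning over time" the paragraph above refers to.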
The ability to analyze massive amounts of data is another major advantage of AI over humans. Such top-notch systems can autonomously scour millions of events in mere seconds to identify red flags and vulnerabilities that cybersecurity personnel would miss. This makes AI-assisted oversight an irreplaceable addition to the modus operandi of security operations centers (SOCs).
According to a report by the Capgemini Research Institute, 69% of surveyed organizations claim that using artificial intelligence enhances the efficiency of their security teams, and about 75% of executives say they are testing the tech in cybersecurity use cases. Unsurprisingly, this market is seeing substantial growth. It is expected to reach $38.2 billion by 2026, up from $8.8 billion in 2019.
What does the near future hold for AI in cybersecurity?
There is no denying that AI will become more deeply integrated into the fabric of incident detection and response down the road. At this point, several trends stand out from the crowd and could reshape the security landscape most dramatically in 2022. The following paragraphs shed light on a few innovative AI-based projects from companies that are thinking outside the box.
Hardening industrial IoT protection
Industrial facilities are increasingly drifting away from the “air gap” approach, in which devices and their underlying systems are isolated from the open Internet. The new technology era has brought these networks online to ensure better interoperability and to collect data for quick, informed business decisions.
This evolutionary paradigm shift, known as Smart Factory or Industry 4.0, hinges on a plethora of Industrial Internet of Things (IIoT) entities that exchange information and work in concert. The main pitfall is that threat actors now have more opportunities to infiltrate these digital environments, some of which are components of countries’ critical infrastructures.
Siemens Energy, a long-standing tech giant based in Germany, launched an AI-powered cybersecurity service to fill the void. Its Managed Detection and Response (MDR) tool uses the company’s state-of-the-art EOS.ii framework focused on energy asset intelligence via AI and machine learning techniques.
Solutions of this kind are shaping up to be the pillars of proactive defenses against cyberattacks for oil and gas, power generation, and other high-profile industries. This trend is now particularly relevant as it fits the context of the dynamic digital transformation in the energy sector.
The rise of intrusion prevention systems “on steroids”
With the proliferation of cloud technology and remote work across enterprise ecosystems, traditional perimeter-based protection no longer works, and the attack surface keeps growing. Viewed through the prism of a cybercriminal’s mindset, this means more entry points to exploit. The challenge has encouraged some companies to bring AI into play, and the results are hugely promising.
One such project is being implemented by Darktrace, an information technology firm with headquarters in the UK and the US. Its vanguard offering, the Enterprise Immune System (EIS), is a game-changer in foiling unauthorized network access.
With AI at its heart, it uses a multitude of “Client Sensors” to autonomously extract what are called “patterns of life” from an organization’s data streams, generated at the level of endpoints, user activity, cloud-based resources, and the internal network as a whole.
This way, the solution pinpoints anomalies with ultra-high accuracy without having to maintain an up-to-date database of threats inherent to a specific business entity. It also boasts incident investigation and response capabilities. The service forestalls ransomware assaults, data breaches, account hacks, and insiders’ foul play. Importantly, EIS is a self-learning system that continuously hones its predictive models to yield higher security dividends for each customer.
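To make the signature-free approach concrete, here is a deliberately simplified sketch of per-entity profiling. The class, the baselining flow, and the user/host names are all hypothetical; real "pattern of life" systems model far richer behavior than a set of visited hosts, but the core contrast with a threat database is the same: anything outside the learned profile is suspect, even if it matches no known attack.

```python
from collections import defaultdict

class PatternOfLife:
    """Toy per-user profile: the set of hosts each account normally reaches."""

    def __init__(self):
        self.profiles = defaultdict(set)

    def observe(self, user, host):
        """Learn normal behavior during a baselining period."""
        self.profiles[user].add(host)

    def is_deviation(self, user, host):
        """Flag activity outside the learned profile -- no signature DB needed."""
        return host not in self.profiles[user]

pol = PatternOfLife()
for host in ("mail", "wiki", "crm"):
    pol.observe("alice", host)

print(pol.is_deviation("alice", "crm"))          # False: routine access
print(pol.is_deviation("alice", "domain-ctrl"))  # True: anomalous access
```

Because the profile is learned per customer rather than shipped as a static blocklist, the same detector adapts to each organization, which is what "self-learning" means in this context.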
On a side note, Darktrace partnered with the McLaren Racing team in early 2020 to work on an AI-driven cybersecurity project. The recent announcement of extended cooperation suggests that the service fully meets the supercar maker’s expectations.
Hardware-accelerated AI cybersecurity frameworks
Computing power is a bottleneck that can limit the efficiency of AI security tools, since it constrains the complexity of the models underlying threat detection algorithms. If an organization runs short of processing resources when deploying an intelligent protection tool, some threats can slip under the radar.
One way to overcome this obstacle is to build security architectures that combine pre-trained machine learning models with programmable processors that collect company-specific telemetry to create new behavioral patterns in real time. NVIDIA, a US-based chipmaker with almost three decades in the business, launched such a cloud-native AI framework in April 2021.
Called Morpheus, it allows developers to create workflows that cycle through and analyze large amounts of real-time data without degrading network performance. The exceptional throughput of this solution stems from the fact that it harnesses the NVIDIA BlueField Data Processing Unit (DPU), the company’s proprietary data center infrastructure on a chip.
The resulting tool uniquely profiles all users and devices within a protected network, monitors every single transaction to spot deviations from the norm, and remediates threats before they cause damage. It is particularly effective in preventing data leaks, phishing attacks, and unknown malware.
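The per-device, every-transaction monitoring described above can be sketched as a streaming pipeline. This is an illustrative toy, not Morpheus itself: it keeps a small rolling window of recent transfer sizes per device and flags any transaction far above that device's recent norm, the way an exfiltration burst might stand out.

```python
from collections import deque

def streaming_monitor(stream, window=5, factor=3.0):
    """Yield (device, bytes) events that exceed `factor` times the
    device's rolling average over its last `window` transactions."""
    history = {}
    for device, nbytes in stream:
        win = history.setdefault(device, deque(maxlen=window))
        if len(win) == win.maxlen and nbytes > factor * (sum(win) / len(win)):
            yield device, nbytes  # deviation: possible data exfiltration
        win.append(nbytes)

# Hypothetical telemetry: steady ~1 KB transfers, then a 90 KB spike.
events = [("cam-1", 900), ("cam-1", 1100), ("cam-1", 1000),
          ("cam-1", 950), ("cam-1", 1050), ("cam-1", 90_000)]
print(list(streaming_monitor(events)))  # [('cam-1', 90000)]
```

The generator-per-event shape is the point: nothing is batched to disk, so anomalies surface while the traffic is still in flight, which is what hardware acceleration makes feasible at data-center scale.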
At this point, the Morpheus framework can be integrated with third-party data center security platforms from renowned industry players, including Cloudflare, Fortinet, ARIA, and F5. It maximizes the defenses of these providers’ offerings for corporate customers. In 2022, the principle of DPU-accelerated AI implementation will likely keep creating ripples in the cybersecurity realm.
Phishing prevention at machine speed
The scourge of phishing has reached disconcerting heights. It is also an increasingly diverse cybercrime phenomenon, with notorious spin-offs such as CEO fraud and business email compromise (BEC) scamming enterprises out of fortunes. Credential phishing hoaxes target regular users on a large scale, too.
Unfortunately, con artists are getting better at bypassing traditional email filters. A classic Secure Email Gateway (SEG) mainly relies on a database of known phishing templates to block unwanted messages. A few tweaks in the composition and wording of a dodgy email, combined with a clever trick such as the use of a ZIP file with a double structure, can suffice to get around standard protection.
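Why small tweaks defeat template matching is easy to demonstrate. The sketch below models a naive gateway that blocks messages by exact fingerprint of known phishing bodies (the message text and hashing scheme are invented for illustration): a one-word rewording produces a different hash, so the tweaked copy sails through.

```python
import hashlib

def fingerprint(body: str) -> str:
    """Exact fingerprint of a message body, as a naive filter might store it."""
    return hashlib.sha256(body.lower().encode()).hexdigest()

# Blocklist seeded with a known phishing template.
known_phish = {fingerprint("Your account is locked. Click here to verify.")}

original = "Your account is locked. Click here to verify."
tweaked  = "Your account was locked. Click here to verify now."

print(fingerprint(original) in known_phish)  # True: exact copy is blocked
print(fingerprint(tweaked) in known_phish)   # False: minor rewording evades it
```

Feature-based detectors sidestep this brittleness by scoring what a message *does* rather than matching what it literally *says*, which is where the AI approaches below come in.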
When it comes to warding off these frauds, AI makes a difference. A particularly unorthodox phishing detection mechanism is to leverage the concept of “computer vision”. The Visual-AI technology provided by a company called VISUA is a step forward in this area. Instead of examining emails at the level of code or signatures, it focuses on scrutinizing their visual manifestation. To an extent, this is like looking at a message with human eyes.
The service provides an API that can be seamlessly integrated with an organization’s existing anti-phishing system. When an email is received, Visual-AI captures its flattened image and unleashes its machine learning potential to check it for high-risk components, such as trigger words, logos, forms, and buttons. Then, it calculates the threat score along with the spotted anomalies and sends the verdict back to the main phishing detection platform, at which point a highly accurate output is generated.
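The final scoring step can be sketched as a weighted aggregation over detected visual elements. The element names, weights, and threshold below are entirely hypothetical (VISUA's actual scoring is proprietary); the sketch only shows the general shape of turning detections into a verdict.

```python
# Hypothetical weights for elements a computer-vision scanner might extract
# from a rendered email image.
RISK_WEIGHTS = {
    "brand_logo": 0.35,       # impersonated company logo
    "credential_form": 0.30,  # embedded login form
    "urgent_wording": 0.20,   # "act now", "account suspended", ...
    "call_to_action": 0.15,   # prominent clickable button
}

def threat_score(detected_elements):
    """Combine detected high-risk elements into a score between 0 and 1."""
    return round(sum(RISK_WEIGHTS.get(e, 0.0) for e in set(detected_elements)), 2)

def verdict(detected_elements, threshold=0.5):
    """Return (score, is_phishing) for the main platform to act on."""
    score = threat_score(detected_elements)
    return score, score >= threshold

print(verdict(["urgent_wording"]))                                   # (0.2, False)
print(verdict(["brand_logo", "credential_form", "urgent_wording"]))  # (0.85, True)
```

In a real deployment the weights would be learned rather than hand-set, and the score would travel back over the API alongside the list of spotted anomalies, as described above.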
Quite a few big-name businesses, including eBay, Mimecast, and Brandwatch, have successfully enhanced their phishing countermeasures with Visual-AI. Given the solid results of this pairing, the technique is gaining extra momentum, and the list will probably keep expanding.
Challenges that throw a spanner in the works
When it comes to cybersecurity, AI is a double-edged sword. While it bolsters critical defensive mechanisms, it can backfire on organizations and individuals when wielded by malicious actors. The technology is also becoming democratized, with plenty of related frameworks and models available as open source.
Criminals are actively using such tools to create deepfakes, manipulate customer verification instruments, brute-force passwords, contrive bots that mimic real users, and take social engineering attacks to the next level by understanding would-be victims’ pain points. To top it off, AI facilitates context-aware cyberattacks in which malware blends with a contaminated network to evade detection.
Zooming out of the cybercrime narrative, another significant caveat lies in the very nature of today’s machine learning and deep learning systems. While they are incredibly good at identifying the slightest anomalies in gigantic data sets, they are not very effective at discovering new patterns independently. The final result therefore largely depends on the quality of the human-crafted training models at the core of these systems.
That being said, it is too early to take humans out of the AI security equation. The creation of a fully autonomous protection platform that needs zero supervision could become the next major milestone in the evolution of this technology. Hopefully, 2022 will lay the groundwork for that breakthrough.