Thursday August 16, 2018

AI-Driven Malware: DeepLocker at DEFCON

We get some firsthand information from our security man on the ground at DEFCON. Thanks to SCHTASK for the writeup!

Of the many briefings I attended at last week's Black Hat / DEFCON conferences, the DeepLocker briefing presented by IBM resonated with me the most. AI-driven technology has been a mainstay marketing point for most "Next Generation" security platforms. If I had a nickel for every time I heard "Deep Learning / AI-driven security" in the last two years, I'd be a rich man.

But what happens when the tables are turned and AI is used for malicious means? Researchers at IBM have an idea... Introducing: DeepLocker.

In a nutshell, DeepLocker is a highly evasive piece of kit that gives an attacker the ability to precisely strike specific targets via facial recognition, voice recognition, geolocation, or virtually any trigger condition an AI model can learn. It can lie dormant inside a benign application and never surface or be detected until its target appears, say, in front of a webcam. What does this mean? It means that cyber attacks can be prolific, undetectable, and surgical all at once.

"To demonstrate the implications of DeepLocker's capabilities, we designed a proof of concept in which we camouflage a well-known ransomware (WannaCry) in a benign video conferencing application so that it remains undetected by malware analysis tools, including antivirus engines and malware sandboxes. As a triggering condition, we trained the AI model to recognize the face of a specific person to unlock the ransomware and execute on the system. Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded AI model, but otherwise behave normally for all users except the intended target. When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload will be secretly executed, thanks to the victim's face, which was the preprogrammed key to unlock it."
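The "face as the key" idea from the quote above can be sketched in a few lines. This is a toy illustration, not IBM's implementation: the `fake_embedding` function stands in for a real face-recognition network, and XOR stands in for real encryption (a genuine payload would use something like AES). The point it demonstrates is that the decryption key is derived from the model's output, so neither the trigger condition nor the key ever appears in the binary.

```python
import hashlib

def fake_embedding(identity: str) -> bytes:
    # Hypothetical stand-in for a face-recognition model. A real attack
    # would quantize a DNN's embedding of a camera frame; this toy version
    # just derives stable bytes from an identity label.
    return hashlib.sha256(b"model-weights|" + identity.encode()).digest()

def derive_key(embedding: bytes) -> bytes:
    # Key = hash of the (quantized) model output. Only the intended
    # target's face reproduces this key.
    return hashlib.sha256(embedding).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR "encryption" keeps the sketch dependency-free; not real crypto.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: encrypt the payload under the key derived from the
# target's embedding, then ship only the ciphertext plus the model.
payload = b"malicious payload"
ciphertext = xor_bytes(payload, derive_key(fake_embedding("target")))

# Victim side: each camera frame yields an embedding; decryption only
# succeeds when the intended target's face regenerates the key.
unlocked = xor_bytes(ciphertext, derive_key(fake_embedding("target")))
garbage = xor_bytes(ciphertext, derive_key(fake_embedding("someone else")))
```

Because the condition is buried in the model's weights rather than written as an `if` statement, an analyst looking at the binary sees only a neural network and an opaque blob, which is why sandboxes and AV engines have nothing concrete to flag.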

It's important to understand that DeepLocker describes an entirely new class of malware: "any number of AI models could be plugged in to find the intended victim, and different types of malware could be used as the 'payload' that is hidden within the application."

The above was accomplished live, just 20 feet from me. IBM had its target simply walk by the infected laptop and the malware payload was immediately triggered.

Keep in mind that this research was conducted by IBM and is not (that we know of) in the wild. However, IBM estimates that AI-driven threats will make their public debut very soon.
