Lessons from DeepLocker for red team operations
Learn how AI can be used to craft super stealthy condition-based payloads for red team operations.
Imagine payloads so stealthy and evasive that they are almost impossible to detect. Wouldn’t you love to have such payloads and techniques in your arsenal?
Meet DeepLocker, a proof-of-concept developed by IBM Research and revealed at Black Hat USA 2018 that demonstrates how artificial intelligence (AI) and deep learning can be weaponized to create highly targeted and evasive malware.
Follow my journey of 100 Days of Red Team on WhatsApp, Telegram or Discord.
Unlike traditional malware, which usually relies on simple logic or hard-coded checks to decide when to execute, DeepLocker uses a Deep Neural Network (DNN) to control when and where its malicious payload is activated. The payload is kept encrypted, and the decryption key is derived from the attributes the DNN is trained to recognize, so the malware stays completely inactive, with its trigger conditions and payload hidden from static analysis, until those very specific conditions are met.
For example, DeepLocker can be trained to recognize a particular person’s face. When that person appears in front of the webcam, the malware activates. Until then, it behaves just like any normal app. This makes it extremely stealthy and hard to detect through traditional security measures.
So, how can red teams leverage this?
This opens up interesting possibilities for red team operators. In theory, they could use AI models to dynamically decide when and how to activate payloads based on environmental signals or user inputs, creating highly targeted, stealthy, and context-aware tools, also known as condition-based exploitation tooling. This kind of precision targeting can greatly reduce the risk of detection. It also ensures that payloads do not activate in unintended environments, reducing exposure and minimizing forensic traces.
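To make the idea concrete, here is a minimal, purely conceptual Python sketch of the "locking" pattern DeepLocker popularized, framed for an authorized simulation: a harmless placeholder payload is encrypted offline, and the decryption key is derived from a model's output, so it can only be recovered when the observed environment matches the intended target. Everything in the sketch is an assumption on my part rather than IBM's implementation: target_embedding is a stand-in for a real DNN, the quantization step is a toy key-derivation choice, and the numpy and cryptography libraries are arbitrary picks.

```python
# Conceptual sketch only, for an authorized red team simulation: the "payload"
# is a harmless string and the embedding function is a hypothetical stand-in
# for a real DNN (e.g., a face-recognition model). Requires: pip install cryptography numpy
import base64
import hashlib

import numpy as np
from cryptography.fernet import Fernet, InvalidToken


def target_embedding(label: str) -> np.ndarray:
    """Hypothetical stand-in for a DNN embedding of the observed environment.
    A real implementation would run a trained model; here we derive a
    deterministic vector from the label purely for illustration."""
    seed = int.from_bytes(hashlib.sha256(label.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(128)


def derive_key(embedding: np.ndarray) -> bytes:
    """Quantize the embedding, then hash it into a Fernet-compatible key.
    The key never ships with the tool; it only exists when the expected
    environment is observed. A real system would need a more robust,
    error-tolerant derivation than simple rounding."""
    quantized = np.round(embedding, 1).tobytes()
    return base64.urlsafe_b64encode(hashlib.sha256(quantized).digest())


# --- "Locking" step: done offline by the operator against the intended target ---
benign_payload = b"simulated action: write marker file for the exercise report"
lock_key = derive_key(target_embedding("intended_target"))
locked_blob = Fernet(lock_key).encrypt(benign_payload)

# --- "Unlocking" step: what the tool would attempt at run time ---
for observed in ("some_other_machine", "intended_target"):
    runtime_key = derive_key(target_embedding(observed))
    try:
        print(observed, "->", Fernet(runtime_key).decrypt(locked_blob))
    except InvalidToken:
        # Wrong environment: the blob stays opaque and nothing executes.
        print(observed, "-> conditions not met, payload remains locked")
```

Because the key material is never stored alongside the tool, anyone analyzing it outside the intended environment sees only an opaque encrypted blob, which is the core of the concealment idea.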
Another idea is to create C2 beacons with embedded AI triggers, so they only attempt outbound communication after an on-host model verifies key environmental identifiers (background noise, user presence, system configuration).
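As a companion sketch, and again purely conceptual, the gate below shows how such a beacon might stay silent unless a hypothetical on-host model reports high confidence that the expected environment is present. The environment_confidence function is an invented placeholder, not a real detector, and no collection or C2 traffic actually happens in this example.

```python
# Conceptual sketch only: an environment-gated beacon loop for an authorized
# exercise. environment_confidence() is a hypothetical stand-in for a
# lightweight on-host model scoring signals such as user presence or system
# configuration; beacon_once() is a placeholder and sends nothing.
import random
import time


def environment_confidence() -> float:
    """Hypothetical model output in [0, 1]: how closely the observed
    environment matches the intended target profile."""
    return random.random()


def beacon_once() -> None:
    """Placeholder for a check-in to the team's own C2 infrastructure."""
    print("conditions met: would attempt outbound check-in now")


CONFIDENCE_THRESHOLD = 0.9  # only beacon when the model is highly confident

for _ in range(5):  # bounded loop purely for the sake of the example
    if environment_confidence() >= CONFIDENCE_THRESHOLD:
        beacon_once()
    else:
        # Stay quiet: no outbound traffic, just wait and re-evaluate.
        time.sleep(1)
```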
If you want to dive deeper into the technical details of how DeepLocker works, and how it can be used to improve the effectiveness of red team attacks, below is the recording of the presentation "DeepLocker - Concealing Targeted Attacks with AI Locksmithing" by Dhilung Kirat, Jiyong Jang and Marc Ph. Stoecklin, delivered at Black Hat USA 2018.
Red Team Notes
DeepLocker introduces a new class of context-aware, low-noise techniques that can enhance tradecraft for advanced adversary simulation engagements.
For red teams, DeepLocker offers a model for what's possible when deep learning meets offensive security. By combining precision targeting with environmental awareness, red teams can build payloads that are not only effective, but stealthy, adaptive, and harder to detect.
Follow my journey of 100 Days of Red Team on WhatsApp, Telegram or Discord.