Use cases of large language models for red team tradecraft
Learn about different ways red teams can leverage Large Language Models (LLMs) during engagements.
Let's accept it: Large Language Models (LLMs) are here to stay, and they are going to change the way we do things. This is true for red team tradecraft as well. As red team operators we take pride in our knowledge, our skills, and our ability to execute sophisticated attacks by hand, but we must be ready to let go of some of that pride and embrace LLMs and generative AI to enhance our tradecraft wherever possible.
In this post, I discuss several scenarios where embracing LLMs can benefit red team professionals:
Social Engineering / Phishing - One of the primary use cases of LLMs for red team operations is their ability to generate realistic social engineering content, such as phishing emails, spear-phishing messages, and fake technical support communications. By leveraging the vast corpus of knowledge these models are trained on, LLMs can create convincing, contextually accurate messages that target specific individuals or organizations, which can be a critical component of initial access tactics.
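To make this concrete, generated pretext text is typically merged with per-target details gathered during reconnaissance before any human review. The sketch below is hypothetical (the template text and field names are illustrative, not from this post) and shows the kind of glue code a team might use around LLM output:

```python
# Hypothetical sketch: merging per-target recon data into an
# LLM-generated pretext template before human review.
# Template wording and field names are illustrative only.

PRETEXT_TEMPLATE = (
    "Hi {first_name},\n\n"
    "Following up on the {project} rollout at {company} - "
    "could you review the attached change summary before "
    "Friday's maintenance window?\n\n"
    "Thanks,\n{sender}"
)

def render_pretext(target: dict, sender: str) -> str:
    """Fill the pretext template with target-specific details."""
    return PRETEXT_TEMPLATE.format(
        first_name=target["first_name"],
        project=target["project"],
        company=target["company"],
        sender=sender,
    )

message = render_pretext(
    {"first_name": "Dana", "project": "VPN migration", "company": "Acme Corp"},
    sender="IT Operations",
)
```

In practice the template itself would come from an LLM prompt seeded with the target's industry and role, with an operator reviewing the final message.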
Tools Development - When it comes to tools, we love to reinvent the wheel. While much of that is done to understand the underlying mechanics, the time could often be better spent mastering something else. To this effect, LLMs can aid in the development of custom tools and scripts for red team engagements. They can be used to generate payloads, obfuscate code, or even write out detailed attack scenarios tailored to a specific environment. With the ability to rapidly generate and test code, LLMs accelerate the execution phase of red team exercises.
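Small utility routines that operators would otherwise write by hand are exactly the kind of code an LLM can produce in seconds. As a hypothetical example of such generated output, consider a single-byte XOR string obfuscator (deliberately weak, shown only to illustrate the tooling use case):

```python
# Hypothetical example of an LLM-generated helper: a simple
# single-byte XOR obfuscator/deobfuscator for payload strings.
# XOR with the same key is symmetric, so one function does both.

def xor_obfuscate(data: bytes, key: int) -> bytes:
    """XOR every byte with a single-byte key."""
    return bytes(b ^ key for b in data)

payload = b"calc.exe"
blob = xor_obfuscate(payload, key=0x5A)       # obfuscate
restored = xor_obfuscate(blob, key=0x5A)      # deobfuscate
```

The value here is not the trick itself but the turnaround time: the operator reviews and adapts the generated helper instead of writing it from scratch.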
OSINT Gathering & Analysis - LLMs can assist with data mining and processing large volumes of information from open-source intelligence (OSINT) or other publicly available data. For example, they can summarize reports, extract relevant details from documents, and even assist in creating profiles on targets based on their online presence. This can be a time-saving resource for red teams conducting thorough reconnaissance in preparation for an attack.
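Alongside the LLM itself, simple pre-processing of scraped data helps keep prompts focused. A hypothetical sketch (simplified regex, illustrative field names) that pulls email addresses and domains out of raw OSINT text to seed a target profile:

```python
# Hypothetical sketch: extracting emails and domains from raw
# OSINT text to seed a target profile before LLM summarization.
import re

# Simplified email pattern; not RFC-complete.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def build_profile(raw_text: str) -> dict:
    """Collect unique emails and their domains from scraped text."""
    emails = sorted(set(EMAIL_RE.findall(raw_text)))
    domains = sorted({e.split("@", 1)[1] for e in emails})
    return {"emails": emails, "domains": domains}

profile = build_profile(
    "Contact jdoe@acme-corp.com or hr@acme-corp.com; press: media@example.org"
)
```

The structured output can then be fed to an LLM with a summarization prompt, or used directly to prioritize targets.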
Training - Let's say you want to learn about persistence mechanisms in Linux. You could take a self-paced course, read a book, or use a search engine and work through each mechanism one by one. With LLMs, however, you can simply use a prompt like, “tell me in detail about linux persistence mechanisms for red team tradecraft” and it will generate detailed text explaining various persistence mechanisms. LLMs enable you to design your own curriculum with just a couple of prompts. Imagine the savings in training costs.
Reporting & Documentation - LLMs can support post-engagement activities by helping red team members craft clear, well-structured reports and generate supporting documentation.
Red Team Notes
A red team can leverage Large Language Models (LLMs) for the following:
- Create convincing phishing, spear-phishing, and fake technical support messages tailored to target individuals or organizations.
- Rapidly generate payloads, obfuscate code, and write attack scenarios for specific environments.
- Summarize reports, extract relevant details, and build profiles from open-source intelligence.
- Design training curricula customized to the team's requirements.
- Generate convincing reports and documentation.
Follow my journey of 100 Days of Red Team on WhatsApp, Telegram or Discord.