Exfiltrating secrets via public CI logs (working technique)
Learn how to exfiltrate GitHub secrets by printing them to public CI logs. Includes a hands-on lab.
GitHub Actions workflows often rely on secrets such as API tokens, OAuth credentials, or cloud access keys. If these are not handled carefully, they can accidentally end up in public CI logs, where anyone can find and use them.
These leaked credentials are not always harmless test tokens; some are valid OAuth tokens belonging to large organizations. With access to such a token, a red team operator can clone private repositories and read internal source code. There is no need to break into any servers or accounts: the operator just reads public logs.
Follow my journey of 100 Days of Red Team on WhatsApp, Telegram or Discord.
This kind of attack is surprisingly simple to pull off. Anyone can go to a public GitHub repository, click on the "Actions" tab, and view the logs from the latest workflow runs. If a workflow contains something like `echo $SECRET_TOKEN`, that token will show up in plain text in the logs. In some cases, developers include these `echo` commands for debugging purposes, not realizing they're exposing sensitive data.
To better understand how this works, you can try it in a test environment. Create a public GitHub repository and set up a simple GitHub Actions workflow. Add a secret through the GitHub UI, then echo that secret in the CI job. When the job runs, open the logs and check what was printed (as discussed below, GitHub masks a directly echoed secret, which is why the lab obfuscates it first). In a real-world scenario, such a secret could be an AWS key, a GitHub token, or a cloud API key.
Or you can just try out this lab.
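As a sketch, the test workflow described above might look like the following. The workflow name and `MY_SECRET` are placeholders; `MY_SECRET` is whatever name you give the secret in the repository's settings:

```yaml
name: secret-echo-demo
on: push

jobs:
  demo:
    runs-on: ubuntu-latest
    steps:
      # A debug-style echo like this is what puts the value into the
      # public log. For a secret registered through the GitHub UI, the
      # raw value will be masked as *** in the log output.
      - name: Echo the secret (bad practice)
        env:
          SECRET_TOKEN: ${{ secrets.MY_SECRET }}
        run: echo "token is $SECRET_TOKEN"
```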
GitHub masks most secrets by default (even if they are base64 encoded). If you try to echo a secret passed through `${{ secrets.MY_SECRET }}`, GitHub will automatically replace it with `***` in the logs. However, secrets can still leak in other ways. For example, if a secret is manipulated before being printed, written to a file, or sent as part of a debug message, GitHub's masking might not catch it. Also, secrets that are hardcoded in workflows or accidentally committed to the repo won't be masked at all.
In the lab, we make this work by XORing the secret before echoing it to public logs.
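A minimal sketch of that idea, using a made-up key and token (hex-encoding the XORed bytes so they survive as printable log text):

```python
def xor_hex(data: str, key: str) -> str:
    """XOR each byte of `data` with the repeating `key`, return hex.

    Because the printed bytes no longer match the registered secret,
    GitHub's log masking does not recognize (and redact) them.
    """
    raw, k = data.encode(), key.encode()
    return bytes(b ^ k[i % len(k)] for i, b in enumerate(raw)).hex()


def unxor_hex(hexed: str, key: str) -> str:
    """Reverse the transform on the operator's side (XOR is symmetric)."""
    raw, k = bytes.fromhex(hexed), key.encode()
    return bytes(b ^ k[i % len(k)] for i, b in enumerate(raw)).decode()


# In the CI job, only the XORed hex string is echoed, so the masking
# filter never sees the original secret value.
leaked = xor_hex("ghp_exampletoken123", "labkey")
print(leaked)                        # safe-looking hex in the public log
print(unxor_hex(leaked, "labkey"))   # recovered offline by the operator
```

The operator only needs the (non-secret) key used in the workflow to reverse the transform offline.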
This makes public logs a valuable source of information during red team operations. A red team operator can explore public GitHub repositories, browse workflow logs, and look for signs of exposed data.
What makes this technique dangerous is how easy it is to automate. Red team operators can write simple scripts that crawl GitHub, search public CI logs, and look for patterns like `echo`, `printenv`, or other commands that might leak sensitive values. Since many organizations rely on public or open-source CI tooling, even small mistakes can lead to massive leaks.
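The scanning side can be sketched as a simple pattern filter over workflow text. The patterns below are illustrative assumptions, not an exhaustive list, and the fetching step (for example via GitHub's code-search API) is left out:

```python
import re

# Heuristic patterns for steps that tend to print environment values
# into the log: echoing a variable whose name suggests a credential,
# or dumping the whole environment with printenv / env.
LEAK_PATTERNS = re.compile(
    r"\b(echo\s+\$\{?\w*(TOKEN|SECRET|KEY|PASS)\w*|printenv|env\s*$)",
    re.IGNORECASE | re.MULTILINE,
)


def looks_leaky(workflow_text: str) -> bool:
    """Return True if the workflow text contains a step that is likely
    to print sensitive environment variables into the CI log."""
    return bool(LEAK_PATTERNS.search(workflow_text))


# A crawler would fetch candidate workflow files and feed each one here.
sample = "steps:\n  - run: echo $SECRET_TOKEN\n"
print(looks_leaky(sample))  # True
```

In practice a crawler would pair this with the repository's public run logs, since the log output (not the workflow file) is where the leaked value lands.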
When secrets are leaked, they can be used to move deeper into a target’s infrastructure—especially when they give access to cloud environments or private codebases.
The `tj-actions` incident is a good example of how small mistakes in CI/CD pipelines can lead to serious breaches. It's a reminder that secrets should always be handled with extreme care, even in trusted automation workflows.
Red Team Notes
- CI/CD systems like GitHub Actions often use secrets (API keys, tokens, etc.) in builds. If not handled properly, these secrets can leak into public logs — making them visible to anyone.
- GitHub now masks secrets automatically (`***`) in logs, even for encoded values (e.g., base64). However, secrets can still leak if:
  - They're hardcoded in workflows or committed to the repo
  - They're obfuscated in uncommon ways (e.g., XOR)
  - They're printed indirectly (e.g., through debug logs or temp files)