An S3 account ID enumeration and bucket discovery tool
Given a single accessible S3 bucket, bucky extracts the 12-digit AWS account ID that owns it, then discovers additional buckets belonging to the same account by fuzzing bucket names against a wordlist — all in one command.
- Enumerates the account ID — uses the `s3:ResourceAccount` IAM condition key to brute-force the 12-digit AWS account ID one digit at a time through inline STS session policies
- Discovers more buckets — fuzzes bucket names from a wordlist with concurrent workers to find additional buckets owned by the same account
- Reports everything — account ID, bucket region, and all discovered buckets
S3 bucket names originally shared a single global namespace, which gave rise to bucketsquatting — attackers could squat on names they expected organizations to create. AWS addressed this by rolling out account-regional namespaces, a naming scheme that ties each bucket to its owning account and region:
```
{name}-{accountID}-{region}-an
```
A bucket called `myapp-123456789123-eu-north-1-an` would live at `https://myapp-123456789123-eu-north-1-an.s3.eu-north-1.amazonaws.com`. AWS now recommends this format for new buckets, though it is not yet the default. (For background on how the new namespace kills bucketsquatting, see One Cloud Please.)
The trade-off is that the account ID and region are now baked into the bucket name itself. Once an attacker obtains a target's account ID, they can pair it with a wordlist and systematically construct valid bucket URLs — and S3 will confirm whether each one exists, even if the bucket is private. That response alone is useful reconnaissance.
Bucky exploits exactly this. Drawing on research from Pwned Labs, it first recovers the 12-digit account ID from any accessible bucket using the s3-account-search technique, then brute-forces the {name}-{accountID}-{region}-an pattern to uncover the target's broader S3 footprint.
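Given the naming scheme above, candidate construction is mechanical. The following is a minimal sketch (the `candidateURL` helper is hypothetical, not bucky's code) of turning a wordlist entry, account ID, and region into a virtual-hosted bucket URL:

```go
package main

import "fmt"

// candidateURL builds a virtual-hosted S3 URL for the account-regional
// naming scheme {name}-{accountID}-{region}-an described above.
func candidateURL(name, accountID, region string) string {
	bucket := fmt.Sprintf("%s-%s-%s-an", name, accountID, region)
	return fmt.Sprintf("https://%s.s3.%s.amazonaws.com", bucket, region)
}

func main() {
	// Matches the example bucket from the text.
	fmt.Println(candidateURL("myapp", "123456789123", "eu-north-1"))
	// → https://myapp-123456789123-eu-north-1-an.s3.eu-north-1.amazonaws.com
}
```

A fuzzer then only needs to iterate a wordlist through this function and probe each URL.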
- You create an IAM role in your own AWS account that has `s3:ListBucket` or `s3:GetObject` on the target bucket
- You call `sts:AssumeRole` on that role, passing an inline session policy with an `s3:ResourceAccount` condition like `"1*"`
- If the target bucket's owning account starts with `1`, the `HeadBucket` call succeeds; otherwise it returns 403
- Repeat for each digit position (trying 0–9) until all 12 digits are discovered
- The inline session policy acts as a permissions boundary — it intersects with the role's permissions, so access is granted only when the condition matches
This requires at most 120 API calls (12 positions × 10 digits) and typically completes in under a minute.
```
go install github.com/umair9747/bucky@latest
```

Or build from source:

```
git clone https://github.com/umair9747/bucky.git
cd bucky
go build -o bucky .
sudo mv bucky /usr/local/bin/
```

To update an existing installation:

```
bucky --update
```

This pulls and installs the latest version from the repository.
Before using bucky, you need an IAM role in your own AWS account that can access the target S3 bucket. The target bucket must be publicly accessible or have a bucket policy that allows access from your account.
Create `trust-policy.json` — this allows your IAM user to assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<YOUR_ACCOUNT_ID>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Replace `<YOUR_ACCOUNT_ID>` with your 12-digit AWS account ID. You can scope the principal down to a specific IAM user or role ARN instead of `:root`.
Create `s3-policy.json` — the role needs broad S3 read access (the `s3:ResourceAccount` condition in bucky's inline session policy handles account-level filtering):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": "*"
    }
  ]
}
```

Create the role and attach the policy:

```
aws iam create-role \
  --role-name bucky-role \
  --assume-role-policy-document file://trust-policy.json

aws iam put-role-policy \
  --role-name bucky-role \
  --policy-name S3ReadAccess \
  --policy-document file://s3-policy.json
```

Retrieve the role ARN:

```
aws iam get-role --role-name bucky-role --query 'Role.Arn' --output text
# Output: arn:aws:iam::123456789012:role/bucky-role
```

If your IAM user doesn't already have `sts:AssumeRole` permission, attach it:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/bucky-role"
    }
  ]
}
```

The primary way to use bucky — enumerate the account ID from a known bucket, then fuzz for more buckets in one shot:
```
bucky \
  --role-arn arn:aws:iam::123456789012:role/bucky-role \
  --bucket target-bucket \
  --wordlist wordlists/default.txt \
  --access-key AKIA... \
  --secret-key wJal...
```

With a known object key (improves reliability):
```
bucky \
  --role-arn arn:aws:iam::123456789012:role/bucky-role \
  --bucket s3://target-bucket/index.html \
  --wordlist wordlists/default.txt \
  --workers 20
```

Multiple buckets (comma-separated):
```
bucky \
  --role-arn arn:aws:iam::123456789012:role/bucky-role \
  --bucket bucket1,bucket2,bucket3 \
  --wordlist wordlists/default.txt
```

Multiple buckets from a file:
```
bucky \
  --role-arn arn:aws:iam::123456789012:role/bucky-role \
  --bucket-file targets.txt \
  --wordlist wordlists/default.txt
```

Save results to JSON:
```
bucky --bucket target-bucket --wordlist wordlists/default.txt --json
bucky --bucket target-bucket --wordlist wordlists/default.txt --json --output results.json
```

Output:
```
=== Phase 1: Account ID Enumeration ===
[*] Verifying access to bucket: target-bucket
[+] Access confirmed
[+] Bucket region: us-west-2
[*] Enumerating account ID...
[██████████████████████████████] 12/12 | 675351422352
[+] Account ID: 675351422352

=== Phase 2: Bucket Discovery ===
[*] Fuzzing buckets for account: 675351422352
[*] Loaded 312 candidates from wordlist
[+] FOUND: target-backups
[+] FOUND: target-logs
[██████████████████████████████] 312/312 | 2 found

=== Summary ===
Account ID : 675351422352
Source     : target-bucket
Region     : us-west-2
Discovered : 2 additional bucket(s)
  - target-backups
  - target-logs
```
If you only need the account ID:

```
bucky enum \
  --role-arn arn:aws:iam::123456789012:role/bucky-role \
  --bucket target-bucket
```

Multiple buckets and JSON output work with subcommands too:
```
bucky enum --bucket bucket1,bucket2 --json
bucky enum --bucket-file targets.txt --json --output results.json
```

If you already have an account ID and want to discover buckets:
```
bucky fuzz \
  --role-arn arn:aws:iam::123456789012:role/bucky-role \
  --account-id 675351422352 \
  --wordlist wordlists/default.txt \
  --workers 20
```

All flags can be set via environment variables. Flags take precedence when both are set.
| Variable | Flag | Description |
|---|---|---|
| `AWS_ACCESS_KEY_ID` | `--access-key` | AWS access key ID |
| `AWS_SECRET_ACCESS_KEY` | `--secret-key` | AWS secret access key |
| `AWS_SESSION_TOKEN` | `--session-token` | AWS session token (for temporary credentials) |
| `AWS_REGION` | `--region` | AWS region (default: `us-east-1`) |
| `AWS_DEFAULT_REGION` | `--region` | Fallback if `AWS_REGION` is not set |
| `AWS_PROFILE` | `--profile` | AWS named profile from `~/.aws/credentials` |
| `BUCKY_ROLE_ARN` | `--role-arn` | ARN of the IAM role to assume |
| `BUCKY_BUCKET` | `--bucket` | Target S3 bucket name |
| `BUCKY_ACCOUNT_ID` | `--account-id` | Target AWS account ID (fuzz only) |
| `BUCKY_WORDLIST` | `--wordlist` | Path to wordlist file |
| `BUCKY_WORKERS` | `--workers` | Concurrent fuzzing workers (default: 10) |
```
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=wJal...
export BUCKY_ROLE_ARN=arn:aws:iam::123456789012:role/bucky-role

# Now just specify the target
bucky --bucket target-bucket --wordlist wordlists/default.txt
```

Bucky ships with a default wordlist at `wordlists/default.txt` containing ~300 common S3 bucket naming patterns:
- Generic names (`backups`, `logs`, `data`, `assets`, `uploads`)
- AWS service patterns (`cloudtrail-logs`, `terraform-state`, `lambda-artifacts`)
- Environment variants (`prod-logs`, `staging-data`, `dev-backups`)
- Application patterns (`api`, `web-assets`, `cdn-origin`)
- Infrastructure (`ci-artifacts`, `build-output`, `docker-images`)
For targeted engagements, generate a custom wordlist by prepending the organization name:

```
ORG="acme"
while read -r line; do
  [[ "$line" =~ ^#.*$ || -z "$line" ]] && continue
  echo "${ORG}-${line}"
  echo "${line}-${ORG}"
  echo "${ORG}${line}"
done < wordlists/default.txt > wordlists/acme.txt
```

Any wordlist compatible with tools like ffuf or gobuster works. Format: one bucket name per line; `#` comments and blank lines are ignored.
| Command | Description |
|---|---|
| `bucky` | Full workflow — enumerate account ID, then fuzz for more buckets |
| `bucky enum` | Enumerate the 12-digit account ID from a known bucket |
| `bucky fuzz` | Fuzz bucket names against a known account ID |
Global flags (all commands):
| Flag | Description |
|---|---|
| `--access-key` | AWS access key ID |
| `--secret-key` | AWS secret access key |
| `--session-token` | AWS session token |
| `--region` | AWS region (default: `us-east-1`) |
| `--profile` | AWS named profile |
| `--role-arn` | ARN of the IAM role to assume |
| `--json` | Save results to JSON file |
| `--output` | JSON output file path (default: `bucky-{uuid}.json`) |
`bucky` (full workflow):

| Flag | Description |
|---|---|
| `--bucket` | Target S3 bucket(s), comma-separated |
| `--bucket-file` | File with bucket names/URLs, one per line |
| `--wordlist` | Path to wordlist file |
| `--workers` | Concurrent workers (default: 10) |
`bucky enum`:

| Flag | Description |
|---|---|
| `--bucket` | Target S3 bucket(s), comma-separated |
| `--bucket-file` | File with bucket names/URLs, one per line |
`bucky fuzz`:

| Flag | Description |
|---|---|
| `--account-id` | Target 12-digit AWS account ID |
| `--wordlist` | Path to wordlist file |
| `--workers` | Concurrent workers (default: 10) |
| `--regions` | Comma-separated regions for name mutations |
The `--bucket` flag and `--bucket-file` accept multiple input formats. Bucky automatically extracts the bucket name (and optional key) from:

| Format | Example |
|---|---|
| Plain name | `my-bucket` |
| S3 URI | `s3://my-bucket/path/to/key` |
| Virtual-hosted URL | `https://my-bucket.s3.us-west-2.amazonaws.com/key` |
| Path-style URL | `https://s3.us-west-2.amazonaws.com/my-bucket/key` |
Multiple buckets can be passed comma-separated via `--bucket`:

```
--bucket bucket1,s3://bucket2/key,https://bucket3.s3.amazonaws.com/
```

Or listed one per line in a file via `--bucket-file`:

```
my-bucket
s3://another-bucket/index.html
https://third.s3.us-east-1.amazonaws.com/
```
When multiple buckets resolve to the same account ID, bucky deduplicates and runs fuzzing only once per unique account.
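A simplified parser for these input formats can be sketched as follows; `parseBucket` is a hypothetical illustration, and bucky's real extraction logic may handle more edge cases:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseBucket extracts the bucket name and optional object key from a
// plain name, an s3:// URI, a virtual-hosted URL, or a path-style URL.
func parseBucket(input string) (bucket, key string) {
	switch {
	case strings.HasPrefix(input, "s3://"):
		// s3://bucket/key
		parts := strings.SplitN(strings.TrimPrefix(input, "s3://"), "/", 2)
		bucket = parts[0]
		if len(parts) == 2 {
			key = parts[1]
		}
	case strings.HasPrefix(input, "http://") || strings.HasPrefix(input, "https://"):
		u, err := url.Parse(input)
		if err != nil {
			return "", ""
		}
		host := u.Hostname()
		path := strings.TrimPrefix(u.Path, "/")
		if strings.HasPrefix(host, "s3.") || strings.HasPrefix(host, "s3-") {
			// Path-style: bucket is the first path segment.
			parts := strings.SplitN(path, "/", 2)
			bucket = parts[0]
			if len(parts) == 2 {
				key = parts[1]
			}
		} else if i := strings.Index(host, ".s3"); i > 0 {
			// Virtual-hosted: bucket is the host label(s) before ".s3".
			bucket = host[:i]
			key = path
		}
	default:
		bucket = input // plain bucket name
	}
	return bucket, key
}

func main() {
	for _, in := range []string{
		"my-bucket",
		"s3://my-bucket/path/to/key",
		"https://my-bucket.s3.us-west-2.amazonaws.com/key",
		"https://s3.us-west-2.amazonaws.com/my-bucket/key",
	} {
		b, k := parseBucket(in)
		fmt.Printf("%-50s bucket=%s key=%s\n", in, b, k)
	}
}
```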
Use `--json` to save results to a JSON file. The default filename is `bucky-{uuid}.json` in the current directory, or specify a path with `--output`:

```
bucky --bucket target-bucket --wordlist wordlists/default.txt --json
bucky --bucket target-bucket --wordlist wordlists/default.txt --json --output results.json
```

Output format:
```json
{
  "timestamp": "2024-01-15T10:30:00Z",
  "results": [
    {
      "bucket": "target-bucket",
      "account_id": "675351422352",
      "region": "ap-south-1",
      "discovered_buckets": [
        "dev-675351422352-ap-south-1-an.s3.ap-south-1.amazonaws.com",
        "prod-675351422352-ap-south-1-an.s3.ap-south-1.amazonaws.com"
      ]
    }
  ]
}
```

Bucky automatically detects the region of target buckets. When S3 returns a 301 redirect (the bucket lives in a different region than `--region`), bucky resolves the correct region via the `x-amz-bucket-region` HTTP response header and caches it for all subsequent requests. No manual region configuration is needed for the target bucket.
If you have any questions or feedback about bucky, or just want to connect with me, feel free to reach out via LinkedIn or Email.
