In the previous post, we explored S3 Access Control Lists (ACLs)
and learned why AWS recommends disabling them for most modern use cases.
Now it's time to dive into the proper way of securing your S3 buckets:
IAM policies and bucket policies.
Unlike ACLs, which are considered legacy and can become operationally chaotic,
IAM policies offer centralized, scalable, and auditable access management.
They're the foundation of modern AWS security architecture and the tool
you should reach for when securing your S3 resources.
Today we'll focus on IAM policies - the backbone of AWS access control that actually makes sense.
IAM Policies vs Bucket Policies - What's the Difference?
Before we dive deep, let's clarify the two main policy types for S3:
IAM Policies are attached to IAM users, groups, or roles and define what actions
those identities can perform across AWS services. They're identity-centric -
"what can this user/role do?"
Bucket Policies are attached directly to S3 buckets and define who can
access that specific bucket and what they can do with it. They're resource-centric -
"who can access this bucket?"
Both use the same JSON policy language, but they serve different purposes
and are evaluated together by AWS when determining access. Think of them as
complementary layers of security rather than competing mechanisms.
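That "evaluated together" part can be sketched in a few lines of Python. This is a toy model of my own (real AWS evaluation also factors in SCPs, session policies, and permission boundaries), but it captures the core rule: an explicit Deny anywhere wins, otherwise any matching Allow grants access, otherwise you get the implicit deny.

```python
from fnmatch import fnmatch

def evaluate(statements, action, resource):
    """Toy model of AWS policy evaluation. Identity-policy and
    bucket-policy statements are pooled into one list: an explicit
    Deny anywhere wins, otherwise any matching Allow grants access,
    otherwise the implicit deny applies."""
    allowed = False
    for stmt in statements:
        if not any(fnmatch(action, a) for a in stmt["Action"]):
            continue
        if not any(fnmatch(resource, r) for r in stmt["Resource"]):
            continue
        if stmt["Effect"] == "Deny":
            return "ExplicitDeny"  # a single explicit deny overrides all allows
        allowed = True
    return "Allow" if allowed else "ImplicitDeny"

statements = [
    # From an IAM policy attached to the caller's role:
    {"Effect": "Allow", "Action": ["s3:GetObject"],
     "Resource": ["arn:aws:s3:::my-secure-bucket/*"]},
    # From the bucket policy on the bucket itself:
    {"Effect": "Deny", "Action": ["s3:*"],
     "Resource": ["arn:aws:s3:::my-secure-bucket/secrets/*"]},
]

print(evaluate(statements, "s3:GetObject",
               "arn:aws:s3:::my-secure-bucket/public/a.txt"))  # Allow
print(evaluate(statements, "s3:GetObject",
               "arn:aws:s3:::my-secure-bucket/secrets/key"))   # ExplicitDeny
```

Note how the bucket policy's Deny on the secrets/ prefix overrides the IAM policy's broader Allow - that's the "complementary layers" idea in action.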
Understanding IAM Policy Structure
IAM policies follow a standardized JSON structure that's both powerful and
relatively straightforward once you understand the components:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowS3ReadAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-secure-bucket",
"arn:aws:s3:::my-secure-bucket/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": "203.0.113.0/24"
}
}
}
]
}
Let's break this down:
- Version: Use "2012-10-17" for all new policies. It's the current policy language version; the only other value, "2008-10-17", is legacy.
- Statement: An array of individual permission statements
- Sid: Optional statement identifier for easier management
- Effect: Either "Allow" or "Deny"
- Action: The specific AWS API actions being granted/denied
- Resource: The AWS resources the policy applies to
- Condition: Optional conditions that must be met for the policy to apply
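To make the Condition block above less abstract: the IpAddress operator is essentially a CIDR containment check on the caller's source IP. Here's a minimal Python illustration of the comparison AWS performs (the function name is mine, not an AWS API):

```python
from ipaddress import ip_address, ip_network

def ip_condition_matches(source_ip: str, cidr: str) -> bool:
    # Mirrors "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}:
    # the request's source IP must fall inside the allowed CIDR block.
    return ip_address(source_ip) in ip_network(cidr)

print(ip_condition_matches("203.0.113.42", "203.0.113.0/24"))  # True: inside the range
print(ip_condition_matches("198.51.100.7", "203.0.113.0/24"))  # False: outside the range
```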
Essential S3 Actions and Resource Best Practices
Understanding the right S3 actions is crucial for creating effective policies.
Here are the most commonly used ones:
Bucket-Level Actions
{
"Action": [
"s3:ListBucket", // List objects in bucket
"s3:GetBucketLocation", // Get bucket region
"s3:GetBucketVersioning", // Check versioning status
"s3:ListBucketVersions" // List object versions
],
"Resource": "arn:aws:s3:::my-bucket"
}
Object-Level Actions
{
"Action": [
"s3:GetObject", // Download objects
"s3:PutObject", // Upload objects
"s3:DeleteObject", // Delete objects
"s3:GetObjectVersion", // Get specific object versions
"s3:DeleteObjectVersion" // Delete specific versions
],
"Resource": "arn:aws:s3:::my-bucket/*"
}
Notice how bucket-level actions use the bucket ARN (arn:aws:s3:::my-bucket),
while object-level actions use the object ARN pattern (arn:aws:s3:::my-bucket/*).
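You can convince yourself that these two ARN forms never overlap with a quick wildcard check (a toy illustration of the matching, not how AWS implements it):

```python
from fnmatch import fnmatch

bucket_arn = "arn:aws:s3:::my-bucket"
object_pattern = "arn:aws:s3:::my-bucket/*"

# s3:ListBucket is authorized against the bucket ARN, which the
# object pattern does NOT match (no "/" in the bucket ARN)...
print(fnmatch(bucket_arn, object_pattern))  # False

# ...while s3:GetObject is authorized against an object ARN,
# which only the object pattern matches.
print(fnmatch("arn:aws:s3:::my-bucket/reports/q1.csv", object_pattern))  # True
```

This is why a policy that grants s3:ListBucket on my-bucket/* silently does nothing: the pattern never matches the resource the action is checked against.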
Practical IAM Policy Examples
Read-Only Access Policy
Perfect for analytics tools or monitoring systems that need to read data
but shouldn't modify anything:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"${aws_s3_bucket.analytics_data.arn}",
"${aws_s3_bucket.analytics_data.arn}/*"
]
}
]
}
In Terraform, always use resource references:
# Good - uses Terraform resource reference
resource "aws_iam_policy" "analytics_read" {
name = "AnalyticsReadAccess"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"s3:GetObject",
"s3:ListBucket"
]
Resource = [
aws_s3_bucket.analytics_data.arn,
"${aws_s3_bucket.analytics_data.arn}/*"
]
}
]
})
}
# Bad - hardcoded bucket name
resource "aws_iam_policy" "analytics_read_bad" {
policy = jsonencode({
Resource = [
"arn:aws:s3:::company-analytics-data", # Don't do this!
"arn:aws:s3:::company-analytics-data/*"
]
})
}
Application Upload Policy
For applications that need to upload files but shouldn't be able to
delete or list existing content:
# Define the policy using Terraform data source
data "aws_iam_policy_document" "app_upload" {
statement {
effect = "Allow"
actions = [
"s3:PutObject",
"s3:PutObjectAcl"
]
resources = ["${aws_s3_bucket.app_uploads.arn}/*"]
condition {
test = "StringEquals"
variable = "s3:x-amz-server-side-encryption"
values = ["AES256"]
}
}
}
resource "aws_iam_policy" "app_upload" {
name = "AppUploadPolicy"
policy = data.aws_iam_policy_document.app_upload.json
}
This approach ensures your policy always references the correct bucket,
regardless of environment or bucket naming changes.
Time-Based Access Policy
Sometimes you need to restrict access to a specific time window. Keep in mind
that aws:CurrentTime holds a full timestamp, so conditions like these define a
one-off window (for example, during a migration), not a recurring daily schedule:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::business-hours-data/*"
],
"Condition": {
"DateGreaterThan": {
"aws:CurrentTime": "2025-06-01T08:00:00Z"
},
"DateLessThan": {
"aws:CurrentTime": "2025-06-01T18:00:00Z"
}
}
}
]
}
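Under the hood, DateGreaterThan and DateLessThan are plain timestamp comparisons against the request time. A rough Python equivalent of the range check (illustrative only, not AWS code):

```python
from datetime import datetime, timezone

def within_window(current: datetime, start: datetime, end: datetime) -> bool:
    # DateGreaterThan + DateLessThan combine into a simple
    # open-interval range check on the request timestamp.
    return start < current < end

start = datetime(2025, 6, 1, 8, 0, tzinfo=timezone.utc)
end = datetime(2025, 6, 1, 18, 0, tzinfo=timezone.utc)

print(within_window(datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc), start, end))  # True
print(within_window(datetime(2025, 6, 1, 19, 0, tzinfo=timezone.utc), start, end))  # False
```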
Common Pitfalls and Security Considerations
The Wildcard Trap
One of the most dangerous mistakes is using overly broad wildcards:
// DON'T DO THIS
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
This grants unlimited S3 access across your entire AWS account.
Always be specific about actions and resources.
Resource ARN Best Practices
Avoid hardcoded bucket names in your policies! Using fixed bucket names like
arn:aws:s3:::my-bucket
creates several problems:
- Policies become environment-specific and hard to reuse
- Risk of accidentally granting access to wrong buckets
- Makes bucket renaming nearly impossible
- Creates maintenance nightmares across multiple environments
Instead, use variables and references.
Since this is an overview blog post, I sometimes use hardcoded bucket names for the sake of simplicity.
Missing Bucket vs Object Permissions
A common source of confusion is forgetting that listing bucket contents
and reading objects require different permissions on different resources:
// Correct approach
{
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::my-bucket" // Note: no /*
},
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-bucket/*" // Note: with /*
}
]
}
Cross-Account Access Considerations
When granting cross-account access, always add explicit conditions so that only
principals from the expected accounts are allowed. The aws:PrincipalAccount key
matches the account the calling principal belongs to:
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::shared-bucket/*",
"Condition": {
"StringEquals": {
"aws:PrincipalAccount": ["123456789012", "987654321098"]
}
}
}
Implementing S3 IAM Policies with Terraform
Terraform makes managing IAM policies much more maintainable than
clicking through the AWS console. Here's how to implement the policies
we discussed:
Basic Policy Attachment
# Create the IAM policy
resource "aws_iam_policy" "s3_read_only" {
name = "S3ReadOnlyAccess"
description = "Read-only access to specific S3 bucket"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"s3:GetObject",
"s3:ListBucket"
]
Resource = [
aws_s3_bucket.app_data.arn,
"${aws_s3_bucket.app_data.arn}/*"
]
}
]
})
}
# Attach to a role
resource "aws_iam_role_policy_attachment" "s3_read_only" {
role = aws_iam_role.app_role.name
policy_arn = aws_iam_policy.s3_read_only.arn
}
Using Data Sources for Flexibility
For more complex scenarios, you can use Terraform data sources
to make your policies more dynamic:
data "aws_iam_policy_document" "s3_upload_policy" {
statement {
effect = "Allow"
actions = [
"s3:PutObject",
"s3:PutObjectAcl"
]
resources = [
"${aws_s3_bucket.uploads.arn}/uploads/${var.environment}/*"
]
condition {
test = "StringEquals"
variable = "s3:x-amz-server-side-encryption"
values = ["AES256"]
}
condition {
test = "IpAddress"
variable = "aws:SourceIp"
values = var.allowed_ip_ranges
}
}
}
resource "aws_iam_policy" "s3_upload" {
name = "S3UploadPolicy-${var.environment}"
policy = data.aws_iam_policy_document.s3_upload_policy.json
}
Policy Validation and Testing
Always validate your policies before applying them. Terraform can help
catch syntax errors, but logic errors require testing:
# Locals like these can feed custom checks or outputs
locals {
# Regex describing a well-formed bucket ARN
bucket_arn_pattern = "^arn:aws:s3:::[a-z0-9][a-z0-9\\-]*[a-z0-9]$"
# True when the bucket name follows the naming convention
valid_bucket_name = can(regex("^[a-z0-9][a-z0-9\\-]*[a-z0-9]$", var.bucket_name))
}
# Use validation blocks
variable "bucket_name" {
type = string
description = "S3 bucket name"
validation {
condition = can(regex("^[a-z0-9][a-z0-9\\-]*[a-z0-9]$", var.bucket_name))
error_message = "Bucket name must contain only lowercase letters, numbers, and hyphens."
}
}
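That same naming regex is easy to sanity-check outside Terraform before you bake it into a validation block; here in Python:

```python
import re

# Same pattern as the Terraform validation block above:
# lowercase letters, digits, and hyphens; must start and end alphanumeric.
BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9\-]*[a-z0-9]$")

print(bool(BUCKET_NAME.match("company-analytics-data")))  # True
print(bool(BUCKET_NAME.match("My_Bucket")))               # False: uppercase and underscore
print(bool(BUCKET_NAME.match("bucket-")))                 # False: trailing hyphen
```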
Policy Testing and Validation
The AWS IAM Policy Simulator is your best friend for testing policies
before deployment. It allows you to simulate API calls and see whether
they would be allowed or denied by your policies.
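The simulator is also scriptable. The sketch below only builds the parameters for boto3's iam.simulate_custom_policy call; the call itself is commented out because it needs real AWS credentials, and the bucket name is a placeholder:

```python
import json

# An example policy to test, serialized the way the API expects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my-bucket/*"],  # placeholder bucket name
    }],
}

# Keyword arguments for iam_client.simulate_custom_policy(**simulation_request).
simulation_request = {
    "PolicyInputList": [json.dumps(policy)],
    "ActionNames": ["s3:GetObject", "s3:PutObject"],
    "ResourceArns": ["arn:aws:s3:::my-bucket/some-key"],
}

# import boto3
# iam = boto3.client("iam")
# for result in iam.simulate_custom_policy(**simulation_request)["EvaluationResults"]:
#     print(result["EvalActionName"], result["EvalDecision"])
```

With the example policy, s3:GetObject should come back "allowed" and s3:PutObject "implicitDeny" - the same logic you'd verify by hand in the console simulator.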
For Terraform users, consider using the aws_iam_policy_document
data source
instead of hardcoded JSON - it provides better syntax validation and
makes policies more readable.
Summary
IAM policies are the modern, scalable way to secure S3 resources.
Unlike ACLs, they provide centralized management, powerful conditions,
and excellent auditability. Key takeaways:
- Use specific actions and resources - avoid wildcards
- Remember bucket-level vs object-level permissions require different resource ARNs
- Leverage conditions for additional security controls
- Test policies thoroughly before production deployment
- Use Terraform for maintainable, version-controlled policy management
The combination of IAM policies and bucket policies gives you fine-grained
control over who can access your S3 resources and what they can do with them.
In the next article, we'll explore bucket policies and how they complement
IAM policies to create a comprehensive S3 security strategy.
As always, even with a two-month-old keeping me busy, I'm committed to
sharing practical AWS security knowledge. The NixOS experiment is currently
on hold, but I've started thinking about reviving my newsletter.
Who knows; one thing is certain: the format will be different, as link
aggregation is just boring!