81

I've recently inherited a Rails app that uses S3 for asset storage. I have transferred all assets to my S3 bucket with no issues. However, when I alter the app to point to the new bucket, I get a 403 Forbidden status.

My S3 bucket is set up with the following settings:

Permissions

Everyone can list

Bucket Policy

{
 "Version": "2012-10-17",
 "Statement": [
    {
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::bucketname/*"
    }
 ]
}

CORS Configuration

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>https://www.appdomain.com</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Static Web Hosting

Enabled.

What else can I do to allow the public to reach these assets?

1
  • In my scenario, the error is caused by Public Access being disabled on the S3 bucket, since it's linked to CloudFront. No solution found so far. I may need to set up a pre-signed URL mechanism, but there is little content on that too; the docs read like a college student's exam answer, padded with unrelated content to score marks for length. Commented Sep 12, 2022 at 7:37

18 Answers

54

I know this is an old thread, but I just encountered the same problem. I had everything working for months and it suddenly stopped, giving me a 403 Forbidden error. It turns out the system clock was the real culprit. I think S3 uses some sort of time-based token that has a very short lifespan. In my case I just ran:

ntpdate pool.ntp.org

And the problem went away. I'm running CentOS 6 if it's of any relevance. This was the sample output:

19 Aug 20:57:15 ntpdate[63275]: step time server ip_address offset 438.080758 sec

Hope it helps!
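
If ntpdate isn't available (it is deprecated on newer distributions), here is a rough sketch of checking and correcting the offset on a systemd-based system; adapt it to your distro:

# Query the offset without touching the clock
ntpdate -q pool.ntp.org

# On systemd-based systems, enable NTP synchronization instead of a one-off step
timedatectl set-ntp true
timedatectl status   # look for "System clock synchronized: yes"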


8 Comments

Thanks for posting it. I had the same problem just now with Windows. Correcting the time of the system solved it.
God bless you, good man. And please never hesitate to answer old threads.
This was my issue. Somehow the clock in my docker container had drifted, which isn't an obvious thing to diagnose!
I was using VM snapshots and tearing my hair out. I want to upvote this answer ten times.
Correcting the time did the trick. Even with a 0.1 second deviation, S3 returns a 403 Forbidden error.
48

It could also be that a proper policy needs to be set according to the AWS docs.

Give the bucket in question this policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
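
If you prefer the CLI, a quick sketch for applying that policy, assuming it is saved locally as policy.json and the bucket's Block Public Access settings don't block public policies:

# Attach the public-read policy to the bucket
aws s3api put-bucket-policy --bucket YOUR-BUCKET-NAME --policy file://policy.json

# Verify it was applied
aws s3api get-bucket-policy --bucket YOUR-BUCKET-NAME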

3 Comments

I had to add a statement with "/*" removed from the Resource.
If I remove the /* from the end of the resource string, the policy editor comes back with Action does not apply to any resources
In my case, I forgot to add BUCKET_NAME/* and only gave myself access to the root of the bucket. Thanks!
30

The transfer was done according to this thread, which by itself is not a problem. The issue came from the previous developer not changing permissions on the files before transferring, which meant I could not manage any of the files even though they were in my bucket.

Issue was solved by re-downloading the files cleanly from the previous bucket, deleting the old phantom files, re-uploading the fresh files and setting their permissions to allow public reading of the files.
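
For reference, one way to do that kind of clean re-upload with the AWS CLI, setting the ACL at upload time (a sketch with placeholder bucket names; it requires ACLs to be enabled on the destination bucket):

# Pull everything down from the old bucket, then re-upload with a public-read ACL
aws s3 sync s3://old-bucket ./assets
aws s3 sync ./assets s3://new-bucket --acl public-read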

1 Comment

As this is the accepted answer, I'd just like to add that aws s3 sync will not transfer the ACL set on each object. It's possible the objects were made public individually via their ACLs. The project github.com/cobbzilla/s3s3mirror offers a -C option, which I didn't manage to make work. As a last resort, you can set a bucket policy for each folder inside your bucket allowing Principal: * to GetObject.
15

I had the same problem; adding /* at the end of the bucket policy Resource solved it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}

3 Comments

Why is this downvoted? Adding /* to the end of my resource fixed the issue.
I have no clue why people downvoted; it's the actual answer, though.
@user3470929 I didn't downvote but it's getting downvotes because the original question already used /* - or at least they've edited to reflect that in the question.
6

Here's the bucket policy I used to make the index.html file inside my S3 bucket accessible from the internet (screenshot of the policy not reproduced here).

I also needed to go to Permissions -> "Block Public Access" and remove the block public access rules for the bucket.

Also make sure the access permissions for the individual objects inside each bucket are open to the public.
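
The same changes can be made from the CLI; this is a sketch with a placeholder bucket name, so adjust the flags to the level of public access you actually want:

# Turn off the four Block Public Access settings for the bucket
aws s3api put-public-access-block --bucket YOUR-BUCKET-NAME \
  --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# Make a single object public (the CLI equivalent of the "Make public" button; requires ACLs enabled)
aws s3api put-object-acl --bucket YOUR-BUCKET-NAME --key index.html --acl public-read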

2 Comments

The Make Public button is what I was missing!
EDIT - my mistake was I forgot to use the correct aws profile :(
3

Another "solution" here: I was using Buddy to automate uploading a github repo to an s3 bucket, which requires programmatic write access to the bucket. The access policy for the IAM user first looked like the following: (Only allowing those 6 actions to be performed in the target bucket).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}

My bucket access policy was the following: (allowing read/write access for the IAM user).

{
  "Version": "2012-10-17",
  "Id": "1234",
  "Statement": [
    {
      "Sid": "5678",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<IAM_user_arn>"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}

However, this kept giving me the 403 error.

My workaround solution was to give the IAM user access to all s3 resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "*"
        }
    ]
}

This got me around the 403 error, although it is clearly broader access than it should be.
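
A narrower alternative, sketched here with a hypothetical user and bucket name rather than the exact setup above, is to split the resources: s3:ListAllMyBuckets needs "*", s3:ListBucket applies to the bucket ARN itself, and the object-level actions take the /* resource:

cat > scoped-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*" },
    { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::my-bucket" },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
EOF

# Attach it as an inline policy to the deploy user
aws iam put-user-policy --user-name deploy-user --policy-name s3-deploy --policy-document file://scoped-policy.json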

Comments

2

Since the end of June 2023, TLS 1.2 or later is enforced on S3: https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/

If your application connects to S3 over HTTPS, make sure it is configured to use TLS 1.2.
Applications that use older TLS versions will get the 403 error (this can happen with .NET 4.5 and lower, for example).

For .NET applications, an easy solution is to set the application to target at least .NET 4.6.2 in the app.config or web.config.
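
One way to check what the S3 endpoint will accept, assuming openssl is installed (this only tests the endpoint, not what your application's runtime actually negotiates):

# Expected to fail the handshake now that TLS 1.2+ is required
openssl s_client -connect s3.amazonaws.com:443 -tls1_1 </dev/null

# Expected to succeed
openssl s_client -connect s3.amazonaws.com:443 -tls1_2 </dev/null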

1 Comment

I was pulling my hair out, but your solution worked perfectly for me. I was using .NET 4.7.2 and still getting the same issue, so I had to add this line of code before accessing the S3 file URL: ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12; PdfReader reader = new PdfReader(s3FilePath);
1

One weird thing that fixed this for me, after already setting up the correct permissions, was removing the extension from the filename. I had many items in the bucket, all with the same permissions; some worked fine and some returned 403. The only difference was that the ones that didn't work had .png at the end of the filename. When I removed that, they worked fine. No idea why.

Comments

0

For me, none of the other answers worked. File permissions, bucket policies, and clock were all fine. In my case the issue was intermittent, and while it may sound trite, the following have both worked for me previously:

  1. Log out and log back in.
  2. If you are trying to upload a single file, try a bulk upload. Conversely, if you are trying to do a bulk upload, try a single file.

Comments

0

Just found the same issue on my side in my iPhone app. It was working completely fine on Android with the same configuration and S3 setup, but the iPhone app was throwing an error. I reached out to the Amazon support team about this issue; after checking the logs on their end, they told me my iPhone had the wrong date and time. I went to my iPhone's settings, set the correct date and time, tried to upload a new image, and it worked as expected.

If you are having the same issue and have the wrong date or time on your iPhone or simulator, this may help you.

Thanks!

Comments

0

For me it was the Public access setting under the Access Control tab.

Just ensure the read and write permissions under public access are set to Yes; by default they show "-", which means No.

Happy coding.

FYI: I'm using Flutter for my Android development.


Comments

0

Make sure you use the correct AWS profile! (dev / prod, etc.)
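
A quick way to confirm which credentials and account the CLI or SDK is actually picking up:

# Show the profile, access key and region currently in effect
aws configure list

# Show the account ID and ARN behind the active (or a named) profile
aws sts get-caller-identity
aws sts get-caller-identity --profile prod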

Comments

0

I hit this error when trying to PUT a file to S3 from JavaScript using a URL presigned in Python. Turns out my Python needed the ContentType attribute.

Once I added that, the following worked:

import boto3
import requests

access_key_id = 'AKIA....'
secret_access_key = 'LfNHsQ....'
bucket = 'images-dev'
filename = 'pretty.png'

s3_client = boto3.client(
  's3',
  aws_access_key_id=access_key_id,
  aws_secret_access_key=secret_access_key
)

# sign url
response = s3_client.generate_presigned_url(
  ClientMethod = 'put_object',
  Params = {
    'Bucket': bucket,
    'Key': filename,
    'ContentType': 'image/png',
  }
)

print(" * GOT URL", response)

# NB: to run the PUT command in Python, one must remove the ContentType attr above!
# r = requests.put(response, data=open(filename, 'rb'))
# print(r.status_code)

Then one can PUT that image to S3 using that url from the client:

var xhr = new XMLHttpRequest();
xhr.open('PUT', url);
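// Note: the browser sends the file's MIME type as the Content-Type header; it must match
// the ContentType signed above ('image/png'), or S3 rejects the PUT with a 403.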
xhr.onreadystatechange = () => {
  if (xhr.readyState === 4) {
    if (xhr.status !== 200) {
      console.log('Could not upload file.');
    }
  }
};

xhr.send(file);

Comments

0

In my case, I was generating a signed url for upload and was receiving a 403 error.

The API to generate the signed URL was running on an ECS cluster that had a task role assigned. The task role did not have access to PutObjectAcl for public read of the file, and hence the requests were receiving a 403 error.

Updating the task role for the cluster fixed the issue.

TLDR: For public read, check if credentials/Role/policy have PutObjectAcl permissions.

Comments

0

I'm not sure if this will help anyone, but we started getting "Access Forbidden" last week on code that had been working for months. I upgraded aws-sdk to v3 and had to create some new functions, and it started to work again.

Comments

0

If nothing here helps you, just synchronize the date and time on Windows. I had the same error because the time is always wrong on my Windows PC until I press the Time > Synchronize now button.

Comments

0

Please also note: if your bucket has Server-side Encryption enabled with KMS, you will need to grant your role access to the relevant actions on that KMS key. This should work:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::${bucket_id}/*",
                "arn:aws:s3:::${bucket_id}"
            ]
        },
        {
            "Sid": "AllowKMS",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:kms:${region}:${account_id}:key/${key_id}"
        }
    ]
}

Comments

0

This error can be tricky sometimes, as the error message, even when using --debug, does not always tell you the root cause. In my case, it was because I had set the 'Requester Pays' option on the S3 bucket; it took hours to figure that out. Here is a really good reference for S3 cross-account access denied errors: https://repost.aws/knowledge-center/s3-cross-account-access-denied
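
To rule that particular cause in or out, you can check the bucket's Requester Pays setting and retry while acknowledging the charges (a sketch with a placeholder bucket and key):

# Returns "Payer": "Requester" if Requester Pays is enabled
aws s3api get-bucket-request-payment --bucket my-bucket

# Cross-account requests must explicitly accept the charges
aws s3api get-object --bucket my-bucket --key some/key --request-payer requester out.file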

The reasons you can get an access denied error are:

  • The user's IAM policy doesn't grant access to the bucket.
  • The object is encrypted by AWS Key Management Service (AWS KMS), and the user doesn't have access to the AWS KMS key.
  • A deny statement in the bucket policy or IAM policy is blocking the user's access.
  • The Amazon Virtual Private Cloud (Amazon VPC) endpoint policy is blocking access to the bucket.
  • The AWS Organizations service control policy is blocking access to the bucket.
  • The object doesn't belong to the AWS account that owns the bucket.
  • You turned on Requester Pays for the bucket.
  • You passed a session policy that's blocking access to the bucket.

Comments
