13

I'm trying to write a CSV file to an S3 bucket using AWS Lambda, and for this I used the following code:

data=[[1,2,3],[23,56,98]]
with open("s3://my_bucket/my_file.csv", "w") as f:
   f.write(data)

And this raises the following error:

[Errno 2] No such file or directory: u's3://my_bucket/my_file.csv': IOError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 51, in lambda_handler
with open("s3://my_bucket/my_file.csv", "w") as f:
IOError: [Errno 2] No such file or directory: u's3://my_bucket/my_file.csv'

Can I have some help with this, please?

PS: I'm using Python 2.7.

Thanks in advance.

3
  • 2
    Lambda doesn't have a native file-system driver for s3:// URIs like that. Write the CSV file to the local file system (/tmp) and then use boto3's put_object() method. You can also stream the file contents into S3 using boto3, if preferred (a sketch of the /tmp approach follows these comments). Commented Apr 6, 2018 at 14:33
  • @jarmod Can you please give me an example? Many thanks. Commented Apr 6, 2018 at 14:43
  • 1
    See stackoverflow.com/questions/40336918/… Commented Apr 6, 2018 at 14:47
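
To illustrate the first comment, here is a minimal sketch (not from the question itself) that writes the asker's data to /tmp and then calls boto3's put_object(). It assumes a Python 3 Lambda runtime; on Python 2.7 you would open the file with 'wb' and drop newline=''. The bucket name, key, and data are taken from the question.

import csv
import boto3

def lambda_handler(event, context):
    data = [[1, 2, 3], [23, 56, 98]]

    # /tmp is the only writable path inside a Lambda container
    with open("/tmp/my_file.csv", "w", newline="") as f:
        csv.writer(f).writerows(data)

    # upload the finished local file to the bucket with put_object()
    s3 = boto3.client("s3")
    with open("/tmp/my_file.csv", "rb") as f:
        s3.put_object(Bucket="my_bucket", Key="my_file.csv", Body=f)

    return "uploaded my_file.csv to my_bucket"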

3 Answers

14

Better late than never. There are four steps to get your data into S3:

  • Get a reference to the S3 bucket
  • Load the data into Lambda using the requests library (if you don't have it installed, you'll have to add it as a Lambda layer)
  • Write the data to a file in Lambda's '/tmp' directory
  • Upload the file to S3

Something like this:

import csv
import boto3
import requests
# all other appropriate libs should already be loaded in Lambda

# properly reference your S3 bucket
s3 = boto3.resource('s3')
bucket = s3.Bucket('your-bucket-name')
key = 'yourfilename.txt'

# you would need to grab the file from somewhere. Use this incomplete line below to get started:
with requests.Session() as s:
    getfile = s.get('yourfilelocation')

# parse the response into rows before writing (adjust this to your data format)
filelist = [line.split(',') for line in getfile.text.splitlines()]

# only then can you write the data into the '/tmp' folder
# (newline='' is Python 3 syntax; on Python 2.7 open the file with 'wb' instead)
with open('/tmp/yourfilename.txt', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerows(filelist)
# upload the data to S3
bucket.upload_file('/tmp/yourfilename.txt', key)

Hope it helps.


2 Comments

For the case of concurrent invocations of the Lambda, won't it create collisions by using the same '/tmp/yourfilename.txt'?
It certainly will, but one option would be to limit this Lambda to only one execution at a time.
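
Another way around that, not from the original answer, is to give every invocation its own scratch file, for example with a uuid-based name. A minimal sketch, reusing the same placeholder bucket and key as the answer above (Python 3 syntax):

import csv
import uuid
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('your-bucket-name')
key = 'yourfilename.txt'

# give every invocation its own scratch file so concurrent runs don't clash
tmp_path = '/tmp/{}.csv'.format(uuid.uuid4())

with open(tmp_path, 'w', newline='') as f:
    w = csv.writer(f)
    w.writerows([[1, 2, 3], [23, 56, 98]])  # example rows

bucket.upload_file(tmp_path, key)
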
-3

I haven't used AWS Lambda, but I have been using Boto3 to do the same thing. It only takes a few lines of code.

# Your file path will be something like this:
# s3://<your_s3_bucket_name>/<Directory_name>/<File_name>.csv

import boto3

BUCKET_NAME = '<your_s3_bucket_name>'
PREFIX = '<Directory_name>/'

# build the CSV content as a string first (example rows shown here)
data = [[1, 2, 3], [23, 56, 98]]
content = '\n'.join(','.join(str(v) for v in row) for row in data)

s3 = boto3.resource('s3')
obj = s3.Object(BUCKET_NAME, PREFIX + '<File_name>.csv')
obj.put(Body=content)

Comments

-5
with open("s3://my_bucket/my_file.csv", "w+") as f:

instead of

with open("s3://my_bucket/my_file.csv", "w") as f:

notice the "w" has changed to "w+" this means that it will write to the file, and if it does not exist it will create it.

3 Comments

Okay, so does the s3://my_bucket/ directory actually exist?
Yes, but the file does not exist.
The s3://my_bucket/ "directory" doesn't exist locally... It's on S3. And it's not even a directory, it's an S3 bucket. It can't be accessed like a local file, you have to use boto3.
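
For completeness, a minimal sketch of what that last comment implies, not taken from any of the answers: build the CSV entirely in memory and upload it with boto3 so no local path is involved at all. It assumes a Python 3 runtime and reuses the bucket and key names from the question.

import csv
import io
import boto3

data = [[1, 2, 3], [23, 56, 98]]

# build the CSV in memory instead of trying to open an s3:// path like a file
buf = io.StringIO()
csv.writer(buf).writerows(data)

# hand the finished text straight to S3
s3 = boto3.client('s3')
s3.put_object(Bucket='my_bucket', Key='my_file.csv', Body=buf.getvalue())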
