bin/hadoop fs -put /path/to/source s3://<s3id>:<s3secret>@<bucket>/path/to/destination
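Embedding the access key and secret in the URI works, but it also leaks credentials into shell history and job logs. If I remember right, old-style Hadoop lets you put the keys in hadoop-site.xml instead (the property names below are the standard `fs.s3.*` ones; treat them as an assumption and check your Hadoop version's docs):

```xml
<!-- hadoop-site.xml: S3 credentials, so URIs can be just s3://<bucket>/path -->
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>YOUR_SECRET_ACCESS_KEY</value>
</property>
```

With those set, the put becomes `bin/hadoop fs -put /path/to/source s3://<bucket>/path/to/destination`.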
This is so cool. I’m guessing that I could also use S3 as my input or output directory for Map/Reduce jobs.
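That guess should hold: the S3 filesystem is just another Hadoop filesystem, so S3 URIs can stand in for HDFS paths on a job's input and output. A sketch using the bundled wordcount example (the examples jar name varies by release, and the `<s3id>`/`<s3secret>`/`<bucket>` placeholders are the same ones as above):

```shell
# Run wordcount reading from and writing to S3 instead of HDFS.
# The output directory must not already exist, same as with HDFS.
bin/hadoop jar hadoop-*-examples.jar wordcount \
    s3://<s3id>:<s3secret>@<bucket>/input \
    s3://<s3id>:<s3secret>@<bucket>/output
```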
The first step is to get fuse installed. It’s not as simple as “yum install fuse” – the userspace fuse package doesn’t ship with RHEL5/CentOS5, so you have to build it from source (or pull it from a third-party repository). Ok, now we have fuse :) Next, get hdfs-fuse from http://code.google.com/p/hdfs-fuse/downloads/list
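Once the tarball is down, the mount is roughly this shape. The kernel module itself ships with the RHEL5 kernel; only the userspace side is missing. The script name and namenode address below are assumptions – check the README inside the hdfs-fuse download for the exact invocation:

```shell
# Load the fuse kernel module (the part RHEL5 *does* ship)
sudo modprobe fuse

# Unpack hdfs-fuse and mount HDFS at /mnt/hdfs
# (script name and namenode:port are hypothetical -- see the project README)
tar xzf hdfs-fuse-*.tar.gz
cd hdfs-fuse
sudo mkdir -p /mnt/hdfs
./bin/hdfs-mount namenode:9000 /mnt/hdfs
```

After that, HDFS should show up as a regular directory tree under /mnt/hdfs.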