How to do it...
You can use client-node1 to configure the Hadoop S3A client.
- Install the Java packages on client-node1:
# yum install java* -y

- Download the Hadoop tar file from the Apache archive:
# wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz

- Extract the Hadoop tar file:
# tar -xvf hadoop-2.7.3.tar.gz

- Add the following lines to the .bashrc file:
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/root/hadoop-2.7.3/bin

- Update the /root/hadoop-2.7.3/etc/hadoop/core-site.xml file with the S3A details: the RGW node IP and port as the endpoint, and the access key and secret key of the RGW user pratima.
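The exact values depend on your environment; a minimal core-site.xml sketch might look like the following, where the endpoint address and the two key values are placeholders you must replace with your RGW node's IP, its port, and the credentials of the user pratima:

```xml
<configuration>
  <property>
    <name>fs.s3a.endpoint</name>
    <!-- Placeholder: replace with your RGW node IP and port -->
    <value>192.168.1.106:8080</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <!-- Placeholder: access key of the RGW user pratima -->
    <value>ACCESS_KEY_OF_PRATIMA</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <!-- Placeholder: secret key of the RGW user pratima -->
    <value>SECRET_KEY_OF_PRATIMA</value>
  </property>
  <property>
    <!-- Plain HTTP is assumed here; set to true if your RGW uses SSL -->
    <name>fs.s3a.connection.ssl.enabled</name>
    <value>false</value>
  </property>
</configuration>
```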

- You can now upload a file to your RGW first-bucket using the hadoop distcp command:
# hadoop distcp /root/anaconda-ks.cfg s3a://first-bucket/
You will see the initial map task logs in the command-line output.

Once the upload finishes, you will see the job completion logs.

- Now you can verify that the anaconda-ks.cfg file was uploaded to first-bucket.
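One way to check, assuming the hadoop binary is on the PATH and core-site.xml already carries the RGW endpoint and credentials, is to list the bucket through the same S3A connector:

```
# List the contents of first-bucket over S3A; anaconda-ks.cfg should appear
hadoop fs -ls s3a://first-bucket/
```

If you have an S3 client such as s3cmd configured against the same RGW user, listing the bucket with it should show the same object.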