
Wednesday, July 1, 2015

Elastic beanstalk docker - map symfony logs to S3

In config.yml

monolog:
    handlers:
        main:
            type:         fingers_crossed
            action_level: error
            buffer_size:  200
            handler:      nested
        nested:
            type:  stream
            path:  "%log_dir%/moonlight_%kernel.environment%.log"
            level: debug

Set log_dir in parameters.yml to /var/log/nginx, or anywhere you want.
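
For example, a minimal parameters.yml sketch (the path itself is whatever you chose):

parameters:
    log_dir: /var/log/nginx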

Create a file called Dockerrun.aws.json

{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Logging": "/var/log/nginx"
}

The Logging entry above needs to match the log_dir you set in parameters.yml.

In the Elastic Beanstalk console, click Configuration on the left side, then Software Configuration.

Check "Enable log file rotation to Amazon S3" so that service logs are published to S3.

If you are using a custom IAM instance profile, you will need to grant it read and write permissions on S3:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1435793320000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:ListBucket",
                "s3:ListBucketVersions"
            ],
            "Resource": [
                "arn:aws:s3:::elasticbeanstalk-*/resources/environments/logs/*"
            ]
        }
    ]
}

Log rotation happens roughly every 15 minutes. You can find the logs in S3 under elasticbeanstalk-*/resources/environments/logs/*.
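
For example, with the AWS CLI (the bucket name below is a made-up placeholder; yours will include your region and account ID):

> aws s3 ls s3://elasticbeanstalk-us-east-1-123456789012/resources/environments/logs/ --recursive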


Saturday, June 13, 2015

Create your own Docker Registry with S3

The purpose of this post is to show how to deploy your own custom image to Elastic Beanstalk through your own Docker registry, with the images stored on Amazon S3.

Let's begin by cloning Docker Registry 2.0.

git clone https://github.com/docker/distribution.git

Generate a self-signed certificate.

cd distribution
mkdir certs
openssl req \
         -newkey rsa:2048 -nodes -keyout certs/domain.key \
         -x509 -days 365 -out certs/domain.crt

Add TLS to config.yml


vi ./cmd/registry/config.yml

Add the tls block to the http section like the following:

http:
    addr: :5000
    secret: asecretforlocaldevelopment
    debug:
        addr: localhost:5001
    tls: 
        certificate: /go/src/github.com/docker/distribution/certs/domain.crt
        key: /go/src/github.com/docker/distribution/certs/domain.key

Remove the filesystem settings and use AWS S3 as the repository storage:

storage:
   #filesystem:
   #        rootdirectory: /tmp/registry
   s3:
      accesskey: awsaccesskey
      secretkey: awssecretkey
      region: us-west-1
      bucket: bucketname
      encrypt: true
      secure: true
      v4auth: true
      chunksize: 5242880
      rootdirectory: /s3/object/name/prefix

Settings: http://docs.docker.com/registry/configuration/#storage

Save this.

Build the image with a name (e.g. docker_registry):

> docker build -t docker_registry .

Tag it. Note that I am using boot2docker on Mac OS X; you can get the VM's IP address by running "boot2docker ip".

> docker tag docker_registry:latest 192.168.59.103:5000/docker_registry:latest

Run the registry.

> docker run -p 5000:5000 docker_registry

If you try to push an image at this point, you will get an error telling you to add the registry as an insecure registry.

> boot2docker ssh "echo $'EXTRA_ARGS=\"--insecure-registry 192.168.59.103:5000\"' | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"

Push an image:

> docker push 192.168.59.103:5000/{image}
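
To check that the push actually landed, you can query the registry over HTTPS. The -k flag tells curl to accept the self-signed certificate; the tag listing below assumes the image name used above:

> curl -k https://192.168.59.103:5000/v2/
> curl -k https://192.168.59.103:5000/v2/docker_registry/tags/list

The first call should return HTTP 200 (the v2 API version check); the second should list the tags you pushed.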

Wednesday, May 14, 2014

AWS s3 - The specified bucket is not valid.

If you receive the message "The specified bucket is not valid." while trying to enable website hosting, make sure your bucket name adheres to the following rules (a quick code sketch follows the list):

- Should not contain uppercase characters
- Should not contain underscores (_)
- Should be between 3 and 63 characters long
- Should not end with a dash
- Cannot contain two, adjacent periods
- Cannot contain dashes next to periods (e.g., "my-.bucket.com" and "my.-bucket" are invalid)
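
If you want to sanity-check a name in code before creating the bucket, here is a rough Java sketch of the rules above (my own helper, not AWS's official validation):

// Returns true if the name passes the website-hosting bucket rules listed above.
public static boolean isValidBucketName(String name) {
    if (name == null || name.length() < 3 || name.length() > 63) return false; // 3-63 chars
    if (!name.matches("[a-z0-9.-]+")) return false; // lowercase letters, digits, dots, dashes only
    if (name.endsWith("-")) return false;           // must not end with a dash
    if (name.contains("..")) return false;          // no two adjacent periods
    if (name.contains(".-") || name.contains("-.")) return false; // no dash next to a period
    return true;
}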

Thursday, February 21, 2013

Using Java AWS SDK to upload files to Amazon S3

Amazon S3 is a highly available and durable storage service, suitable for storing large files that do not change frequently. This post focuses on how to upload files programmatically via the AWS SDK for Java. For an introduction to S3, read What is Amazon Simple Storage Service (Amazon S3)?

My specs:
  • Eclipse Juno
  • SpringMVC 3.1.x
  • Maven 3.0.x

Install AWS Toolkit

In Eclipse, click Help in the menu bar and then "Install New Software".

In the "Work with:" input box, put "http://aws.amazon.com/eclipse" and click Add...

Check the AWS Toolkit for Eclipse and click Yes to install all the tools.

In the Eclipse toolbar, you will see a red cube icon. Click on the down arrow next to this icon and click Preferences.

Fill in your Access Key ID and Secret Access Key. Give it an Account Name (Ex. use your email). You can find your keys in the Amazon Management Console (My Account/Console -> Security Credentials). Click on Apply and OK.

In the Eclipse menu bar, click on Window -> Preferences. Expand the AWS Toolkit. Right click on your key. Click Select Private Key File. Associate it with your private key. Click OK.

Click on the down arrow next to the Amazon cube icon. Select Show AWS Explorer View. You should be able to see the Amazon S3 service and all your related buckets (if you have any).


Download and Install the AWS SDK for Java

You can download it here. Click on the AWS SDK for Java button.

Extract the file. Code Samples are located in /samples.

If you are using Maven, you can add the AWS SDK as a dependency in the pom.xml file.


<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.3.32</version>
</dependency>


Choose the version you want here.

Alternatively, you can just add it as a library (Right Click on the project -> Java Build Path -> Libraries -> Add External JARs).


Running the default AWS Sample Apps

We will begin by setting up a sample project that you can check out how S3 works.

Click on the down arrow next to the Amazon icon.

Select New AWS Java Project.

Give a Project name.

Select your account.

Select Amazon S3 Sample, Amazon S3 Transfer Progress Sample, and AWS Console Application. Click Next.

Expand the newly created project. Left click on the AwsConsoleApp.java. In the Eclipse menu bar, click on Run -> Run.

You should see output like the following:


===========================================
Welcome to the AWS Java SDK!
===========================================
You have access to 3 Availability Zones.
You have 14 Amazon EC2 instance(s) running.
You have 0 Amazon SimpleDB domain(s) containing a total of 0 items.
You have 8 Amazon S3 bucket(s), containing 71841 objects with a total size of 224551364 bytes.



If you run the S3Sample.java, you will get the following:


===========================================
Getting Started with Amazon S3
===========================================

Creating bucket my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1

Listing buckets
 - my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1

Uploading a new object to S3 from a file

Downloading an object
Content-Type: text/plain
    abcdefghijklmnopqrstuvwxyz
    01234567890112345678901234
    !@#$%^&*()-=[]{};':',.<>/?
    01234567890112345678901234
    abcdefghijklmnopqrstuvwxyz

Listing objects
 - MyObjectKey  (size = 135)

Deleting an object

Deleting bucket my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1


Integrate the S3 SDK

To begin, you need the file AwsCredentials.properties at the root of your classpath. You can copy the one generated during the sample project into your project's classpath, or create one with the following content:

secretKey=
accessKey=


Create an authenticated S3 object:

AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());

Objects in S3 are stored in buckets. Each bucket name is globally unique; you cannot create a bucket with a name that another user has already taken. Each bucket holds key/value pairs that you can define in any way you want.


Create a bucket:

String bucketName = "my-s3-bucket-" + UUID.randomUUID();
s3.createBucket(bucketName);

For readability, I have skipped the exception handling; I will come back to it at the end. The name of the bucket must conform to all the DNS rules. I usually name them using my domain name.


Delete a bucket:

s3.deleteBucket(bucketName);


List all buckets:

for (Bucket bucket : s3.listBuckets()) {
    System.out.println(" - " + bucket.getName());
}


Save an object in a bucket:

String key = "myObjectKey";

PutObjectRequest putObject = new PutObjectRequest(bucketName, key, myFile);
s3.putObject(putObject);

Above, myFile is a java.io.File.
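
If your content comes from a stream instead of a file (say, an uploaded form field), you can pass an InputStream with ObjectMetadata. This is a sketch; myInputStream and contentLength are placeholders for your own data:

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(contentLength); // lets the SDK skip buffering the whole stream
metadata.setContentType("text/plain");
s3.putObject(new PutObjectRequest(bucketName, key, myInputStream, metadata));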


Delete an object:

s3.deleteObject(bucketName, key);


Get/Download an object:

String key = "myObjectKey";
GetObjectRequest getObject = new GetObjectRequest(bucketName, key);
S3Object object = s3.getObject(getObject);

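The S3Object above wraps an input stream. A minimal way to read it (using java.io.BufferedReader and InputStreamReader):

BufferedReader reader = new BufferedReader(new InputStreamReader(object.getObjectContent()));
String line;
while ((line = reader.readLine()) != null) {
    System.out.println("    " + line);
}
reader.close(); // always close the stream so the HTTP connection is released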

List objects by prefix:

ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
                    .withBucketName(bucketName)
                    .withPrefix("My"));
for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
    System.out.println(" - " + objectSummary.getKey() + "  " +
                                   "(size = " + objectSummary.getSize() + ")");
}


Uploading large files

Use TransferManager whenever possible. It makes use of S3 multipart uploads to achieve enhanced throughput, performance, and reliability. It uses multiple threads to upload multiple parts of a single upload at once.

AWSCredentials myCredentials = new BasicAWSCredentials(...);
TransferManager tx = new TransferManager(myCredentials);
Upload myUpload = tx.upload(myBucket, myFile.getName(), myFile);

while (!myUpload.isDone()) {
    System.out.println("Transfer: " + myUpload.getDescription());
    System.out.println("  - State: " + myUpload.getState());
    System.out.println("  - Progress: " + myUpload.getProgress().getBytesTransfered());
    // Do work while we wait for our upload to complete...
    Thread.sleep(500);
}
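
If you do not need the progress loop, you can simply block until the transfer finishes; waitForCompletion() rethrows any failure from the upload:

myUpload.waitForCompletion(); // blocks until done; throws on failure or interruption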


Exceptions

Whenever you call any of the AWS APIs, you should surround the calls with try and catch clauses like the following:

try {
    // AWS requests here

} catch (AmazonServiceException ase) {
    System.out.println("Caught an AmazonServiceException, which means your request made it "
            + "to Amazon S3, but was rejected with an error response for some reason.");
    System.out.println("Error Message:    " + ase.getMessage());
    System.out.println("HTTP Status Code: " + ase.getStatusCode());
    System.out.println("AWS Error Code:   " + ase.getErrorCode());
    System.out.println("Error Type:       " + ase.getErrorType());
    System.out.println("Request ID:       " + ase.getRequestId());
} catch (AmazonClientException ace) {
    System.out.println("Caught an AmazonClientException, which means the client encountered "
            + "a serious internal problem while trying to communicate with S3, "
            + "such as not being able to access the network.");
    System.out.println("Error Message: " + ace.getMessage());
}


If you are interested in securing your S3 content for your authenticated users only, check out AWS Java - Securing S3 content using query string authentication.