Tuesday, February 26, 2013

Git: remove a file from being tracked but keep it in your working directory

Sometimes you don't want to commit files like .project, .classpath, or .DS_Store. You may have already added all of these to .gitignore, but because the files were previously committed, Git keeps tracking them.

To stop tracking such a file while keeping it on disk, run:

git rm --cached {filename}

This removes the file from the index only; it remains in your working directory, and the change takes effect with your next commit.

Monday, February 25, 2013

AWS Java - Securing S3 content using query string authentication

Amazon S3 is a highly available and durable storage service that can serve websites, images, and large files. Sometimes you may want to secure your content so that only you or your authenticated users can access it. This becomes even more important when the content is paid.

This post is about using query string authentication to make content available only for a specified period of time.

Specs:
  • Java 1.7
  • Eclipse Juno

Before you begin, make sure you have all the AWS Eclipse tools ready. Read Using Java AWS SDK to upload files to Amazon S3 for how to install the AWS SDK tools and for a basic guide to uploading, deleting, and retrieving files on S3.

Signing the request will require the following structure:

Authorization = "AWS" + " " + AWSAccessKeyId + ":" + Signature;

Signature = Base64( HMAC-SHA1( YourSecretAccessKeyID, UTF-8-Encoding-Of( StringToSign ) ) );

StringToSign = HTTP-Verb + "\n" +
 Content-MD5 + "\n" +
 Content-Type + "\n" +
 Date + "\n" +
 CanonicalizedAmzHeaders +
 CanonicalizedResource;

CanonicalizedResource = [ "/" + Bucket ] +
 <HTTP-Request-URI, from the protocol name up to the query string> +
 [ sub-resource, if present. For example "?acl", "?location", "?logging", or "?torrent" ];

CanonicalizedAmzHeaders = the x-amz-* request headers, lowercased and sorted alphabetically, each serialized as "header:value\n" (empty if there are none).

For query string authentication, the Date field in StringToSign is replaced by the Expires value (the epoch time at which the signed URL stops working), and the signature is passed as a query parameter instead of the Authorization header.
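
To make the Signature line concrete, here is a minimal Java sketch of the HMAC-SHA1 step. The method name and the use of DatatypeConverter for Base64 encoding are illustrative choices, not from the original post.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import javax.xml.bind.DatatypeConverter;

// Signature = Base64( HMAC-SHA1( secretKey, UTF-8-Encoding-Of( stringToSign ) ) )
public static String sign(String secretKey, String stringToSign) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
    byte[] signatureBytes = mac.doFinal(stringToSign.getBytes("UTF-8"));
    return DatatypeConverter.printBase64Binary(signatureBytes);
}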


Assuming you have read the post above or implemented uploading yourself, upload a file to your Amazon S3 account.

In the AWS Management Console, restrict the file's ACL permissions to your administrative account only (by default it already is, unless you changed the ACL programmatically).

We will implement a function called getS3Url() that generates a pre-signed, time-limited URL for the object.
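
A minimal sketch of such a function, using the SDK's generatePresignedUrl API; the class name and parameters are illustrative:

import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class S3UrlSigner {

    private final AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());

    // Returns a signed URL that grants temporary read access to a private object.
    public URL getS3Url(String bucketName, String key) {
        // Expire the URL one hour from now.
        Date expiration = new Date(System.currentTimeMillis() + 60 * 60 * 1000L);

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest(bucketName, key, HttpMethod.GET);
        request.setExpiration(expiration);

        return s3.generatePresignedUrl(request);
    }
}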




The expiration is set to one hour from now. Once that hour has passed, requests using the signed URL return the following error:


<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>8ECB67C2458CE483</RequestId>
  <HostId>vL6wXNOkvYlpHXbvvlG1SGhy3q/+Ocb3guXtyaDZjmEu24Z4XQpwjfmNAvM+SViz</HostId>
</Error>


Thursday, February 21, 2013

Using Java AWS SDK to upload files to Amazon S3

Amazon S3 is a highly available and durable storage service, well suited for storing large files that do not change frequently. This post will focus on how to upload files programmatically via the AWS SDK for Java. For an introduction to S3, read What is Amazon Simple Storage Service (Amazon S3)?

My specs:
  • Eclipse Juno
  • SpringMVC 3.1.x
  • Maven 3.0.x

Install AWS Toolkit

In Eclipse, click Help in the menu bar and then "Install New Software...".

In the "Work with:" input box, enter "http://aws.amazon.com/eclipse" and click Add...

Check AWS Toolkit for Eclipse and proceed through the wizard to install all the tools.

In the Eclipse toolbar, you will see a red cube icon. Click the down arrow next to this icon and click Preferences.

Fill in your Access Key ID and Secret Access Key, and give the account a name (e.g. your email). You can find your keys in the AWS Management Console (My Account/Console -> Security Credentials). Click Apply and then OK.

In the Eclipse menu bar, click on Window -> Preferences. Expand the AWS Toolkit. Right click on your key. Click Select Private Key File. Associate it with your private key. Click OK.

Click on the down arrow next to the Amazon cube icon. Select Show AWS Explorer View. You should be able to see the Amazon S3 service and all your related buckets (if you have any).


Download and Install the AWS SDK for Java

You can download it here. Click on the AWS SDK for Java button.

Extract the file. Code Samples are located in /samples.

If you are using Maven, you can add the AWS SDK as a dependency in the pom.xml file.


<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.3.32</version>
</dependency>


Choose the version you want here.

Alternatively, you can just add it as a library (Right Click on the project -> Java Build Path -> Libraries -> Add External JARs).


Running the default AWS Sample Apps

We will begin by setting up a sample project so you can see how S3 works.

Click on the down arrow next to the Amazon icon.

Select New AWS Java Project.

Give a Project name.

Select your account.

Select Amazon S3 Sample, Amazon S3 Transfer Progress Sample, and AWS Console Application. Click Next.

Expand the newly created project and select AwsConsoleApp.java. In the Eclipse menu bar, click Run -> Run.

You should see output like the following:


===========================================
Welcome to the AWS Java SDK!
===========================================
You have access to 3 Availability Zones.
You have 14 Amazon EC2 instance(s) running.
You have 0 Amazon SimpleDB domain(s) containing a total of 0 items.
You have 8 Amazon S3 bucket(s), containing 71841 objects with a total size of 224551364 bytes.



If you run the S3Sample.java, you will get the following:


===========================================
Getting Started with Amazon S3
===========================================

Creating bucket my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1

Listing buckets
 - my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1

Uploading a new object to S3 from a file

Downloading an object
Content-Type: text/plain
    abcdefghijklmnopqrstuvwxyz
    01234567890112345678901234
    !@#$%^&*()-=[]{};':',.<>/?
    01234567890112345678901234
    abcdefghijklmnopqrstuvwxyz

Listing objects
 - MyObjectKey  (size = 135)

Deleting an object

Deleting bucket my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1


Integrate the S3 SDK

To begin, you need the file AwsCredentials.properties at the root of your classpath. You can copy the one generated for the sample project into your project's classpath, or create one with the following content:

secretKey=
accessKey=


Create an authenticated S3 object:

AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());

Objects in S3 are stored in buckets. Bucket names are globally unique: you cannot create a bucket with a name that another user has already taken. Each bucket stores objects as key/value pairs, and you can define the keys any way you want.


Create a bucket:

String bucketName = "my-s3-bucket-" + UUID.randomUUID();
s3.createBucket(bucketName);

For readability, I have skipped the exception handling; I will come back to it at the end. Bucket names must conform to DNS naming rules, so I usually base them on my domain name.


Delete a bucket:

s3.deleteBucket(bucketName);


List all buckets:

for (Bucket bucket : s3.listBuckets()) {
    System.out.println(" - " + bucket.getName());
}


Save an object in a bucket:

String key = "myObjectKey";

PutObjectRequest putObject = new PutObjectRequest(bucketName, key, myFile);
s3.putObject(putObject);

In the snippet above, myFile is a java.io.File.
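
If your data is not already in a file, the SDK can also upload from an InputStream. A small sketch follows; the stream, length, and content type here are placeholders:

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(contentLength); // required when uploading from a stream
metadata.setContentType("text/plain");

s3.putObject(new PutObjectRequest(bucketName, key, inputStream, metadata));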


Delete an object:

s3.deleteObject(bucketName, key);


Get/Download an object:

String key = "myObjectKey";
GetObjectRequest getObject = new GetObjectRequest(bucketName, key);
S3Object object = s3.getObject(getObject);
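
The returned S3Object exposes the object's data as a stream, for example:

// The content is an InputStream; read it and then close it to release the connection.
System.out.println("Content-Type: " + object.getObjectMetadata().getContentType());
InputStream content = object.getObjectContent();
// ... read from content ...
content.close();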


List objects by prefix:

ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
                    .withBucketName(bucketName)
                    .withPrefix("My"));
for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
    System.out.println(" - " + objectSummary.getKey() + "  " +
                                   "(size = " + objectSummary.getSize() + ")");
}


Uploading large files

Use TransferManager whenever possible. It makes use of S3 multipart uploads to achieve enhanced throughput, performance, and reliability. It uses multiple threads to upload multiple parts of a single upload at once.

AWSCredentials myCredentials = new BasicAWSCredentials(...);
TransferManager tx = new TransferManager(myCredentials);
Upload myUpload = tx.upload(myBucket, myFile.getName(), myFile);

 while (myUpload.isDone() == false) {
     System.out.println("Transfer: " + myUpload.getDescription());
     System.out.println("  - State: " + myUpload.getState());
     System.out.println("  - Progress: " + myUpload.getProgress().getBytesTransfered());
     // Do work while we wait for our upload to complete...
     Thread.sleep(500);
 }


Exceptions

Whenever you call any of the AWS APIs, you should surround the calls with try/catch blocks like the following:

try {
    // AWS requests here

} catch (AmazonServiceException ase) {
    System.out.println("Caught an AmazonServiceException, which means your request made it "
            + "to Amazon S3, but was rejected with an error response for some reason.");
    System.out.println("Error Message:    " + ase.getMessage());
    System.out.println("HTTP Status Code: " + ase.getStatusCode());
    System.out.println("AWS Error Code:   " + ase.getErrorCode());
    System.out.println("Error Type:       " + ase.getErrorType());
    System.out.println("Request ID:       " + ase.getRequestId());
} catch (AmazonClientException ace) {
    System.out.println("Caught an AmazonClientException, which means the client encountered "
            + "a serious internal problem while trying to communicate with S3, "
            + "such as not being able to access the network.");
    System.out.println("Error Message: " + ace.getMessage());
}


If you are interested in securing your S3 content for authenticated users only, check out AWS Java - Securing S3 content using query string authentication.

Eclipse "this compilation unit is not on the build path of a java project"

This may happen when the project is not recognized as a Java project, which breaks code completion.

My specs are as follow:

  • maven 3.0.x
  • JDK 1.7
  • SpringMVC


What you can do is make Eclipse recognize it as a Java project.

Select your project in Eclipse.
Click Project -> Properties.
Click Project Facets and select Java.

If the above does not solve your problem, check this post - Eclipse "this compilation unit is not on the build path of a java project".

Friday, February 15, 2013

Java Multipart upload OutOfMemory error - Java heap space

When you are uploading a large file under a low memory machine, you may get an OutOfMemory error - "java.lang.OutOfMemoryError: Java heap space".

If you are using a library that first buffers the uploaded file in memory, you will need an ample amount of RAM for each upload.

For example, if you upload something that is 64MB, the machine will need 64MB of RAM. If two files of 64MB are being uploaded simultaneously, the machine will need 128MB of RAM.

We will present two solutions below:

  1. increase the heap size
  2. read and upload files in chunks


1. Increase the heap size - Temporary Solution

If you have a memory leak, this is just treating the symptom and not curing the disease. RAM will eventually run out.

To give more RAM to Tomcat, put the following in ~/.bashrc:

export CATALINA_OPTS="-Xms512m -Xmx512m"

If you installed Tomcat from the Ubuntu distribution, put the following inside /etc/init.d/tomcat7 instead (you can echo the variable afterwards to confirm it is set):

JAVA_OPTS="-Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms512m -Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC"

Restart Tomcat.


2. Read and upload files in chunks

If you are not doing this already, please do: instead of buffering the entire upload in memory, read and write it in small chunks.
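
As a rough sketch of what this means (assuming a plain servlet request; the temporary path and buffer size are arbitrary), stream the request body to disk instead of holding it all in memory:

import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.http.HttpServletRequest;

public void saveUpload(HttpServletRequest request) throws Exception {
    byte[] buffer = new byte[8 * 1024]; // read 8KB at a time

    try (InputStream in = request.getInputStream();
         OutputStream out = new FileOutputStream("/tmp/upload.tmp")) {
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead); // only one small buffer is ever held in memory
        }
    }
}

Upload libraries with a streaming API (for example Apache Commons FileUpload's streaming mode) work the same way.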

Wednesday, February 13, 2013

NodeJS and ExpressJS - Tools and Code Structure

The purpose of this post is to present a set of tools and frameworks for rapid web development and a way to organize your code.

Tools and Frameworks

Development Tools
  • Sublime Text for coding

Frameworks
  • NodeJS
  • ExpressJS - API for web development

View/Template Technology
  • ejs - very close to html and easy learning curve
  • ejs-locals - supports decorator pattern

Libraries
  • futures - NodeJS is an event-driven, single-threaded framework, and deeply nested callbacks quickly turn into spaghetti; futures helps flatten them
  • i18next - localization on both the server and client sides
  • express-validator - provides validation and sanitization
  • string - provides common string operations like truncate
  • nodetime - NodeJS profiling

Code Organization

The following is the project structure I usually use. A trailing "/" means it is a folder, "-->" means it is inside a folder, and "---->" means it is two levels deep.

root_folder/
-->providers/
-->models/
-->routers/
-->views/
---->partials/
---->mobile/
---->layouts/
-->utils/
-->locales/
-->public/
---->images/
---->javascripts/
---->stylesheets/
-->app.js
-->app_mobile.js

The NodeJS application is located in the root_folder/ above.

providers/ is the business logic or data access layer.

models/ stores object entities.

routers/ stores controllers. I could have called this controllers/.

views/ stores the templates. All the default templates live directly inside this folder. I store the mobile templates inside views/mobile/. views/partials/ stores components like the header and footer, as well as templates shared by the web and mobile versions.

utils/ stores utility functions that can be used anywhere in the application (e.g. HTTP request helpers, string operations).

locales/ stores files for i18n.

public/ stores static assets like images, javascripts, and css stylesheets.

app.js and app_mobile.js store routers and global configurations for the web and mobile versions of the applications respectively.


Application Entry Point Configuration:

In the above folder structure, app.js and app_mobile.js control the application flow. I have put comments on the important parts of the code snippet.

// include the default libraries

var express = require('express'),
  engine = require('ejs-locals'),
  i18n = require('i18next'),
  expressValidator = require('express-validator'),
  flash = require('connect-flash'), // assumed flash middleware package; adjust to the one you use
  http = require('http');

// include your routers/controllers

var loginRouter = require('./routers/loginRouter');
var postRouter = require('./routers/PostRouter');

// i18n settings

var i18nOptions = {
  fallbackLng: 'en',
  resSetPath: 'locales/__lng__/__ns__.json',
  detectLngFromPath: 0,
  saveMissing: true,
  debug: false
};

i18n.init(i18nOptions);

// Server Configuration

var app = module.exports = express();
app.engine('ejs', engine);

app.configure(function(){
  app.set('views', __dirname + '/views');
  app.set('view engine', 'ejs');
  app.set('view options', {layout: 'mobile/layout.ejs'});
  app.set('mobile_prefix', 'mobile/');
  app.use(express.bodyParser());
  app.use(i18n.handle);
  app.use(expressValidator);
  app.use(express.methodOverride());
  app.use(express.cookieParser());
  app.use(express.session({ secret: 'your_secret_code', cookie: { maxAge: 31536000000} }));

  app.use(flash());

  app.use(function(req, res, next){

    res.locals.currentUser = req.session.user;
    res.locals.currentHost = req.header('host');
    res.locals.currentUrl = req.url;
    res.locals.currentLocale = '/' + i18n.lng();
    res.locals.isMobile = true;
    
    /* 
    if your image assets are language dependent, you can save them in folders locales/en/, locales/es
    */
    if(i18n.lng() == 'en') {
      res.locals.currentImageDir = '';
    } else {
      res.locals.currentImageDir = '/' + i18n.lng();
    }

    next();
  });

  app.use(app.router);
  app.use(express.static(__dirname + '/public'));
});

i18n.registerAppHelper(app)
    .serveDynamicResources(app);

i18n.serveWebTranslate(app, {
    i18nextWTOptions: {
      languages: ['fr', 'en'],
      }
});

app.configure('development', function(){
  app.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
});

app.configure('production', function(){
  app.use(express.errorHandler());
});

var requireRole = function(role) {
  return function(req, res, next) {
    if(req.session.user != null && req.session.user.role === role)
      next();
    else
      res.redirect('/' + req.i18n.lng() + '/login');
  }
};

var requireAuth = function() {
  return function(req, res, next) {

    if(req.session.user != null) {
      next();
    } else {
      console.log('Redirect link: ' + req.path);
      req.session.redirect_to = req.path;
      res.redirect('/' + req.i18n.lng() + '/login');
    }
  }
};

app.get('/:lng/login', loginRouter.login);
app.post('/:lng/login', loginRouter.submitLogin);
app.get('/:lng/posts', requireAuth(), postRouter.listAllPosts);

http.createServer(app).listen(8080, function(){
  console.log("Server listening on port %d in %s mode", 8080, app.settings.env);
});

Tuesday, February 12, 2013

Spring - Could not open Hibernate Session for transaction

This happens when the Spring application is idle for a while and a user tries to log in to the application.

The message is as follows:
Could not open Hibernate Session for transaction; nested exception is org.hibernate.TransactionException: JDBC begin failed

Subsequent logins after the first failure succeed.

This has to do with stale database connections: the database server has killed connections that the connection pool still considers alive.

There are two solutions:

  1. set the minimum connection pool size to zero
  2. set testOnBorrow

testOnBorrow tests whether a connection is still alive before it is handed out for your query.

To implement any of the above, you can use c3p0 or Apache's dbcp library.

The following demonstrates testOnBorrow using Apache DBCP:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="${jdbc.driverClassName}"/>
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
        <property name="maxActive" value="100"/>
        <property name="maxWait" value="1000"/>
        <property name="poolPreparedStatements" value="true"/>
        <property name="defaultAutoCommit" value="true"/>
        <property name="validationQuery" value="SELECT 1" />
        <property name="testOnBorrow" value="true" />
</bean>

Saturday, February 9, 2013

Symfony 2 SwiftMailer Spoof and Redirect Response Abnormal behavior

Specs:
  • Symfony 2.1.x
  • FOSUserBundle
  • SwiftMailer
I was trying to modify the registration flow in FOSUserBundle to do two things:
  • After a user registers, he/she will be logged in automatically (no need to confirm emails)
  • Send an email to the user upon successful registration
However, I found that after the user registers, he/she is always redirected to the login page (I want them to land on an authenticated page). If the user refreshes this page (via the browser's refresh button), they are logged in.

After numerous hours of debugging, I found that if I turn off the memory spooling option in SwiftMailer, everything works fine.

In app/config/config.yaml
swiftmailer:
    transport: %mailer_transport%
    encryption: %encryption%
    auth_mode: login
    host:      %mailer_host%
    username:  %mailer_user%
    password:  %mailer_password%
    #spool:     { type: memory }
If anyone knows why this is happening, please comment below.

Micro Instance out of memory - add swap

I was trying to update my Symfony project and got the following error while updating the database schema or assets:

Fatal error: Uncaught exception 'ErrorException' with message 'Warning: proc_open(): fork failed - Cannot allocate memory in

An Amazon EC2 t1.micro instance only has 613MB of RAM, which is not enough to run many processes at once.

What you can do is either 1) switch to a small instance or 2) add a 1GB swap file on disk.


Here are the commands to add a 1GB swap file (command output is shown inline):


sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 34.1356 s, 31.5 MB/s

sudo /sbin/mkswap /var/swap.1
Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=9cffd7c9-8ec6-4f6c-8eea-79aa3173a59a
sudo /sbin/swapon /var/swap.1


To turn off the swap do the following:

sudo /sbin/swapoff /var/swap.1

Monday, February 4, 2013

Elastic Load Balancer and Nginx - How to force HTTP to HTTPS


Amazon's Elastic Load Balancer supports HTTPS termination. Sometimes you may want to rewrite all HTTP requests to HTTPS. Elastic Load Balancer sets an HTTP header called X-Forwarded-Proto: if the request that came through the load balancer was HTTPS, the value of X-Forwarded-Proto will be https.

In your Nginx site conf file, check whether X-Forwarded-Proto is https. If it is not, rewrite the request to HTTPS.

upstream domain.com {
        ip_hash;
        server 10.194.206.112:9002 max_fails=1 fail_timeout=10s;
        server 10.212.44.16:9002 max_fails=1 fail_timeout=10s;
}

server {
        listen 80;
        server_name domain.com;
        access_log /vol/logs/nginx/web_portal.access.log;

        location / {

                if ($http_x_forwarded_proto != 'https') {
                        rewrite ^ https://$host$request_uri? permanent;
                }

                proxy_pass      http://domain.com;
                proxy_next_upstream error timeout invalid_header http_500;
                proxy_connect_timeout 1;
                proxy_set_header        Host            $host;
                proxy_set_header        X-Real-IP       $remote_addr;
                proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_intercept_errors on;
                error_page 502 503 504 =200 http://www.domain.com/error.html;
        }
}

Saturday, February 2, 2013

Amazon SES com.sun.mail.smtp.SMTPSendFailedException: 554 Message rejected: Email address is not verified.

If you are in the sandbox, make sure you verified both the receiver and sender email addresses.

If you have been approved for production access, make sure the sender email address is verified.

Sticky Session for Elastic Load Balancer

Elastic Load Balancer offers 2 kinds of stickiness:

  • Load Balancer Generated Cookie Stickiness
  • Application Generated Cookie Stickiness
If you support remember_me in your application, you should use Application Generated Cookie Stickiness.

Steps:

In the AWS Management Console, click on Load Balancers on the left sidebar.

Select a load balancer.

In the Description tab, click on edit for Port Configuration.

Choose Enable Application Generated Cookie Stickiness.

Input your Application's Cookie Name and click on Save.

EC2 instance terminated from custom AMI

If your instance terminates every time you try to launch it from an AMI, the AMI is probably corrupted.

What you need to do is to recreate the AMI from the original instance.

Make sure all processes except the SSH daemon are shut down.

Run netstat -tupln

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      701/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      701/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*                           491/dhclient3

Make sure only the above processes are running and then create the AMI.

How to deal with log files in AWS EC2

When you are launching your application for production, it's best to keep the logs in a separate drive.

What I typically do is mount 2 EBS volumes on an instance: one for source code, one for log files.

For convenience, you can mount the source code directory as /var/www and the log files directory as /var/log.

If you are looking for information on how to mount and format a volume, read Amazon EC2 - Mounting a EBS drive.

Amazon RDS - how to increase storage size and upgrade instance size

In the RDS Management Console, click on DB Instances in the left sidebar.

Right click on an instance and click on Modify.

You can change the DB Instance Class and increase the size here.

Keep in mind that you can only increase the storage size here, not decrease it. If you really want to reduce the storage size, you will need to manually export and import the data.

Click Apply Immediately if you want the changes to take effect right away. Otherwise, they will be carried out during the maintenance window.

FYI, changing a 20GB volume to a 100GB volume took around one hour for me.

Friday, February 1, 2013

Setting up SSL with Elastic Load Balancer

What is Elastic Load Balancer (ELB) used for?
  • distribute traffic to EC2 instances (single or multiple AZ)
  • can detect health of each EC2 instance
  • stick user sessions to EC2 instance
  • supports SSL termination and offloads SSL decryption
  • integrates with Amazon CloudWatch; can see request count and latency
  • auto-scaling for EC2 instances

Some common structures:
  • Internet-facing ELB serving reverse proxies (ex. Nginx, Lighttpd, Apache) linking to application servers
  • ELB to load balance all your application servers; ELB can be used as an internal backend load balancer

Pros:
  • fault tolerant by load balancing multi availability zones
  • can detect the health of EC2 instances
  • auto-scaling


In the following sections, we will demonstrate how to set up an Elastic Load Balancer with SSL.


Obtaining the private and public keys:

Skip this if you have your keys already.

Begin by generating a Certificate Signing Request (CSR)
openssl req -new -newkey rsa:2048 -nodes -keyout domain.key -out domain.csr

Fill in the following:
Country Name (2 letter code) [AU]:CA
State or Province Name (full name) [Some-State]:Ontario
Locality Name (eg, city) []:Toronto
Organization Name (eg, company) [Internet Widgits Pty Ltd]:ABC Software, Inc.
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:*.domain.com
Email Address []:admin@domain.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Two files will be produced:

  • domain.csr
  • domain.key


Submit the CSR to your registrar. After the registrar verifies your request, it will send you back the signed certificate. If you used GoDaddy, click here.


In the case of GoDaddy, you will get back a zip file containing:

  • domain.com.crt
  • gd_bundle.crt


When submitting to the Elastic Load Balancer, you will need to convert the keys to PEM format.

Private Key:
openssl rsa -in domain.key -text

Public Key Certificate (for GoDaddy, skip this):
openssl x509 -inform PEM -in domain.com.crt

Certificate Chain (for GoDaddy, skip this):
openssl x509 -inform PEM -in gd_bundle.crt


Setting up an Elastic Load Balancer

In the AWS Management Console (EC2 dashboard), click on Load Balancers on the left sidebar.

Click on Create Load Balancer.

Create a name for your load balancer.

Load Balancer Protocol and Port are the internet-facing interface, whereas the Instance Protocol and Port are where backend instances would be connected to.

Add HTTPS and HTTP like the following:

Load Balancer Protocol | Load Balancer Port | Instance Protocol | Instance Port
HTTP                   | 80                 | HTTP              | 80
HTTPS (Secure HTTP)    | 443                | HTTP              | 80

Remove HTTPS if you don't need it.

Click Continue.

In the SSL screen, fill in the following:
  • certificate name - Put anything you want. It's just an identifier
  • private key
  • public key certificate
  • certificate chain

See the section above for how to obtain the private key, public key certificate, and certificate chain.

In the next section, you will be choosing the SSL ciphers.

AES is the most secure of the available ciphers, while RC4 is the fastest stream cipher.

I use ELBDefaultNegotiationPolicy. Feel free to customize this.

Click on Continue to the Health Check section.

For Ping Path, change it to "/" instead of "/index.html". All the instances this Elastic Load Balancer will connect to will need to have Ping Protocol and Port open in their respective Security Group.

Adjust the other options if necessary. Click Continue.

Select the EC2 instances you want to connect to.

Review your settings and you are done!


DNS for Elastic Load Balancer

You should never create an A record pointing at the Elastic Load Balancer's IP addresses, because they can change at any time. Instead, you should use a CNAME record.

A CNAME cannot be used at the zone apex, so if you want the zone apex to point to the load balancer, use an Amazon Route 53 alias record.

In AWS Management Console, click on the Load Balancer tab in the left column.

Click on your load balancer and note down the DNS name shown in the Description tab.

In Amazon route 53, create your Hosted Zone. Read Using Amazon Route 53 to map a subdomain to an instance for more information.

Click on your desired Hosted Zone. Click on Create Record Set.

In the Edit Record Set panel, select A record in the Type drop-down box. Select Yes for Alias. Fill in your Elastic Load Balancer's DNS name as the Alias Target. Click Save Record Set.

Use any SSL checker online to make sure it's working.