
Friday, November 15, 2013

Munin not generating graphs - Make sure CRON job is running

I am currently using Ubuntu 12.04 on EC2.

If your Munin master is not generating graphs, check whether munin-cron is set up as a cron job.

List all the scheduled cron jobs:
crontab -l
If munin-cron is not set up, we will add it. Edit the crontab file
crontab -e
Let's make the Munin master run every 5 minutes. Append the following to the end of the file:
*/5 * * * * /usr/bin/munin-cron
Now run munin-cron manually as the munin user to confirm it works:
sudo -u munin munin-cron
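To confirm the job is actually producing output, a small check like the following can help. This is a sketch assuming the Ubuntu 12.04 default output directory /var/cache/munin/www; adjust the path if your installation differs.

```shell
# Count graph images regenerated in the last 10 minutes.
# A count of 0 suggests the cron job is still not firing.
fresh_graphs() {
    # $1: munin HTML output directory
    find "$1" -name '*.png' -mmin -10 2>/dev/null | wc -l
}
fresh_graphs /var/cache/munin/www
```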

Wednesday, October 9, 2013

Elastic Search on EC2 - Install ES cluster on Amazon Linux AMI

We will install ElasticSearch (ES) on an EC2 instance.

Here are the specs:
  • Amazon Linux AMI 2013.09
  • Medium instance
  • 64-bit machine
  • Elastic Search 0.90.5
  • Spring MVC
  • Maven
Begin by launching an instance.  If you use a micro instance, you may get an out-of-memory error in /var/log/syslog.  If you are not sure how to launch an instance, read Amazon EC2 - Launching Ubuntu Server 12.04.1 LTS step by step guide.

For the security group, you will need to open the following ports:
  • 22 (SSH)
  • 9300 (ElasticSearch Transport)
  • 9200 (HTTP Testing)

Attach Two EBS drives

We will be using one for saving data and one for logging.  Create and attach two EBS drives in the AWS console.

You will have two volumes: /dev/xvdf and /dev/xvdg.  Let's format them using XFS.
sudo yum -y install xfsprogs xfsdump
sudo mkfs.xfs /dev/xvdf
sudo mkfs.xfs /dev/xvdg
Make the data drive /vol. Make the log drive /vol1.
vi /etc/fstab
Append the following:
/dev/xvdf /vol xfs noatime 0 0
/dev/xvdg /vol1 xfs noatime 0 0
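Before rebooting, it is worth sanity-checking the entries (the /vol1 mount point in particular is easy to mistype). A minimal sketch that just verifies each line has the six fields fstab expects:

```shell
# An fstab line needs six whitespace-separated fields:
# device, mount point, fs type, options, dump, pass.
check_fstab_line() {
    # word-splitting on purpose to break the line into fields
    set -- $1
    if [ $# -eq 6 ]; then
        echo "ok: $1 on $2 type $3 ($4)"
    else
        echo "malformed"
    fi
}
check_fstab_line "/dev/xvdf /vol xfs noatime 0 0"
check_fstab_line "/dev/xvdg /vol1 xfs noatime 0 0"
```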
Mount the drives
mkdir /vol
mkdir /vol1
mount /vol
mount /vol1
Read Amazon EC2 - Mounting a EBS drive for more information.

ssh into the instance (the default user on the Amazon Linux AMI is ec2-user)
ssh -i {key} ec2-user@{ec2_public_address}

Update the machine
sudo yum -y update

Install Oracle Sun Java

In order to run ES efficiently, the JVM must be able to allocate a large virtual address space and perform garbage collection on large heaps without pausing the JVM.  There are also reports online suggesting that OpenJDK does not perform as well as Oracle Java for ES.  Feel free to let me know in the comments below if this is not the case.

Download Java 7 from Oracle.

Put it in /usr/lib/jvm.

Extract and install it
tar -zxvf jdk-7u40-linux-x64.gz
Rename the folder from jdk1.7.0_40 to jdk1.7.0

You should now have jdk1.7.0 inside /usr/lib/jvm

Set java, javac.
sudo /usr/sbin/alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0/bin/java" 1
sudo /usr/sbin/alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0/bin/javac" 1
Correct the permissions.
sudo chmod a+x /usr/bin/java
sudo chmod a+x /usr/bin/javac
sudo chown -R root:root /usr/lib/jvm/jdk1.7.0
Switch to Sun Java:
sudo /usr/sbin/alternatives --config java
Check your java version.
java -version

Download and install ElasticSearch

Download ElasticSearch (Current version as of this writing is 0.90.5).
sudo su
mkdir /opt/tools
cd /opt/tools
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.5.zip
unzip elasticsearch-0.90.5.zip
Install ElasticSearch Cloud AWS plugin.
cd elasticsearch-0.90.5
bin/plugin -install elasticsearch/elasticsearch-cloud-aws/1.15.0

Configuring ES

AWS can shut down your instances at any time.  If you are storing indexed data in ephemeral drives, you will lose all the data when all the instances are shut down.

There are two ways to persist data:
  • Store data in EBS via local gateway
  • Store data in S3 via S3 gateway
A restart of the nodes will recover data from the gateway. The EBS route is better for performance, while the S3 route is better for persistence.

We will be setting up an ES cluster with a local gateway, since the S3 gateway is deprecated at the time of this writing.  The ES team has promised a new backup mechanism in the future.

vi /opt/tools/elasticsearch-0.90.5/config/elasticsearch.yml

cluster.name: mycluster
cloud:
    aws:
        access_key:
        secret_key:
        region: us-east-1
discovery:
    type: ec2

We have specified a cluster called "mycluster" above. You will need to input your AWS access keys so the cloud-aws plugin can use the EC2 API for discovery.

We also need to ensure the JVM does not swap by doing two things:

1) Locking the memory (find this setting inside elasticsearch.yml)
bootstrap.mlockall: true
2) Set ES_MIN_MEM and ES_MAX_MEM to the same value. It is also recommended to set them to half of the system's available RAM. We will set this in the ElasticSearch Service Wrapper later in the article.

Create the data and log paths.
mkdir -p /vol/elasticsearch/data
mkdir -p /vol1/elasticsearch/logs
Set the data and log paths in config/elasticsearch.yml
path.data: /vol/elasticsearch/data
path.logs: /vol1/elasticsearch/logs 
Let's edit config/logging.yml
vi /opt/tools/elasticsearch-0.90.5/config/logging.yml
Edit these settings, making sure the following lines are present and uncommented:

logger:
  gateway: DEBUG
  org.apache: WARN
  discovery: TRACE


Testing the cluster
bin/elasticsearch -f
Browse to the ec2 address at port 9200
http://ec2-XX-XXX-XXX-XXX.compute-1.amazonaws.com:9200/
You should see the following:
{
  "ok" : true,
  "status" : 200,
  "name" : "Storm",
  "version" : {
    "number" : "0.90.5",
    "build_hash" : "c8714e8e0620b62638f660f6144831792b9dedee",
    "build_timestamp" : "2013-09-17T12:50:20Z",
    "build_snapshot" : false,
    "lucene_version" : "4.4"
  },
  "tagline" : "You Know, for Search" 
}
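If you want to script this check instead of eyeballing the browser, the status field can be extracted from the reply. A sketch, using a hard-coded sample of the response above; on the instance you would populate response from curl against port 9200:

```shell
# On the instance: response=$(curl -s http://localhost:9200/)
response='{"ok" : true, "status" : 200, "name" : "Storm"}'
# Pull the numeric "status" value out of the JSON reply (naive sed parse).
status=$(printf '%s' "$response" | sed -n 's/.*"status" : \([0-9]*\).*/\1/p')
echo "cluster responded with status $status"
```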


Installing ElasticSearch as a Service

We will be using the ElasticSearch Java Service Wrapper.

Download the service wrapper and move it to bin/service.
curl -L -k http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master | tar -xz
mv *servicewrapper*/service /opt/tools/elasticsearch-0.90.5/bin
Make ElasticSearch start automatically when the system reboots.
bin/service/elasticsearch install
Make the ElasticSearch service a default command (we will call it es_service)
ln -s /opt/tools/elasticsearch-0.90.5/bin/service/elasticsearch /usr/bin/es_service
Start the service
es_service start
You should see:
Starting ElasticSearch...
Waiting for ElasticSearch......
running: PID:2503 

Tweaking the memory settings

There are three settings you should care about:

  • ES_HEAP_SIZE
  • ES_MIN_MEM
  • ES_MAX_MEM
It is recommended to set ES_MIN_MEM to the same value as ES_MAX_MEM.  In practice, you can just set ES_HEAP_SIZE, as it is assigned to both ES_MIN_MEM and ES_MAX_MEM.


We will be tweaking these settings in the service wrapper's elasticsearch.conf instead of elasticsearch's.

vi /opt/tools/elasticsearch-0.90.5/bin/service/elasticsearch.conf

set.default.ES_HEAP_SIZE=1024

There are a few things you need to be aware of.

  1. You need to leave some memory for the OS for non-ElasticSearch operations. Try leaving at least half of the available memory to the OS.
  2. As a reference, use 1024 MB for every 1 million documents you are saving.
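The two rules can be combined into a quick back-of-the-envelope calculation. A sketch with illustrative numbers only (the 3.75 GB figure assumes an m1.medium):

```shell
# Heap for an expected document count, capped at half of system RAM.
docs_millions=2
ram_mb=3750                              # an m1.medium has about 3.75 GB
want_mb=$((docs_millions * 1024))        # 1024 MB per million documents
cap_mb=$((ram_mb / 2))                   # leave half the RAM to the OS
if [ "$want_mb" -lt "$cap_mb" ]; then heap_mb=$want_mb; else heap_mb=$cap_mb; fi
echo "set.default.ES_HEAP_SIZE=$heap_mb"
```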
Restart the service.

Ubuntu EC2 - Install Sun Oracle Java

Download Java 7 from Oracle.

Put it in /usr/lib/jvm.

Extract and install it
tar -zxvf jdk-7u40-linux-x64.gz
Rename the folder from jdk1.7.0_40 to jdk1.7.0

You should now have jdk1.7.0 inside /usr/lib/jvm

Set java, javac.
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0/bin/javac" 1
Correct the permissions.
sudo chmod a+x /usr/bin/java
sudo chmod a+x /usr/bin/javac
sudo chown -R root:root /usr/lib/jvm/jdk1.7.0
If you have more than one version of java, you can always switch them using
sudo update-alternatives --config java
Check your java version.
java -version

Tuesday, July 16, 2013

Install Ansible on ubuntu EC2

Begin by spinning a new EC2 ubuntu instance.


Install Ansible and its dependencies
sudo apt-get install python-pip python-dev
sudo pip install ansible
sudo apt-get install python-boto 
Make sure the boto version is greater than 2.3.

To check boto version:
pip freeze | grep boto

Make the hosts file
sudo mkdir /etc/ansible
sudo touch /etc/ansible/hosts
Put the IPs of your machines in the hosts file.

Ex. [webservers] is a group name for the 2 IPs below.
[webservers]
255.255.255.255
111.111.111.111
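To sanity-check the inventory from a script, you can count the entries in a group. A sketch using a throwaway file path (/tmp/hosts.example is just for illustration; the real file is /etc/ansible/hosts):

```shell
# Write a sample inventory, then count the hosts under [webservers].
cat > /tmp/hosts.example <<'EOF'
[webservers]
255.255.255.255
111.111.111.111
EOF
# Print the group section, then count lines that start with a digit (IPs).
count=$(sed -n '/^\[webservers\]/,/^\[/p' /tmp/hosts.example | grep -c '^[0-9]')
echo "webservers: $count hosts"
```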

Check the Playbook Settings

ansible-playbook playbook.yml --list-hosts

You will see the servers that the Playbook will run against:

  play #1 (create instances): host count=1
    localhost

  play #2 (configure instances): host count=0


Play the Playbook

ansible-playbook playbook.yml


AWS credentials

If you are going to use the ec2 module, you will need to set up the access keys in your environment.
vi ~/.bashrc
Append the following with your keys (log in to your AWS console to get the access key pair):
export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY}
export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_KEY}
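Before running any ec2 module, it is worth checking the variables actually made it into the environment. A small sketch (check_env is a hypothetical helper, not part of Ansible):

```shell
# Report whether a named environment variable is set and non-empty.
check_env() {
    eval "val=\${$1}"
    if [ -n "$val" ]; then echo "$1 is set"; else echo "$1 is missing"; fi
}
check_env AWS_ACCESS_KEY_ID
check_env AWS_SECRET_ACCESS_KEY
```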

Friday, January 25, 2013

Setting up Lighttpd Load Balancer on EC2 Ubuntu

Lighttpd is an asynchronous server. Along with Nginx, it is one of the fast, event-driven servers designed to address the C10k problem. If you want to set up Nginx, read Setting up Nginx on EC2 Ubuntu.

This tutorial will demonstrate how to use Lighttpd to load balance application servers.


Creating an EC2 Instance

In the AWS Management Console, begin by creating a t1.micro Ubuntu Server 12.04.1 LTS 64-bit instance. (If you don't know how to create an instance, read Amazon EC2 - Launching Ubuntu Server 12.04.1 LTS step by step guide.)

Here are some guidelines:
  • Uncheck Delete on Termination for the root volume
  • Add ports 22, 80 and 443 to the Security Group, and call it lighttpd.

Install Lighttpd

ssh -i {key} ubuntu@{your_ec2_public_address}

sudo apt-get update -y

sudo apt-get install -y lighttpd

Lighttpd should be running. To check its status, run

service lighttpd status

All the configuration files are located in /etc/lighttpd

To enable/disable a module
  • Use /usr/sbin/lighty-enable-mod and /usr/sbin/lighty-disable-mod
  • Or create a symbolic link from /etc/lighttpd/conf-available/{module} to /etc/lighttpd/conf-enabled/{module}
To load balance application servers, we will be using the 10-proxy.conf file as a template.

cd /etc/lighttpd/conf-available
cp 10-proxy.conf 11-proxy.conf
vi 11-proxy.conf

We are interested in the following two variables:
  • proxy.balance - choose from hash, round-robin or fair
  • proxy.server - put the servers you want to load balance to
For example:
proxy.balance     = "hash"
proxy.server     = ( "" =>
                     (
                       ( "host" => "10.204.199.85",
                         "port" => 80
                       ),
                       ( "host" => "10.202.111.140",
                         "port" => 80
                       )
                     )
                    )
The above settings will load balance to two other servers based on IP.
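If you prefer round-robin over hash, only proxy.balance changes; the same two placeholder hosts shown above would look like:

```
proxy.balance     = "round-robin"
proxy.server      = ( "" =>
                      (
                        ( "host" => "10.204.199.85", "port" => 80 ),
                        ( "host" => "10.202.111.140", "port" => 80 )
                      )
                    )
```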

Restart the server.
service lighttpd restart
Test the server.

To check the status:
netstat -ntulp

Tuesday, January 22, 2013

How to build a NodeJS AMI on EC2

This demo will provide guidelines on how to configure a NodeJS EC2 instance and create a NodeJS AMI on Ubuntu.

Specs:

Ubuntu Server 12.04.1 LTS 64-bit


Create a Ubuntu Server 12.04.1 LTS 64-bit t1.micro instance


Uncheck Delete on Termination for the EBS root disk.

Create a Security Group called Node JS Production (or anything you want).

Add ports 22, 80, 443 and 3000 to the Security Group. (I am adding port 3000 because I run the app on port 3000.)

Launch the instance.

In the AWS Management Console, Volumes -> Create Volume.

Make the volume with:
  • Type = Standard
  • Size = 20 GB
  • An Availability Zone that matches the EC2 instance's Availability Zone
  • Device name = /dev/xvdf

Attach this EBS volume to the EC2 instance we just created.

ssh into your instance.
Ex. ssh -i {key} ubuntu@{ec2-address}.compute-1.amazonaws.com
sudo apt-get update

We are going to format xvdf with the XFS file system. Refer to Amazon EC2 - Mounting a EBS drive.


Install NodeJS and other dependencies

sudo apt-get install -y nodejs npm

If you run "node --version", you will find the node version is 0.6.12. We want to use 0.8.18, since it is a lot faster.

sudo npm install -g n
sudo n 0.8.18

Now "sudo node --version" will show version 0.8.18 while "node --version" will show 0.6.12


Install Git and fetch your code (Optional)

sudo apt-get install git -y
mkdir /vol/src
cd /vol/src

git config --global user.name "your_name"
git config --global user.email "your_email"
git config --global github.user "your_github_login"
git clone ssh://git@github.com/username/repo.git

You will want to establish a connection with Github using ssh rather than https: if you are building an image for auto-scaling, you don't want to input the username and password every time. See Generating SSH Keys for more details.

Test your application by running

sudo node {your_app}


Making the NodeJS start on boot

To make a robust image, we want the NodeJS app to start on boot and respawn when crashed. We will write a simple service. All service scripts are located in /etc/init.

Let's create the file /etc/init/{your_app_name}_service.conf

sudo vi /etc/init/{your_app_name}_service.conf

Put the following into the file:

#######################

#!upstart

description "my NodeJS server"
author      "Some Dude"

# start on startup
start on started networking
stop on shutdown

# Automatically Respawn:
respawn
respawn limit 5 60

script
    cd /vol/src/{your_app}
    exec sudo node /vol/src/{your_app}/app.js >> /vol/log/app_`date +"%Y%m%d_%H%M%S"`.log 2>&1
end script

post-start script
   # Optionally put a script here that will notify you Node has (re)started
   # /root/bin/hoptoad.sh "node.js has started!"
end script
#######################


Refer to upstart stanzas for more details about what each field means.

Create the directory to store NodeJS outputs:

sudo mkdir /vol/log

I have marked each log file with the start time of the app. You will probably want to change this to create logs daily.
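As a sketch of that daily variant, the date format just drops the time component (the path mirrors the /vol/log directory above):

```shell
# One log file per day instead of one per (re)start.
daily_log="/vol/log/app_$(date +%Y%m%d).log"
echo "$daily_log"
```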

To check if the services are running:

initctl list | grep {your_app_name}_service

To start a service:

sudo service {your_app_name}_service start

To stop a service:

sudo service {your_app_name}_service stop


Now reboot your EC2 instance in the AWS console.

Test if your site is started.


Create a NodeJS AMI


In the AWS Management Console, click instances at the left sidebar.

Right click on the NodeJS instance created above and click on Create Image.

Fill in the image name. I like to name things systematically; if you plan to write deploy scripts and do auto-scaling, it makes it easier to identify what each image is. I use the following convention:

{namespace}_{what_is_it}_{date}

Ex. mycompany_blog_20130118

You will want the date because you may create an image every time you deploy new code.

Leave the other options as default, and click on Create Image.

On the left sidebar, click on AMIs under Images.

You can see the status of the AMI we just created.

You should launch an instance from this AMI and verify all the data is there.

Thursday, January 17, 2013

Running Wordpress on Amazon EC2


This article is about how to install Wordpress on Amazon EC2 with MySQL running on Amazon RDS.


Launch an EBS-backed AMI

In the EC2 console, click Launch and select Ubuntu Server 12.04.1 LTS 64-bit (AMI id = ami-3d4ff254).
Use t1.micro.
Set delete on termination to false for the root device.
Set termination behaviour to Stop.
Add ports 22, 80 and 443 to the security group


Install Software

sudo apt-get update
sudo apt-get install apache2 libapache2-mod-auth-mysql php5-mysql mysql-client libapache2-mod-php5

We are not going to install mysql-server, since we will be using RDS.


Use Amazon RDS as the database

If you want to set up your own MySQL database, you can do so. For the purpose of this tutorial, we will use Amazon RDS because it takes care of scaling, replication and backups (to S3) with minimal effort.

Read Using MySQL on Amazon RDS to create a MySQL database.

After you have created a database, note the database name, username, password, and endpoint address of the DB instance. The endpoint address will look like wordpress.a2ks0zoxdxq.us-east-1.rds.amazonaws.com.

You can ssh into your ec2 instance and run the mysql command to access the database.

mysql -h {endpoint_address} -P 3306 -u{username} -p{password}

Note that when I used a different syntax for the above mysql command, I kept getting an access denied error. Please use the syntax specified above.


Download Wordpress

sudo mkdir /var/www
cd /var/www
wget http://wordpress.org/latest.tar.gz
tar -xzvf latest.tar.gz
rm latest.tar.gz
mv wordpress {name_of_your_blog}


Configure Wordpress

cd /var/www/{name_of_your_blog}
mv wp-config-sample.php wp-config.php
vi wp-config.php

Change these:

define('DB_NAME', 'database_name_here');
/** MySQL database username */
define('DB_USER', 'username_here');
/** MySQL database password */
define('DB_PASSWORD', 'password_here');
/** MySQL hostname */
define('DB_HOST', 'localhost');
/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

The DB_HOST will be the endpoint we specified above. Include the port :3306 as well.
Ex. wordpress.a2ks0zoxdxq.us-east-1.rds.amazonaws.com:3306

Generate keys for the following:
https://api.wordpress.org/secret-key/1.1/salt/

define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');
define('AUTH_SALT',        'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT',   'put your unique phrase here');
define('NONCE_SALT',       'put your unique phrase here');

Configure Apache

cd /etc/apache2/sites-available
cp default wordpress
vi wordpress

Change DocumentRoot and Directory from /var/www/ to your blog's directory.

Change AllowOverride from None to All. (If you don't do this, you can't use pretty permalinks in Wordpress.)


<Directory /var/www/{name_of_your_blog}/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
</Directory>

Save the File.

sudo a2dissite default
sudo a2ensite wordpress

sudo a2enmod rewrite

sudo service apache2 reload

Launch the site

If you are starting a new Wordpress site, access the site from the browser, follow the on-screen instructions, and you are done.


Porting data from local MySQL to RDS MySQL

To export data:

mysqldump -u{username} -p{password} -h {host} {database} > backup.sql

To import data:

mysql -u{username} -p{password} -h {host} {database} < backup.sql


Friday, December 28, 2012

Cassandra - installing on Ubuntu 12.04 Amazon EC2

> sudo apt-get update

> sudo vi /etc/apt/sources.list

Add the following lines:
deb http://www.apache.org/dist/cassandra/debian 12x main
deb-src http://www.apache.org/dist/cassandra/debian 12x main
> sudo apt-get update

You will get a GPG error like:
W: GPG error: http://www.apache.org 10x InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 4BD736A82B5C1B00
Request the public key. You may need to substitute the key ID (the bold text above) when running gpg.

> gpg --keyserver wwwkeys.pgp.net --recv-keys 4BD736A82B5C1B00
> sudo apt-key add ~/.gnupg/pubring.gpg
> sudo apt-get update

> sudo apt-get install cassandra

> ps auwx | grep cassandra

> kill -9 {pid_of_cassandra}

Change the data, log locations:

> sudo vi /etc/cassandra/cassandra.yaml

Change the following locations if needed. (Cassandra does not run well on EBS - refer to the Apache Cassandra documentation.)
data_file_directories - (/var/lib/cassandra/data)
commitlog_directory -  (/var/lib/cassandra/commitlog)
saved_caches_directory - (/var/lib/cassandra/saved_caches)


Start Cassandra with:
> sudo /etc/init.d/cassandra start

Stop Cassandra with:
> sudo /etc/init.d/cassandra stop

Note: If you see the following, that means Cassandra is already started
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 7199; nested exception is:
java.net.BindException: Address already in use
> netstat -tulpn

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:8005          0.0.0.0:*               LISTEN      20358/java    
tcp        0      0 127.0.0.1:9160          0.0.0.0:*               LISTEN      5996/jsvc.exec
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      18848/mysqld  
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      20358/java    
tcp        0      0 0.0.0.0:47697           0.0.0.0:*               LISTEN      5996/jsvc.exec
tcp        0      0 0.0.0.0:50324           0.0.0.0:*               LISTEN      5996/jsvc.exec
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      619/sshd      
tcp        0      0 127.0.0.1:7000          0.0.0.0:*               LISTEN      5996/jsvc.exec
tcp        0      0 0.0.0.0:7199            0.0.0.0:*               LISTEN      5996/jsvc.exec
tcp6       0      0 :::22                   :::*                    LISTEN      619/sshd      
udp        0      0 0.0.0.0:68              0.0.0.0:*                           431/dhclient3  

If Cassandra is running, you should see ports 9160, 7000 and 7199 in use.
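A small sketch to script that check rather than scanning netstat output by eye (check_port is a hypothetical helper):

```shell
# Probe one TCP port via netstat and print its state.
check_port() {
    if netstat -tln 2>/dev/null | grep -q ":$1 "; then
        echo "port $1: listening"
    else
        echo "port $1: not listening"
    fi
}
for p in 9160 7000 7199; do check_port "$p"; done
```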

> sudo service cassandra status
This should return the status of Cassandra

Tuesday, December 11, 2012

Amazon EC2 - Mounting a EBS drive


This post applies to

  • mounting a new EBS volume
  • mounting an existing EBS volume or snapshot

Specs:
  • Ubuntu Server 12.04.1 LTS 64-bit
  • XFS

1.) Creating an EBS volume (skip this if you already have a volume)

  • Login to your Amazon EC2 Admin Console
  • In the left sidebar menu, expand Elastic Block Volume and click on Volumes
  • Click on Create Volume
  • Specify a size (Ex. 8 GiB)
  • Set the Availability Zone to the zone your instance is in (it is very important that the EBS volume and the instance are in the same Availability Zone)
  • Click "Yes, Create"
  • Wait till the console says the EBS is available (blue circle).


2.) Attaching an EBS volume to an instance

  • In the Volumes list, right click on the volume and click Attach Volume
  • Select your instance and set the device name (Ex. /dev/xvdf)
  • Wait till the console says the volume is attached

3.) Mounting the EBS volume

ssh into your instance  (ex. ssh -i {key} {user}@{ec2_address})


sudo apt-get update && sudo apt-get upgrade -y


Check if the drive is available by
sudo fdisk -l
Sample output:
Disk /dev/xvdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000 
Disk /dev/xvdf doesn't contain a valid partition table
Install xfs tools.
sudo apt-get install -y xfsprogs xfsdump
sudo mkfs.xfs /dev/xvdf 
Add the new volume to the file system configuration file.
sudo vi /etc/fstab
Add the following to the end of the file.
/dev/xvdf /vol xfs noatime 0 0
Save the file.

/vol is the mount point for the partition /dev/xvdf.

Note that xvd[f-p] are the device names for EBS volumes. You can choose which letter to use when attaching the EBS volume in the EC2 admin console.


sudo mkdir -m 000 /vol

sudo mount /vol

cd into /vol to see if it's accessible.
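Rather than trusting that the directory exists, you can ask the kernel whether the volume is really mounted. A sketch that reads /proc/mounts (mounted is a hypothetical helper):

```shell
# Check whether a path appears as a mount point in /proc/mounts.
mounted() {
    if grep -q " $1 " /proc/mounts; then echo "mounted"; else echo "not mounted"; fi
}
mounted /vol
```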