Vagrant is a tool for building and managing virtual machine environments. It configures virtualization providers such as VirtualBox and Linux Containers, and is commonly used together with provisioning tools like Ansible and Chef.
To get started, download Vagrant here - https://www.vagrantup.com/downloads.html
Create a folder and run
vagrant init
This should create a Vagrantfile.
Similar to Docker, Vagrant depends on base images, called boxes. Let's begin by downloading an Ubuntu box:
vagrant box add hashicorp/precise32
Open the Vagrantfile and edit the following line:
config.vm.box = "hashicorp/precise32"
You can always find other boxes here:
https://atlas.hashicorp.com/boxes/search
Let's boot up the box:
vagrant up
vagrant ssh
You can check the status of the machine by running:
vagrant status
Do not delete the /vagrant folder inside the guest; it's a synced folder shared with the host.
Let's begin by loading a script that will install Apache.
In your host machine's project folder (the one containing the Vagrantfile), create a file called bootstrap.sh
vi bootstrap.sh
Add the following:
#!/usr/bin/env bash
# Install Apache and serve the synced /vagrant folder as the document root
apt-get update
apt-get install -y apache2
if ! [ -L /var/www ]; then
  rm -rf /var/www
  ln -fs /vagrant /var/www
fi
In the Vagrantfile, add
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/precise32"
config.vm.provision :shell, path: "bootstrap.sh"
end
Reload the box and rerun the provisioners:
vagrant reload --provision
Test the status of apache:
vagrant ssh
service apache2 status
Try running:
wget -qO- 127.0.0.1
Add the following line to the Vagrantfile for port forwarding, so we can see web pages from our host's browser
config.vm.network :forwarded_port, guest: 80, host: 4567
Run
vagrant reload
In your browser, go to
http://127.0.0.1:4567
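Putting the pieces together, the complete Vagrantfile from this post looks like the following (a sketch that simply combines the box, provisioning, and port-forwarding snippets shown above):

```ruby
Vagrant.configure("2") do |config|
  # Base box
  config.vm.box = "hashicorp/precise32"
  # Run bootstrap.sh when provisioning to install Apache
  config.vm.provision :shell, path: "bootstrap.sh"
  # Forward guest port 80 to host port 4567
  config.vm.network :forwarded_port, guest: 80, host: 4567
end
```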
If you want to share this box via Atlas to share or back up your files, register an account at
https://atlas.hashicorp.com/
Run
vagrant login
vagrant share
In the browser, access the URL printed in the terminal.
When you have finished sharing, press Ctrl+C to terminate it.
When you are done with your vagrant box, you can use the following:
vagrant suspend - state is saved, quick to start up, consumes space
vagrant halt - guest OS is shut down, consumes space
vagrant destroy - removes the guest machine
You can use vagrant up to start it again.
By default, the vagrant box is backed by VirtualBox.
But you can easily change the provider to VMware Fusion or AWS:
vagrant up --provider=vmware_fusion
vagrant up --provider=aws
Saturday, August 22, 2015
Friday, July 24, 2015
Migrating Splunk indexed data
First, stop Splunk.
cd into your splunk/bin directory
./splunk stop
Create a new folder (ex. /mnt/splunk_data).
cp -rp splunk/var/lib/splunk/* /mnt/splunk_data/
Change SPLUNK_DB to point to /mnt/splunk_data.
vi splunk/etc/splunk-launch.conf
Find SPLUNK_DB in the file and change the path.
SPLUNK_DB=/mnt/splunk_data
You may also want to change the retention policy and the maximum index size (these settings live in indexes.conf):
# 30 days
frozenTimePeriodInSecs = 2592000
# 90 GB
maxTotalDataSizeMB = 90000
It's recommended to size the index with the following rule of thumb:
Total storage = daily average indexing rate x retention period x 1/2 (for example, 1 GB/day x 30 days x 1/2 = 15 GB)
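As a sketch, you can derive both settings from your daily indexing rate and retention period with shell arithmetic (the 1 GB/day rate below is just an example value):

```shell
daily_rate_gb=1        # average daily indexing rate in GB (example)
retention_days=30      # how long to keep indexed data

# frozenTimePeriodInSecs: retention in days converted to seconds
frozen_secs=$(( retention_days * 86400 ))
# maxTotalDataSizeMB: daily rate x retention x 1/2, in MB
max_mb=$(( daily_rate_gb * 1000 * retention_days / 2 ))

echo "frozenTimePeriodInSecs = $frozen_secs"
echo "maxTotalDataSizeMB = $max_mb"
```

With these inputs it prints 2592000 and 15000, matching the 30-day and 15 GB example figures above.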
Start Splunk.
./splunk start
To tune Splunk settings, check:
http://docs.splunk.com/Documentation/Splunk/4.3.1/Installation/CapacityplanningforalargerSplunkdeployment
Thursday, July 23, 2015
Install Splunk Forwarding and Receiving
We will be using Splunk Light.
Click on the menu icon at the upper right corner. Choose Data -> Receiving.
In Configure receiving, choose 9997 as the receiving port.
In your application instance, install the universal splunk forwarder.
http://www.splunk.com/en_us/download/universal-forwarder.html
Extract it into the /opt/splunk_forwarder directory
sudo ./splunk start
sudo ./splunk enable boot-start -user ec2-user
List all the forward servers:
./splunk list forward-server
Active forwards:
None
Configured but inactive forwards:
None
If it prompts you for username and password, use
username: admin
password: changeme
Add the receiving server to the forwarder:
./splunk add forward-server <receiving-host>:9997
Test the connection:
./splunk list forward-server
Active forwards:
None
Configured but inactive forwards:
<receiving-host>:9997
If it's not active, remember to add port 9997 to your security group.
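When a forward shows up as configured but inactive, a quick reachability check from the forwarder tells you whether the problem is network-level. Here's a sketch using bash's /dev/tcp (port_open is a hypothetical helper; the hostname is a placeholder for your receiving server):

```shell
# Check whether a TCP port (e.g. the receiver's 9997) is reachable.
port_open() {
    if timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
        echo "port $2 on $1 reachable"
    else
        echo "port $2 on $1 blocked - check your security group"
    fi
}

port_open splunk-receiver.example.com 9997   # replace with your receiver
```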
Add data to monitor:
./splunk add monitor <path-to-logs> -index main -sourcetype <sourcetype>
To list what's being monitored:
./splunk list monitor
Installing splunk on AWS
Begin by downloading Splunk Light here: http://www.splunk.com/en_us/download.html. You will probably need to register a Splunk account before it lets you download it.
Upload Splunk to your ec2 instance using SCP. For example
scp -i <key.pem> splunklight-6.2.4-271043-Linux-i686.tgz ec2-user@<ec2-host>:tmp
Above, I uploaded the Splunk tgz file to the tmp folder on my ec2 instance.
You will need to install glibc.i686 first.
yum -y install glibc.i686
Create a folder called /opt if it doesn't exist.
Extract your tgz file inside /opt:
tar xvzf splunklight-6.2.4-271043-Linux-i686.tgz
The splunk executable is located in /opt/splunk/bin. cd into it.
Start splunk:
sudo ./splunk start --accept-license
Start splunk on boot:
sudo ./splunk enable boot-start -user ec2-user
You should be able to view Splunk's web interface on port 8000 of your ec2 public address.
Other useful commands:
./splunk stop
./splunk restart
Wednesday, July 8, 2015
show user cronjobs in ubuntu
Show all the users and their respective cronjobs
for user in $(cut -f1 -d: /etc/passwd); do echo $user; crontab -u $user -l; done
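A quieter variant of the same loop suppresses the "no crontab for ..." errors and prints only users that actually have jobs (a sketch; reading other users' crontabs still requires root):

```shell
# List only users that have a crontab; skip the rest silently.
for user in $(cut -f1 -d: /etc/passwd); do
    jobs=$(crontab -u "$user" -l 2>/dev/null) || continue
    printf '== %s ==\n%s\n' "$user" "$jobs"
done
```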
Sunday, July 5, 2015
boot2docker cannot cd into a directory
Let's say you are running your server using:
docker-compose up
You may then try to run bash in your container:
docker ps (grab the container id)
docker exec -it 301 bash
When you cd into a mounted host volume, if you get a "killed" message or it just logs you out, try the following:
boot2docker restart
docker - error fetching ubuntu packages
If you ever see the following error and you are using boot2docker, run "boot2docker restart"
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/libe/libevent/libevent-2.0-5_2.0.21-stable-1ubuntu1.14.04.1_amd64.deb Could not resolve 'archive.ubuntu.com'
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/m/memcached/memcached_1.4.14-0ubuntu9_amd64.deb Could not resolve 'archive.ubuntu.com'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Saturday, July 4, 2015
Rudix - the easiest way to install unix software for OSX
Install Rudix
> curl -s https://raw.githubusercontent.com/rudix-mac/rpm/2015.5/rudix.py | sudo python - install rudix
To install any packages, for example, erlang,
> sudo rudix install erlang
boot startup scripts with chkconfig
All the startup scripts in Ubuntu are located in /etc/init.d. You can turn these scripts on or off using chkconfig.
> chkconfig
To turn a script on at runlevels 2, 3, and 5
> chkconfig memcached on --level 235
To turn off a script
> chkconfig memcached off
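To see which scripts are enabled at a particular runlevel, you can filter the chkconfig listing with awk. The sketch below runs on canned sample output so the parsing is visible; in practice you would pipe in `chkconfig --list`:

```shell
# Sample chkconfig --list output (hard-coded for illustration).
sample='memcached 0:off 1:off 2:on 3:on 4:on 5:on 6:off
cups 0:off 1:off 2:off 3:off 4:off 5:off 6:off
sshd 0:off 1:off 2:on 3:on 4:on 5:on 6:off'

# Field 5 is the runlevel-3 state; print services that are "3:on".
echo "$sample" | awk '$5 == "3:on" { print $1 }'
```

This prints memcached and sshd.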
EBS expand file system to recognize volume size in Ubuntu
A common scenario when using EC2 is expanding EBS volume sizes. You may be building a new AMI or just expanding an existing volume. This article is about how to make your file system (ex. xfs, ext) recognize the size of your new volume.
After you expand your volume, ssh into the instance.
Show the instance's volumes and their sizes.
> sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
xvda1 ext4 128G / /
xvdb ext3 840G /media/ephemeral0
xvdm linux_raid_member 10G ip-10-188-5-211:0
└─md127 xfs 60G /mnt/data
xvdn linux_raid_member 10G ip-10-188-5-211:0
└─md127 xfs 60G /mnt/data
xvdo linux_raid_member 10G ip-10-188-5-211:0
└─md127 xfs 60G /mnt/data
xvdl linux_raid_member 10G ip-10-188-5-211:0
└─md127 xfs 60G /mnt/data
xvdj linux_raid_member 10G ip-10-188-5-211:0
└─md127 xfs 60G /mnt/data
xvdk linux_raid_member 10G ip-10-188-5-211:0
└─md127 xfs 60G /mnt/data
xvdf ext4 30G /mnt/shared
In the example above, we see /dev/xvda1 has 128G and is using file system ext4.
If you want more details on the file system types of each volume, you can use the file command:
> sudo file -s /dev/xvd*
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=ebbf1b1c-fb71-40aa-93a3-056b455e5127 (needs journal recovery) (extents) (large files) (huge files)
/dev/xvdb: Linux rev 1.0 ext3 filesystem data, UUID=07b9bb55-97cc-47e8-b968-6f158e66ff60 (needs journal recovery) (large files)
/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=bff77q92-806c-44a5-a260-5a50025283ba (needs journal recovery) (extents) (large files) (huge files)
/dev/xvdj: data
/dev/xvdk: data
/dev/xvdl: data
/dev/xvdm: data
/dev/xvdn: data
/dev/xvdo: data
> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda1 202:1 0 128G 0 disk /
xvdb 202:16 0 840G 0 disk /media/ephemeral0
xvdm 202:192 0 10G 0 disk
└─md127 9:127 0 60G 0 raid0 /mnt/data
xvdn 202:208 0 10G 0 disk
└─md127 9:127 0 60G 0 raid0 /mnt/data
xvdo 202:224 0 10G 0 disk
└─md127 9:127 0 60G 0 raid0 /mnt/data
xvdl 202:176 0 10G 0 disk
└─md127 9:127 0 60G 0 raid0 /mnt/data
xvdj 202:144 0 10G 0 disk
└─md127 9:127 0 60G 0 raid0 /mnt/data
xvdk 202:160 0 10G 0 disk
└─md127 9:127 0 60G 0 raid0 /mnt/data
xvdf 202:80 0 30G 0 disk /mnt/shared
> df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 4.0G 3.9G 52% /
tmpfs 17G 0 17G 0% /dev/shm
/dev/xvdb 827G 201M 785G 1% /media/ephemeral0
/dev/xvdf 30G 8.0G 21G 29% /mnt/shared
/dev/md127 60G 15G 46G 25% /mnt/data
Notice that lsblk reports 128G for /dev/xvda1 while df only shows 7.9G: the volume has grown but the filesystem has not. For ext2, ext3, and ext4, you can grow it with the resize2fs command.
Resize /dev/xvda1:
> sudo resize2fs /dev/xvda1
For xfs, grow the filesystem by its mount point (in the example above, /dev/md127 is mounted at /mnt/data):
> sudo xfs_growfs -d /mnt/data
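To avoid reaching for the wrong tool, you can dispatch on the filesystem type that lsblk reports. grow_cmd below is a hypothetical helper that just mirrors the resize2fs / xfs_growfs split described above:

```shell
# Map a filesystem type to the command that grows it in place.
grow_cmd() {
    case "$1" in
        ext2|ext3|ext4) echo "resize2fs <device>" ;;
        xfs)            echo "xfs_growfs -d <mountpoint>" ;;
        *)              echo "unsupported: $1" ;;
    esac
}

grow_cmd ext4   # -> resize2fs <device>
grow_cmd xfs    # -> xfs_growfs -d <mountpoint>
```

In real use you would look the type up first, e.g. `lsblk -no FSTYPE /dev/xvda1`, then run the suggested command against the device or mount point.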
Friday, July 3, 2015
Using Zend opcache with php-fpm
Install Zend OPcache
> yum install php55-opcache
Check if the module exists:
> php -m | grep cache
Add the following to your php.ini
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60
Check if opcache is enabled:
> php-fpm -i | grep cache
The opcache settings may be located in /etc/php-5.5.d/opcache.ini.
To pick a value for opcache.max_accelerated_files, count your PHP files:
find . -type f -print | grep php | wc -l
If the number of php files is 2000, you may want to set it to some number slightly larger than that.
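As a sketch, the count-plus-headroom calculation can be done in one go (the 25% margin is an arbitrary cushion, not an official recommendation):

```shell
# Count .php files under the current directory and suggest a setting
# with roughly 25% headroom.
php_files=$(find . -type f -name '*.php' | wc -l)
suggested=$(( php_files + php_files / 4 ))
echo "php files: $php_files"
echo "suggested opcache.max_accelerated_files: $suggested"
```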
Thursday, July 2, 2015
Upgrading php5.4 to php5.5 in Amazon EC2
First, stop Apache, Nginx, and php-fpm if you are running them.
List all the php 5.4 modules:
> yum list installed | grep php54
php54.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-bcmath.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-cli.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-common.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-devel.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-fpm.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-gd.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-intl.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-mbstring.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-mcrypt.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-mysqlnd.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-pdo.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-pecl-apc.x86_64 3.1.13-1.12.amzn1 @amzn-updates
php54-pecl-igbinary.x86_64 1.1.2-0.2.git3b8ab7e.6.amzn1 @amzn-updates
php54-pecl-memcache.x86_64 3.0.7-3.10.amzn1 @amzn-updates
php54-pecl-memcached.x86_64 2.1.0-1.5.amzn1 @amzn-updates
php54-pecl-xdebug.x86_64 2.2.1-1.6.amzn1 @amzn-updates
php54-process.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-soap.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-xml.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-xmlrpc.x86_64 5.4.21-1.46.amzn1 @amzn-updates
Remove all of them:
yum remove php54.x86_64 php54-bcmath.x86_64 php54-cli.x86_64 php54-common.x86_64 php54-devel.x86_64 php54-fpm.x86_64 php54-gd.x86_64 php54-intl.x86_64 php54-mbstring.x86_64 php54-mcrypt.x86_64 php54-mysqlnd.x86_64 php54-pdo.x86_64 php54-pecl-apc.x86_64 php54-pecl-igbinary.x86_64 php54-pecl-memcache.x86_64 php54-pecl-memcached.x86_64 php54-pecl-xdebug.x86_64 php54-process.x86_64 php54-soap.x86_64 php54-xml.x86_64 php54-xmlrpc.x86_64
Install php 5.5
yum install php55.x86_64 php55-bcmath.x86_64 php55-cli.x86_64 php55-common.x86_64 php55-devel.x86_64 php55-fpm.x86_64 php55-gd.x86_64 php55-intl.x86_64 php55-mbstring.x86_64 php55-mcrypt.x86_64 php55-mysqlnd.x86_64 php55-pdo.x86_64 php55-pecl-apc.x86_64 php55-pecl-igbinary.x86_64 php55-pecl-memcache.x86_64 php55-pecl-memcached.x86_64 php55-pecl-xdebug.x86_64 php55-process.x86_64 php55-soap.x86_64 php55-xml.x86_64 php55-xmlrpc.x86_64
You may need to tweak the php-fpm settings
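Rather than retyping the package list, you can derive the php55 names from the installed php54 ones. The sketch below runs sed over canned yum output; in practice you would pipe in `yum list installed | grep php54`:

```shell
# Sample "yum list installed | grep php54" output (hard-coded).
sample='php54.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-fpm.x86_64 5.4.21-1.46.amzn1 @amzn-updates
php54-gd.x86_64 5.4.21-1.46.amzn1 @amzn-updates'

# Keep the package name column and swap the version prefix.
echo "$sample" | awk '{print $1}' | sed 's/^php54/php55/'
```

This prints php55.x86_64, php55-fpm.x86_64, and php55-gd.x86_64, ready to paste after `yum install`.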
Wednesday, July 1, 2015
Configure Symfony to send log errors to Swift Mailer
If you want both 4xx and 5xx errors, use action_level: error; if you only want critical errors, use action_level: critical.
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            buffer_size: 200
            handler: grouped
        grouped:
            type: group
            members: [streamed, buffered]
        streamed:
            type: stream
            path: "%log_dir%/moonlight_%kernel.environment%.log"
            level: debug
        buffered:
            type: buffer
            buffer_size: 200
            handler: swift
        swift:
            type: swift_mailer
            from_email:
            to_email:
            subject: Critical Error Alert
            level: debug
Elastic beanstalk docker - map symfony logs to S3
In config.yml
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            buffer_size: 200
            handler: nested
        nested:
            type: stream
            path: "%log_dir%/moonlight_%kernel.environment%.log"
            level: debug
Set log_dir in parameters.yml to /var/log/nginx or wherever you want.
Create a file called Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Logging": "/var/log/nginx"
}
The Logging entry above needs to match the log_dir you set in parameters.yml.
In Elastic Beanstalk settings, click on Configuration on the left side, then software configuration.
Check "Enable log file rotation to Amazon S3. If checked, service logs are published to S3."
If you are using a custom IAM role, you will need to grant it access to the logs location in S3:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1435793320000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-*/resources/environments/logs/*"
      ]
    }
  ]
}
Log rotations happen about every 15 mins. You can search the s3 directory elasticbeanstalk-*/resources/environments/logs/* for logs.
Saturday, June 20, 2015
Resetting git changes
To undo the last commit (keeping your changes staged):
git reset --soft HEAD^
To unstage files added before a commit:
git reset
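A throwaway demo of both resets, run in a temporary repo (the identity settings exist only so the commits succeed):

```shell
# Scratch repo with two commits.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email demo@example.com
git config user.name demo
echo one > f && git add f && git commit -qm "first"
echo two >> f && git add f && git commit -qm "second"

# Undo the last commit; the change to f stays staged.
git reset --soft HEAD^
git status --short          # shows "M  f" (staged)

# Stage a new file, then unstage everything again.
echo x > g && git add g
git reset
git status --short          # shows " M f" and "?? g"
```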
Sunday, June 14, 2015
Docker container cannot access mounted volumes in OSX host
If you are running Nginx or Apache in your docker images while using those to write files (ex. cache) on the host machine, chances are you will get a permission error.
Bash into your container:
> docker-compose run <service> bash
cd into the location of your mounted volume and run "ls -l"; you may see the following:
drwxr-xr-x 1 1000 staff 646 Jun 15 03:39 app
In Nginx or Apache, the user is usually www-data. We need to associate www-data with the UID 1000.
In your Dockerfile, add the following:
> RUN usermod -u 1000 www-data
Now if you check the permission again, you would see the correct user.
drwxr-xr-x 1 www-data staff 646 Jun 15 03:39 app
MAMP Nginx cannot connect to phpmyadmin #2002
It is a really weird error.
In file /MAMP/bin/phpMyAdmin/config.inc.php, search for the line:
$cfg['Servers'][$i]['host'] = 'localhost';
Change to:
$cfg['Servers'][$i]['host'] = '127.0.0.1';
Saturday, June 13, 2015
Create your own Docker Registry with S3
The purpose of this post is to deploy your own custom image to Elastic Beanstalk through your own Docker registry, storing the images on Amazon S3.
Let's begin by cloning Docker Registry 2.0.
git clone https://github.com/docker/distribution.git
Generate self-signed certificates.
cd distribution
mkdir certs
openssl req \
-newkey rsa:2048 -nodes -keyout certs/domain.key \
-x509 -days 365 -out certs/domain.crt
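The openssl command above will prompt for the certificate subject interactively. A non-interactive variant that also inspects the result (the CN below is the boot2docker IP used later in this post; substitute your registry's hostname or IP):

```shell
mkdir -p certs
# -subj skips the interactive prompts; CN should match the registry host.
openssl req -newkey rsa:2048 -nodes -keyout certs/domain.key \
    -x509 -days 365 -subj "/CN=192.168.59.103" -out certs/domain.crt
# Double-check the subject and validity window of what was generated.
openssl x509 -in certs/domain.crt -noout -subject -dates
```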
Add TLS to config.yml
vi ./cmd/registry/config.yml
Add the tls block to the http section like the following:
http:
    addr: :5000
    secret: asecretforlocaldevelopment
    debug:
        addr: localhost:5001
    tls:
        certificate: /go/src/github.com/docker/distribution/certs/domain.crt
        key: /go/src/github.com/docker/distribution/certs/domain.key
Remove filesystem settings and use AWS s3 as repository storage:
storage:
    #filesystem:
    #    rootdirectory: /tmp/registry
    s3:
        accesskey: awsaccesskey
        secretkey: awssecretkey
        region: us-west-1
        bucket: bucketname
        encrypt: true
        secure: true
        v4auth: true
        chunksize: 5242880
        rootdirectory: /s3/object/name/prefix
Settings: http://docs.docker.com/registry/configuration/#storage
Save this.
Build the image with a name (ex. docker_registry)
> docker build -t docker_registry .
Tag it. Note that I am using boot2docker on MacOSX. You can get your IP address by running "boot2docker ip".
> docker tag docker_registry:latest 192.168.59.103:5000/docker_registry:latest
Run the registry.
> docker run -p 5000:5000 docker_registry
If you try to push an image, you will get an error saying you need to add an insecure registry.
> boot2docker ssh "echo $'EXTRA_ARGS=\"--insecure-registry 192.168.59.103:5000\"' | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"
Push an image:
> docker push 192.168.59.103:5000/{image}
Sunday, June 7, 2015
Install Docker Compose on MacOSX or Ubuntu
Install docker-compose
> curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
> chmod +x /usr/local/bin/docker-compose
Check version
> docker-compose version
Run service
> docker-compose up
Check all running services
> docker-compose ps
Bash access to a running service
> docker-compose run worker bash
Saturday, June 6, 2015
Using a Dockerfile to create an image
Each instruction in a Dockerfile creates a layer. An image can have a maximum of 127 layers.
Create a Dockerfile in a new folder.
> vi Dockerfile
Paste the following inside the Dockerfile
# this is a comment
FROM ubuntu:14.04
MAINTAINER Kenneth
# gem needs Ruby installed first (the original elided the package list; ruby is an assumption)
RUN apt-get update && apt-get install -y ruby ruby-dev
RUN gem install json
Build the image
> docker build -t kenneth/sinatra:v2 .
Tag the image
> docker tag kenneth/sinatra:v2 ouruser/sinatra:dev
Wednesday, June 3, 2015
boot2docker in terminal MacOSX
Initialize boot2docker
> boot2docker init
Start boot2docker
> boot2docker start
Set the environment variables in the current terminal.
> eval "$(boot2docker shellinit)"
Test run
> docker run hello-world
To run an Nginx server (-d for running in background):
> docker run -d -P --name web nginx
Stop
> boot2docker stop
Check status
> boot2docker status
Build an image:
> docker build -t <name> .
Access home folder:
> cd $HOME
Mount a local directory into the container:
> mkdir -p $HOME/site && echo "my new site" > $HOME/site/index.html
> docker run -d -P -v $HOME/site:/usr/share/nginx/html --name mysite nginx
upgrade boot2docker
> boot2docker stop
> boot2docker upgrade
Useful docker commands
Check docker version:
> docker version
Search an image named tutorial:
> docker search tutorial
Download an image:
> docker pull learn/tutorial
Install ping on your image:
> docker run learn/tutorial apt-get install -y ping
Show running processes:
> docker ps -l
Grab the ID above and commit the change with a name. Docker will return a new ID for the new image. (The first few characters of the ID are enough.)
> docker commit 698 learn/ping
See list of running container:
> docker ps
Grab the container ID above; you can inspect the container's information by running
> docker inspect <container>
Inspect one element of the container specs
> docker inspect -f '{{ .NetworkSettings.IPAddress }}' nostalgic_morse
Push to a docker repository:
> docker push learn/ping
Build an image from a Dockerfile:
> docker build -t <name> .
Stop all containers:
> docker stop $(docker ps -qa)
Run an app in interactive mode, removing the container on exit:
> docker run -it --rm -p 3000:8080 <image>
See mapped ports:
> docker port <container>
If you are using boot2docker, check the ip by:
> boot2docker ip
You should get something like 192.168.59.103
Stop the container:
> docker stop <container>
Remove the container:
> docker rm <container>
Show the standard output of a container:
> docker logs <container>
Follow the standard output of a container (if you are running a web app, you can watch its output):
> docker logs -f <container>
See the applications running inside the container:
> docker top <container>
List all local images:
> docker images
List all containers, including exited ones:
> docker ps -a
Remove all exited containers:
> docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs docker rm
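That pipeline just pulls the first column (the container ID) out of every Exited row and feeds it to docker rm. A sketch of what it does, run against canned `docker ps -a` output instead of a live daemon (the IDs below are made up):

```shell
# Simulated `docker ps -a` output; a real run would pipe docker ps -a
# through the same grep | cut, then into `xargs docker rm`.
ps_output='CONTAINER ID IMAGE COMMAND STATUS
698abc123def learn/tutorial "apt-get" Exited (0) 2 hours ago
aed84ee21bde nginx "nginx -g" Up 3 minutes
0b2616b0e5a8 training/webapp "python" Exited (1) 5 minutes ago'

# grep keeps only Exited rows; cut takes the first space-separated field.
echo "$ps_output" | grep Exit | cut -d ' ' -f 1
```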
Commit a change to the container with ID 0b2616b0e5a8:
> docker commit -m "Added json gem" -a "Kate Smith" 0b2616b0e5a8 ouruser/sinatra:v2
Run bash inside a container:
> docker run -t -i training/sinatra /bin/bash
Port mapping from host to container.
> docker run -d -p 5000:5000 training/webapp python app.py
Port mapping from only localhost port to container.
> docker run -d -p 127.0.0.1:5000:5000 training/webapp python app.py
Port mapping from localhost dynamic port to container.
> docker run -d -p 127.0.0.1::5000 training/webapp python app.py
Port mapping to UDP.
> docker run -d -p 127.0.0.1:5000:5000/udp training/webapp python app.py
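All of the -p forms above follow the pattern [ip:][hostPort:]containerPort[/udp]. A small hypothetical helper (parse_port_spec is my name, not a docker command) that splits such a spec, to make the variants concrete:

```shell
# parse_port_spec: split a docker -p publish spec into its parts.
# Handles the forms above: 5000:5000, 127.0.0.1:5000:5000,
# 127.0.0.1::5000 (dynamic host port) and 127.0.0.1:5000:5000/udp.
parse_port_spec() {
  spec="$1"
  proto="${spec##*/}"                       # text after the last slash, if any
  [ "$proto" = "$spec" ] && proto="tcp"     # no slash -> default protocol
  spec="${spec%%/*}"                        # drop the /proto suffix
  IFS=':' read -r a b c <<EOF
$spec
EOF
  if [ -n "$c" ]; then
    # three fields: ip:hostPort:containerPort (hostPort may be empty)
    echo "ip=$a host=${b:-dynamic} container=$c proto=$proto"
  else
    # two fields: hostPort:containerPort, bound on all interfaces
    echo "ip=0.0.0.0 host=$a container=$b proto=$proto"
  fi
}

parse_port_spec "127.0.0.1:5000:5000/udp"
```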
Check where on the host the container's port 5000 is mapped to:
> docker port <container> 5000
Change name to web and run the web app.
> docker run -d -P --name web training/webapp python app.py
Inspect the name of the container:
> docker inspect -f "{{ .Name }}" <container>
Remove a running container:
> docker rm -f <container>
Create a web container and link it to a db container (--link name:alias):
> docker run -d --name db training/postgres
> docker run -d -P --name web --link db:db training/webapp python app.py
Inspect the link information:
> docker inspect -f "{{ .HostConfig.Links }}" web
Output: [/db:/web/db]
When containers are linked, docker automatically creates environment variables and /etc/hosts entries
> sudo docker run --rm --name web2 --link db:db training/webapp env
Output:
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
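Those injected variables are easy to consume with plain parameter expansion. A sketch, using a hard-coded value in place of the real environment a linked container would see:

```shell
# In a linked container DB_PORT comes from the environment;
# here it is hard-coded so the snippet runs anywhere.
DB_PORT="tcp://172.17.0.5:5432"

hostport="${DB_PORT#*//}"   # strip the tcp:// scheme -> 172.17.0.5:5432
db_addr="${hostport%%:*}"   # everything before the colon -> 172.17.0.5
db_port="${hostport##*:}"   # everything after the colon  -> 5432
echo "connect to $db_addr on port $db_port"
```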
It is recommended to use the /etc/hosts entries rather than the environment variables to locate linked services.
> docker run -t -i --rm --link db:webdb training/webapp /bin/bash
> cat /etc/hosts
Output:
172.17.0.7 aed84ee21bde
. . .
172.17.0.5 webdb 6e5cdeb2d300 db
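Resolving the webdb alias from that file is a one-liner. A sketch run against a canned copy of the /etc/hosts contents above (inside the container you would read the real /etc/hosts instead):

```shell
# Canned /etc/hosts-style data, copied from the output above.
hosts='172.17.0.7 aed84ee21bde
172.17.0.5 webdb 6e5cdeb2d300 db'

# lookup: print the IP of any line whose alias list contains the given name.
lookup() {
  echo "$hosts" | awk -v name="$1" '{ for (i = 2; i <= NF; i++) if ($i == name) print $1 }'
}

lookup webdb
```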
Ping the address:
> apt-get install -yqq inetutils-ping
> ping webdb
Restart db: (Note that /etc/hosts in the linked containers will update itself automatically)
> docker restart db
Add a data volume with -v:
> docker run -d -P --name web -v /webapp training/webapp python app.py
Note that docker volumes are persistent. Even if the container is removed, the volume will still be there.
Mount a host directory into the container with read-write permissions:
> docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
Mount a host directory into the container with read-only permissions:
> docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py
Mount a single file:
> docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash
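The -v forms above follow the pattern host[:container[:mode]]. A hypothetical helper (parse_volume_spec is my name, not a docker command) that makes the three variants explicit:

```shell
# parse_volume_spec: split a docker -v spec into source, destination and mode.
# Covers the forms above: /webapp (anonymous volume),
# /src:/opt (bind mount, read-write), /src:/opt:ro (read-only bind mount).
parse_volume_spec() {
  IFS=':' read -r src dst mode <<EOF
$1
EOF
  if [ -z "$dst" ]; then
    echo "anonymous volume at $src mode=rw"
  else
    echo "bind $src -> $dst mode=${mode:-rw}"
  fi
}

parse_volume_spec "/src/webapp:/opt/webapp:ro"
```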
Example run:
> docker run -it -d --name flask -p 3000:8080 flask/image:latest
Elastic Beanstalk with Git
In your git project directory, run
eb init
It will ask for your security access keys; you can get them here:
https://console.aws.amazon.com/iam/home?#security_credential
When asked for the solution stack, use the following (if you are using docker):
50) 64bit Amazon Linux 2015.03 v1.4.1 running Docker 1.6.0
After it's done, it will show you the location with your auth info:
/Users/{username}/.elasticbeanstalk/aws_credential_file
Deploy the application by:
eb start
If you see the following boto error, install boto:
ImportError: No module named boto
Instruction: https://github.com/boto/boto
Saturday, May 30, 2015
Set up MAMP with Nginx, php-fpm and symfony on MacOSX
To make this work, you will need to have MAMP installed.
The steps are as follows: install php-fpm, add your symfony config to nginx.conf, start the server.
Install php5-fpm
You can use MacPorts or Homebrew to install php5-fpm.
After it's installed, locate it with "which php5-fpm". Note that it may be called "php-fpm" in some installations.
php5-fpm requires two configuration files to be set up: /etc/php-fpm.conf and /etc/php-fpm/pool.d/{whatever}.conf
Sample php-fpm.conf:
https://github.com/perusio/php-fpm-example-config/blob/tcp/fpm/php5-fpm.conf
Sample pool.d/{whatever}.conf
http://www.if-not-true-then-false.com/2011/nginx-and-php-fpm-configuration-and-optimizing-tips-and-tricks/
Start php5-fpm by
> sudo php5-fpm
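The nginx server blocks that follow forward PHP requests to 127.0.0.1:9000, so the php-fpm pool must listen there. A minimal pool sketch along those lines (key names come from the stock php-fpm pool config; adjust user, paths and worker counts for your install):

```ini
; Minimal hypothetical pool: listen on the TCP port nginx's fastcgi_pass uses.
[www]
listen = 127.0.0.1:9000
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```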
In your MAMP config - /conf/nginx/nginx.conf, put in the relevant http and https server settings:
server {
listen 80;
server_name localhost_moonlight;
root /Users/rei999/Documents/symfony_workspace/moonlight/web;
location / {
try_files $uri /app_dev.php$is_args$args;
}
# DEV
# This rule should only be placed on your development environment
# In production, don't include this and don't deploy app_dev.php or config.php
location ~ ^/(app_dev|config)\.php(/|$) {
fastcgi_pass 127.0.0.1:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
error_log /Applications/MAMP/logs/nginx/moonlight_error.log;
access_log /Applications/MAMP/logs/nginx/moonlight_access.log;
}
server {
listen 443 ssl;
server_name localhost_moonlight;
ssl_certificate /Applications/MAMP/conf/ssl/server.crt;
ssl_certificate_key /Applications/MAMP/conf/ssl/server.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
root /Users/rei999/Documents/symfony_workspace/moonlight/web;
location / {
try_files $uri /app_dev.php$is_args$args;
}
# DEV
# This rule should only be placed on your development environment
# In production, don't include this and don't deploy app_dev.php or config.php
location ~ ^/(app_dev|config)\.php(/|$) {
fastcgi_pass 127.0.0.1:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
error_log /Applications/MAMP/logs/nginx/moonlight_error.log;
access_log /Applications/MAMP/logs/nginx/moonlight_access.log;
}
Depending on your php-fpm config, fastcgi_pass can also point to a Unix socket instead of 127.0.0.1:9000.
Start MAMP Nginx.
Wednesday, May 13, 2015
Handy RabbitMQ commands
Run rabbitmq in the background:
rabbitmq-server -detached
Checking status:
rabbitmqctl status
Stopping:
rabbitmqctl stop
Show queues:
rabbitmqctl list_queues
Show queues with message status
rabbitmqctl list_queues name messages_ready messages_unacknowledged
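list_queues prints one tab-separated row per queue. A sketch that flags queues with unacknowledged messages, run against canned output rather than a live broker (queue names are made up):

```shell
# Canned `rabbitmqctl list_queues name messages_ready messages_unacknowledged`
# output; a real run would pipe rabbitmqctl into the same awk.
queues="$(printf 'task_queue\t12\t3\nlogs\t0\t0\n')"

# Column 3 is messages_unacknowledged; report queues where it is non-zero.
echo "$queues" | awk -F'\t' '$3 > 0 { print $1 " has " $3 " unacked messages" }'
```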
Installing pip and supervisor on mac
Start by installing pip from the following link:
https://pip.pypa.io/en/stable/installing.html
Install supervisor (there's no stable release of supervisor at the time of this writing, so use the --pre flag):
> pip install supervisor --pre
Supervisor should be running already.
Copy the config to /etc
> echo_supervisord_conf > /etc/supervisord.conf
If supervisord is not already started, run supervisord from the directory where your desired config is located. If you don't want to use /etc/supervisord.conf and keep the config somewhere else, run supervisord from that directory instead.
Start supervisord
> supervisord
Restart all supervisor processes.
> sudo supervisorctl restart all
If you want to start a program, you need to use :*, as supervisord names each program along with its process name. Assume you defined your program as [program:hello]:
> supervisorctl
> start hello:*
Stop supervisor:
> ps -ef | grep supervisord
This should show the process ID running supervisord. Terminate it by issuing a kill command.
501 95787 1 0 Fri07pm ?? 0:02.92 /usr/bin/python /usr/local/bin/supervisord
> kill -s SIGTERM 95787
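The find-the-PID-and-kill dance above can be scripted. A sketch using a throwaway sleep process in place of supervisord, so it is safe to run anywhere:

```shell
# Start a stand-in background process and record its PID
# (mirrors finding supervisord's PID via ps -ef | grep).
sleep 60 &
pid=$!

# Send SIGTERM, as done with supervisord above, then reap the child.
kill -s TERM "$pid"
wait "$pid" 2>/dev/null || true   # wait's status reflects the signal; ignore it

# kill -0 probes for existence without sending a signal.
if kill -0 "$pid" 2>/dev/null; then
  echo "still running"
else
  echo "terminated"
fi
```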
For a sample supervisord config, check here.
Thursday, April 9, 2015
phpstorm xdebug php.ini settings
Put the following into your php.ini
[xdebug]
zend_extension="/Applications/MAMP/bin/php/php5.5.22/lib/php/extensions/no-debug-non-zts-20121212/xdebug.so"
xdebug.profiler_enable=1
xdebug.remote_enable=1
xdebug.remote_host=127.0.0.1
xdebug.remote_port=9000
xdebug.idekey=PHPSTORM
Friday, March 13, 2015
Bee run failed for Golang and Beego
If you are using beego and bee in your Golang project, you may run into an issue that says "Run failed" when you start "bee run".
Assuming you cd into your project folder and the error still persists, try running "bee version". You may see output like:
bee :1.2.4
beego :1.4.3
exec: "go": executable file not found in $PATH
If you see the above message, you may find that the newest Golang SDK from Google names the "go" command "goapp".
What you need to do is to rename "go_appengine/goapp" to "go_appengine/go".
And rename "go_appengine/goroot/goapp" to "go_appengine/goroot/go".