Jan 21 2013

Building Your Own Cloud From Scratch

Category: Cloud Computing, Systems | jgoulah @ 8:50 PM

Intro

There are a lot of private cloud solutions out there that already bundle great things into a full cloud stack – networking, dashboards, storage, and a framework that puts all the pieces together, amongst other things. But there is also a decent amount of overhead to getting these frameworks set up, and maybe you want more flexibility over some of the components, or even just something a little more homegrown. What might a lightweight cloud machine bootstrapping process look like if it were implemented from scratch?

Getting Started

We can use libvirt and KVM/QEMU to put something reasonably robust together. Start by installing those packages:

apt-get install qemu-kvm libvirt-bin virtinst virt-viewer

The next important thing is to set up a bridge for proper networking on this host, which will allow the guests to communicate on the same network. There are plenty of articles out there that can help you set this up, but the basics are that you want your bridge assigned the IP that your eth0 interface previously had, and then add the eth0 interface to the bridge. In this example 192.168.1.101 is the IP of the host machine:

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual

auto br0
iface br0 inet static
  address 192.168.1.101
  netmask 255.255.255.0
  gateway 192.168.1.1
  network 192.168.1.0
  bridge_ports eth0
  
ifup br0

Building the Image

The first step is setting up a base template that you create your instances from. So grab an ISO to start from – we’ll use Debian, but this process works with any distro:

% wget http://cdimage.debian.org/debian-cd/6.0.6/amd64/iso-cd/debian-6.0.6-amd64-netinst.iso

And allocate a file on disk at the size you’d like your template to be. I created one here at 8GB; it can always be expanded later, so it only needs to be big enough to hold the initial base image that all instances start from. Generally smaller is better because of the copy step when instances get created later.

% dd if=/dev/zero of=/var/lib/libvirt/images/debbase.img bs=1M count=8192
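As an aside (not part of the original workflow above): the dd writes out all 8GB of zeros, which takes a while. A sparse file gives the guest the same logical size without the up-front write; a quick sketch using a throwaway path:

```shell
# Allocate the image sparsely: seek past 8192 one-MB blocks and write nothing.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=0 seek=8192 2>/dev/null
ls -l "$img"    # logical size is 8589934592 bytes (8GB)
du -k "$img"    # but almost no blocks are actually allocated yet
rm -f "$img"
```

The tradeoff is that a sparse image can fragment as it fills in, so for a base template that gets copied around, fully allocating it as in the post is a reasonable choice too.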

Now you can start the Linux installation, noting the --graphics args for the ability to connect with VNC. Our installation target disk is the one we created above, debbase.img, and we are giving it 512M RAM and 1 CPU.

% virt-install --name=virt-base-deb --ram=512 --graphics vnc,listen=0.0.0.0  --network=bridge=br0 \
--accelerate --virt-type=kvm --vcpus=1 --cpuset=auto --cpu=host --disk /var/lib/libvirt/images/debbase.img \
--cdrom debian-6.0.6-amd64-netinst.iso

Once that’s started up you can use VNC from your client machine to connect to this instance graphically and run through the normal install setup. There are plenty of clients out there, but a decent one is Chicken of the VNC. It’s also possible at this step to create the image off a PXE boot or similar bootstrapping mechanism.

Extract the Partition

Here we take advantage of QEMU’s ability to load Linux kernels and init ramdisks directly, thereby circumventing bootloaders such as GRUB. The guest can then be launched with the partition containing the root filesystem as its virtual disk.

There are two steps to make this work. First you’ll need the vmlinuz and initrd files, and the easiest way to get those is to copy them from the base image we setup above:

% scp BASEIP:/boot/vmlinuz-2.6.32-5-amd64 /var/lib/libvirt/kernels/
% scp BASEIP:/boot/initrd.img-2.6.32-5-amd64 /var/lib/libvirt/kernels/

The next step is to extract the root partition from that same base image. We want to take a look at how those partitions are laid out so that we can get the right numbers to pass to the dd command.

% sfdisk -l -uS /var/lib/libvirt/images/debbase.img

Disk /var/lib/libvirt/images/debbase.img: 1044 cylinders, 255 heads, 63 sectors/track
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/var/lib/libvirt/images/debbase.img1   *      2048  15988735   15986688  83  Linux
/var/lib/libvirt/images/debbase.img2      15990782  16775167     784386   5  Extended
/var/lib/libvirt/images/debbase.img3             0         -          0   0  Empty
/var/lib/libvirt/images/debbase.img4             0         -          0   0  Empty
/var/lib/libvirt/images/debbase.img5      15990784  16775167     784384  82  Linux swap / Solaris

We are going to pull the first partition out; note how the numbers line up with the debbase.img1 line. We start at sector 2048 and copy 15986688 sectors of 512 bytes each:

% dd if=/var/lib/libvirt/images/debbase.img of=/var/lib/libvirt/debian-tmpl skip=2048 count=15986688 bs=512
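As a quick sanity check on those numbers, the sector values from sfdisk translate into byte offsets and sizes like so:

```shell
# sfdisk reported 512-byte sectors: partition 1 starts at sector 2048
# and spans 15986688 sectors.
start_sector=2048
num_sectors=15986688
echo "offset=$((start_sector * 512)) bytes"   # 1048576 (the 1MiB alignment point)
echo "size=$((num_sectors * 512)) bytes"      # 8185184256 (the 8GB image minus swap)
```

The same byte offset also works with `mount -o loop,offset=1048576` if you ever want to poke at the partition in place instead of extracting it.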

Templatize the Image

Now we have a disk file that serves as our image template. There are a few things we want to change directly on this template. Note that we are using a few all-caps placeholders ending in -TMPL that we’ll replace later with sed. We can edit the template’s files by mounting the disk:

% mkdir -p /tmp/newtmpl
% mount -t ext3 -o loop /var/lib/libvirt/debian-tmpl /tmp/newtmpl
% chroot /tmp/newtmpl

Note at this point we are chrooted and these commands are acting against our template disk file.

Clear out the old IPs tied to our NIC when the base image networking was setup:

% echo "" > /etc/udev/rules.d/70-persistent-net.rules

We’re going to put a placeholder for our hostname in /etc/hostname:

% echo "HOSTNAME-TMPL" > /etc/hostname

Set a nameserver template in /etc/resolv.conf:

% echo "nameserver NAMESERVER-TMPL" > /etc/resolv.conf 

In the file /etc/network/interfaces:


# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address ADDRESS-TMPL
netmask NETMASK-TMPL
gateway GATEWAY-TMPL

Finally, make sure /etc/inittab has this line (usually just uncomment it); it will give us serial console access when we boot:

T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100

Creating an Instance

Now we have all the pieces in place to launch an instance from our image. This script will create the instance given an IP and hostname. It does no error checking for readability reasons, and is commented so that you know what’s going on:

#!/bin/bash

# read in the IP and hostname from the command line
virt_ip=$1
virt_host=$2

# build the fqdn based off the short host name
virt_fqdn=${virt_host}.linux.bogus

# fill in your network defaults
virt_gateway=192.168.1.1
virt_netmask=255.255.255.0
virt_nameserver=192.168.1.101

# how the disk/ram/cpu is sized
virt_disk=10G
virt_ram=512
virt_cpus=1

# random mac address
virt_mac=$(openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/.$//')

# copy the template (extracted above) to the new instance's disk
cp /var/lib/libvirt/debian-tmpl /var/lib/libvirt/images/${virt_host}-disk0

# optionally resize the disk
qemu-img resize /var/lib/libvirt/images/${virt_host}-disk0 ${virt_disk}
loopback=`losetup -f --show /var/lib/libvirt/images/${virt_host}-disk0`
fsck.ext3 -fy $loopback
resize2fs $loopback ${virt_disk}
losetup -d $loopback

mountbase=/tmp/${virt_host}
mkdir -p ${mountbase}
mount -o loop /var/lib/libvirt/images/${virt_host}-disk0 ${mountbase}

# replace our template vars
sed -i -e "s/ADDRESS-TMPL/$virt_ip/g" \
       -e "s/NETMASK-TMPL/$virt_netmask/g" \
       -e "s/GATEWAY-TMPL/$virt_gateway/g" \
       -e "s/HOSTNAME-TMPL/$virt_fqdn/g" \
       -e "s/NAMESERVER-TMPL/$virt_nameserver/g" \
  ${mountbase}/etc/network/interfaces \
  ${mountbase}/etc/resolv.conf \
  ${mountbase}/etc/hostname

# unmount and remove the tmp files
umount /tmp/${virt_host}
rm -rf /tmp/${virt_host}*

# run a file system check on the disk
fsck.ext3 -pv /var/lib/libvirt/images/${virt_host}-disk0

# specify the kernel and initrd (these we copied with scp earlier)
vmlinuz=/var/lib/libvirt/kernels/vmlinuz-2.6.32-5-amd64
initrd=/var/lib/libvirt/kernels/initrd.img-2.6.32-5-amd64

# install the new domain with our specified parameters for cpu/disk/memory/network
virt-install --name=$virt_host --ram=$virt_ram \
--disk=path=/var/lib/libvirt/images/${virt_host}-disk0,bus=virtio,cache=none \
--network=bridge=br0 --import --accelerate --vcpus=$virt_cpus --cpuset=auto --mac=${virt_mac} --noreboot --graphics=vnc \
--cpu=host --boot=kernel=$vmlinuz,initrd=$initrd,kernel_args="root=/dev/vda console=ttyS0 _device=eth0 \
_ip=${virt_ip} _hostname=${virt_fqdn} _gateway=${virt_gateway} _dns1=${virt_nameserver} _netmask=${virt_netmask}"

# start it up
virsh start $virt_host

Assuming we named the script buildserver, run the above like:

% buildserver 192.168.1.197 jgoulah
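The placeholder substitution is the heart of the script and easy to try in isolation; a minimal sketch against a throwaway copy of the interfaces template:

```shell
# Exercise the -TMPL replacement from the script above on a scratch file.
tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
address ADDRESS-TMPL
netmask NETMASK-TMPL
gateway GATEWAY-TMPL
EOF
sed -i -e "s/ADDRESS-TMPL/192.168.1.197/g" \
       -e "s/NETMASK-TMPL/255.255.255.0/g" \
       -e "s/GATEWAY-TMPL/192.168.1.1/g" "$tmpl"
cat "$tmpl"
rm -f "$tmpl"
```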

Conclusion

This is really just the first step, but now that you can bring a templated disk up you can decide a little more about how you’d like networking to work for your cloud. You can either continue to use static IP assignment as shown here and use nsupdate to insert DNS entries when new guests come up, or you can set things up so that the base image uses DHCP and configure your DHCP server to update DNS records when clients come online. You may also want to bake your favorite config management system into the template so that you can bootstrap the nodes and maintain configurations on them. Have fun!



Jan 01 2013

Ditching Vino for X11vnc

Category: Systems | jgoulah @ 2:07 PM

I’d been using gnome vino as a VNC server for years on my media computer, so that I can use the Touchpad app to control it from my iPad. It works fine, but it’s a little clunky and badly documented, plus it’s tied directly to gnome. The other day I woke up to a drive (~2TB) filled with vino errors. I killed it off, cleaned up the error log file, and tried to start it up again. No go this time due to some startup errors. This isn’t the first time I’ve fought with it; surely something better exists.

This led me to X11vnc. A breath of fresh air, it’s fairly easy to set up and get going very quickly. I’ll first show how to do it manually and then present my open sourced chef cookbook.

Manual Install

First thing is to install it:

sudo apt-get install x11vnc

Setup a password file:

sudo x11vnc -storepasswd YOUR_PASS_HERE /etc/x11vnc.pass

Create an upstart config:

sudo touch /etc/init/x11vnc.conf

Open it and put this into it:

start on login-session-start
script
x11vnc -xkb -noxrecord -noxfixes -noxdamage -display :0 -rfbauth /etc/x11vnc.pass \
 -auth /var/run/lightdm/root/:0 -forever -bg -o /var/log/x11vnc.log
end script

and start it up:

sudo service x11vnc start

Simple, easy, and just works.

Chef Cookbook

Naturally, I ported this to a chef cookbook which you can find here. I have to admit I’m not totally happy with it yet, mainly because it doesn’t restart x11vnc on config changes. In some cases, such as our production servers, we actually prefer this so that we don’t accidentally roll out a broken config change and have an auto-restart bring everything to its knees (there are some ways to avoid this, but that’s another post). In any case, on my home computer I prefer it to restart when I make changes, but I’m struggling to get upstart to stop the process correctly due to what seems to be some disassociation with its pid file. The other thing is that the recipe currently assumes you are using Ubuntu or something similar, but it can easily be extended. Hope this helps someone else out there!



Dec 27 2012

Quick Tip: Find the QEMU VM to Virtual Interface Mapping

Category: Debugging, Systems | jgoulah @ 4:13 PM

The other day we were getting some messages on our network switch that a host was flapping between ports. It turns out we had two virtual hosts on different machines using the same MAC address (not good). We had the interface and MAC information, and wanted to find which VM domains mapped to these. It’s easy if you know the domain name of the VM node; you can get back the associated information like so:

% sudo virsh domiflist goulah  
Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet2      bridge     br0        -           52:54:00:19:40:18

But if you only have the virtual interface, how would you get the domain? I remembered virsh dumpxml will show all of the dynamic properties of the VM, including those which are not available in the static XML definition of the VM in /etc/libvirt/qemu/foo.xml. Which vnetX interface is attached to which VM is one of these additional dynamic properties, so we can grep for it! I concocted a simple (not very elegant) function which, given the vnet interface, will return the domain associated with it:

function find-vnet() {
  for vm in $(sudo virsh list | grep running | awk '{print $2}'); do
    sudo virsh dumpxml $vm | grep -q "$1" && echo $vm
  done
}

It just looks through all the running domains and prints the domain name if it finds the interface you’re looking for. So now you can do:

% find-vnet vnet2
goulah

I wonder if others out there have a clever way of doing this or if this is really the best way? If you know of a better way leave a comment. Perhaps the problem is not common enough that a libvirt built-in command exists.



Apr 15 2012

Percona Live 2012 – The Etsy Shard Architecture

Category: Conferences, Databases, Systems | jgoulah @ 2:58 PM

I attended Percona Conf in Santa Clara last week. It was a great 3 days of lots of people dropping expert MySQL knowledge. I learned a lot and met some great people.

I gave a talk about the Etsy shard architecture, and had a lot of good feedback and questions. A lot of people do active/passive, but we run everything in active/active master-master. This helps us keep a warm buffer pool on both sides, and if one side goes out we only lose about half of the queries until it’s pulled and they start hashing to the other side. Some details on how this works are in the slides below.



Jan 09 2012

Distributed MySQL Sleuthing on the Wire

Category: Databases, Real-time Web, SSH, Systems | jgoulah @ 8:52 AM

Intro

Oftentimes you need to know what MySQL is doing right now, and if you are handling heavy traffic you probably have multiple instances of it running across many nodes. I’m going to start by showing how to take a tcpdump capture on one node and a few ways to analyze it, and then go into how to take a distributed capture across many nodes for aggregate analysis.

Taking the Capture

The first thing you need to do is to take a capture of the interesting packets. You can either do this on the MySQL server or on the hosts talking to it. According to this percona post this command is the best way to capture mysql traffic on the eth0 interface and write it into mycapture.cap for later analysis:

% tcpdump -i eth0 -w mycapture.cap -s 0 "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2"
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
47542 packets captured
47703 packets received by filter
60 packets dropped by kernel

Analyzing the Capture

The next step is to take a look at your captured data. One way to do this is with tshark, which is the command line part of wireshark. You can do yum install wireshark or similar to install it. Usually you want to do this on a different host than the one taking traffic since it can be memory and CPU intensive.

You can then use it to reconstruct the mysql packets like so:

% tshark -d tcp.port==3306,mysql -T fields -R mysql.query -e frame.time -e ip.src -e ip.dst -e mysql.query -r mycapture.cap

This will give you the time, source IP, destination IP, and query, but this is still really raw output. It’s a nice start, but we can do better. Percona has released the Percona Toolkit, which includes some really nice command line tools (including what used to be in Maatkit).

The one we’re interested in here is pt-query-digest

It has tons of options and you should read the documentation, but here’s a few I’ve used recently.

Let’s say you want to get the top tables queried from your tcpdump:

% tcpdump -r mycapture.cap -n -x -q -tttt | pt-query-digest --type tcpdump --group-by tables --order-by Query_time:cnt \
 --report-format profile --limit 5
reading from file mycapture.cap, link-type EN10MB (Ethernet)

# Profile
# Rank Query ID Response time Calls R/Call Apdx V/M   Item
# ==== ======== ============= ===== ====== ==== ===== ====================
#    1 0x        0.3140  6.1%   674 0.0005 1.00  0.00 shard.images
#    2 0x        0.8840 17.1%   499 0.0018 1.00  0.03 shard.activity
#    3 0x        0.1575  3.1%   266 0.0006 1.00  0.00 shard.listing_images
#    4 0x        0.1680  3.3%   265 0.0006 1.00  0.00 shard.connection_edges_reverse
#    5 0x        0.0598  1.2%   254 0.0002 1.00  0.00 shard.listing_translations
# MISC 0xMISC    3.5771 69.3%  3534 0.0010   NS   0.0 <86 ITEMS>

Note the tcpdump options I used this time, which the tool requires to work properly when passing --type tcpdump. I also grouped by tables (as opposed to full queries) and ordered by the count (the Calls column). It will stop at your --limit and group the rest into MISC, so be aware of that.

You can remove the --order-by to sort by response time, which is the default sort order, or provide other attributes to sort on. We can also change the --report-format, for example to header:

% tcpdump -r mycapture.cap -n -x -q -tttt | pt-query-digest --type tcpdump --group-by tables --report-format header 
reading from file mycapture.cap, link-type EN10MB (Ethernet)

# Overall: 5.49k total, 91 unique, 321.13 QPS, 0.30x concurrency _________
# Time range: 2012-01-08 15:52:05.814608 to 15:52:22.916873
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time             5s     3us   114ms   939us     2ms     3ms   348us
# Rows affecte         316       0      13    0.06    0.99    0.29       0
# Query size         3.64M      18   5.65k  694.98   1.09k  386.68  592.07
# Warning coun           0       0       0       0       0       0       0
# Boolean:
# No index use   0% yes,  99% no

If you set the --report-format to query_report you will get gobs of verbose information that you can dive into, and you can use the --filter option to do things like finding queries that didn’t use an index:

% tcpdump -r mycapture.cap -n -x -q -tttt | \
  pt-query-digest --type tcpdump --filter '($event->{No_index_used} eq "Yes" || $event->{No_good_index_used} eq "Yes")'

Distributed Capture

Now that we’ve taken a look at capturing and analyzing packets from one host, it’s time to dive into looking at our results across the cluster. The main trick is that tcpdump provides no option to stop capturing after a given time – you have to explicitly kill it. Otherwise we’ll just use dsh to send our commands out. We’ll assume you have a user that can hop around in a password-less fashion using ssh keys – setting that up is well outside the scope of this article, but there’s plenty of info out there on how to do it.

There are a few ways you can let a process run on a “timeout”, but I’m assuming we don’t have any script written or tools like the coreutils timeout command available.

So we’re going off the premise that you will background the process and kill it after a sleep by grabbing its pid:

( /path/to/command with options ) & sleep 5 ; kill $!

Simple enough, except we’ll want to capture the output on each host, so we need to ssh the output back over to the target using a pipe to grab the stdout. This means that $! will return the pid of our ssh command instead of our tcpdump command. We end up having to do a little trick to kill the right process, since the capture won’t be readable if we kill the ssh command that is writing the output. We need to kill tcpdump, and to do that we can look at the parent pid of the ssh process, ask pkill (similar to pgrep) for all of the processes that have this parent, and kill the oldest one, which ends up being our tcpdump process.
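This trick can be exercised locally without dsh or tcpdump; in the sketch below `yes` stands in for tcpdump (the producer) and `cat` for the ssh pipe (the consumer):

```shell
# Background a producer | consumer pipeline, then kill only the producer:
# pkill -o -P picks the oldest child of the subshell, which is `yes`.
# The consumer then sees EOF on the pipe and exits cleanly.
( yes | cat > /dev/null ) &
subshell=$!
sleep 1
pkill -o -P "$subshell"
wait "$subshell" 2>/dev/null || true
echo "pipeline finished"
```

Killing the consumer instead would leave the producer writing into a broken pipe, which is exactly the truncated-capture problem described above.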

The end result looks like this if I were to run it across two machines:

% dsh -c -m web1000,web1001 \
   'sudo /usr/sbin/tcpdump -i eth0 -w - -s 0 -x -n -q -tttt "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2" | \
   ssh dshhost "cat - > ~/captures/$(hostname -a).cap" & sleep 10 ; \
   sudo pkill -o -P $(ps -ef | awk "\$2 ~ /\<$!\>/ { print \$3; }")'

So this issues a dsh to two of our hosts (you can make a dsh group with 100 or 1000 hosts though) and runs the command concurrently on each (-c). We issue our tcpdump on each target machine and send the output to stdout for ssh to then cat back to a directory on the source machine that issued the dsh. This way we have all of our captures in one directory, with each file named after the target host the tcpdump was run on. The sleep is how long the dump runs for before we kill off the tcpdump.

The last piece of the puzzle is to get these all into one file and we can use the mergecap tool for this, which is also part of wireshark:

% /usr/sbin/mergecap -F libpcap -w output.cap *.cap

And then we can analyze it like we did above.

Further Reading

References

http://www.mysqlperformanceblog.com/2011/04/18/how-to-use-tcpdump-on-very-busy-hosts

http://stackoverflow.com/questions/687948/timeout-a-command-in-bash-without-unnecessary-delay

http://www.xaprb.com/blog/2009/08/18/how-to-find-un-indexed-queries-in-mysql-without-using-the-log/

Breaking the distributed command down further

Just to clarify this command a bit more, particularly how the kill part works since that was the trickiest part for me to figure out.

When we run this

$ dsh -c -m web1000,web1001 \
   'sudo /usr/sbin/tcpdump -i eth0 -w - -s 0 -x -n -q -tttt "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2" | \
   ssh dshhost "cat - > ~/captures/$(hostname -a).cap" & sleep 10 ; \
   sudo pkill -o -P $(ps -ef | awk "\$2 ~ /\<$!\>/ { print \$3; }")'

on the server the process list looks something like

user     12505 12504  0 03:12 ?        00:00:00 bash -c sudo /usr/sbin/tcpdump -i eth0 -w - -s 0 -x -n -q -tttt "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2" | ssh myhost.myserver.com "cat - > /home/etsy/captures/$(hostname -a).cap" & sleep 5 ; sudo pkill -o -P $(ps -ef | awk "\$2 ~ /\<$!\>/ { print \$3; }")
pcap     12506 12505  1 03:12 ?        00:00:00 /usr/sbin/tcpdump -i eth0 -w - -s 0 -x -n -q -tttt port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2
user     12507 12505  0 03:12 ?        00:00:00 ssh myhost.myserver.com cat - > ~/captures/web1001.cap

So $! is going to return the pid of the ssh process, 12507. We use awk to find the process matching that and print out its parent pid, which is then passed to the -P arg of pkill. If you use pgrep to look at this without the -o you’d get a list of the children of 12505, which are 12506 and 12507. The oldest child is the tcpdump command, so adding -o kills that guy off.
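The ps/awk lookup itself is easy to verify in isolation (here with a plain equality match instead of the regex): the parent of a process we just backgrounded should be the current shell.

```shell
# Background a child, then recover its parent pid the same way the
# dsh one-liner does: in ps -ef output, $2 is the PID and $3 the PPID.
sleep 5 &
child=$!
parent=$(ps -ef | awk -v pid="$child" '$2 == pid { print $3 }')
[ "$parent" = "$$" ] && echo "parent is this shell"
kill "$child" 2>/dev/null
```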

So if we were only running the command on one host we could use something much simpler

ssh dbhost01 '(sudo /usr/sbin/tcpdump -i eth0 -w - -s 0 port 3306) & sleep 10; sudo kill $!' | cat - > output.cap



Oct 02 2010

Quick Tip: Dynamically Updating Screen Window Titles With The Current Server Hostname

Category: Organization, SSH, Systems | jgoulah @ 12:13 PM

I haven’t had a ton of time for blogging lately, but figured this tip was good enough to throw out there for all the screen users. One way I like to organize servers that I’m ssh’d into is using screen windows. As you hopefully know, you can use Ctrl-A c to create sub windows within screen. Then you can switch between them in several ways, such as Ctrl-A X where X is the window number, Ctrl-A n or Ctrl-A p for next and previous, and Ctrl-A " to get a list of the windows for selection. You can move windows around with Ctrl-A : then typing number X, where X is the window you want to swap with. Finally, you can also name the windows with Ctrl-A A. So usually I ssh into a server and manually change the window title to the server name I’m ssh’d into, so that it’s easy to organize and remember where my windows are.

Turns out that’s a lot of repetitive work that can easily be scripted. You can create a really simple script called ssh-to and place it in ~/bin or somewhere in your path:

#!/bin/bash

hostname=`basename $0`

# set screen title to short hostname
echo -n -e "\033k${hostname%%.*}\033\134"

# ssh to the server with any additional args
ssh -X $hostname "$@"

# set hostname back when session is done (-s on osx)
echo -n -e "\033k`hostname -s`\033\134"

Now, create symbolic links to the script, one for each server hostname that you use. This can be a little tedious, but you only have to do it once, and the benefit is that you can now tab complete ssh’ing into servers.

For example you would do something like

ln -s ssh-to myserver.com

Then anytime you type “myserver.com” the tab completion can fill it out, you’re ssh’d into the server (assuming you have keys set up, you don’t have to type a password), and your screen window title is updated with the domain info trimmed off – in this case, myserver. Now your screen windows update their own titles anytime you ssh into a new server!
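Creating the links can itself be scripted for a list of hosts (the hostnames here are just examples, and this assumes the script lives in ~/bin as above):

```shell
# Link a batch of hostnames to ssh-to; each link name becomes a
# tab-completable command that ssh's to that host.
cd ~/bin
for host in web1.example.com web2.example.com db1.example.com; do
  ln -sf ssh-to "$host"
done
```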



Jan 18 2010

Using Mongo and Map Reduce on Apache Access Logs

Category: Databases, Systems | jgoulah @ 9:32 PM

Introduction

With more and more traffic pouring into websites, it has become necessary to come up with creative ways to parse and analyze large data sets. One of the popular ways to do that lately is MapReduce, a framework used across distributed systems to help make sense of large data sets. There are lots of implementations of the map/reduce framework, but an easy way to get started is with MongoDB. MongoDB is a scalable, high-performance, document-oriented database. It has replication, sharding, and mapreduce all built in, which makes it easy to scale horizontally.

For this article we’ll look at a common use case of map/reduce, which is to help analyze your apache logs. Since there is no set schema in a document-oriented database, it’s a good fit for log files, since it’s fairly easy to import arbitrary data. We’ll look at getting the data into a format that mongo can import, and at writing a map/reduce algorithm to make sense of some of the data.

Getting Setup

Installing Mongo

Mongo is easy to install with detailed documentation here. In a nutshell you can do

$ mkdir -p /data/db
$ curl -O http://downloads.mongodb.org/osx/mongodb-osx-i386-latest.tgz
$ tar xzf mongodb-osx-i386-latest.tgz

At this point it’s not a bad idea to put this directory somewhere like /opt and add its bin directory to your path. That way instead of

 ./mongodb-xxxxxxx/bin/mongod &

You can just do

mongod &

In any case, start up the daemon one of those two ways, depending on how you set it up.

Importing the Log Files

Apache access logs can vary in the information reported. The log format is easy to change with the LogFormat directive, which is documented here. In any case, the logs I’m working with are not in the out-of-the-box apache format. They look something like this:

Jan 18 17:20:26 web4 logger: networks-www-v2 84.58.8.36 [18/Jan/2010:17:20:26 -0500] "GET /javascript/2010-01-07-15-46-41/97fec578b695157cbccf12bfd647dcfa.js HTTP/1.1" 200 33445 "http://www.channelfrederator.com/hangover/episode/HNG_20100101/cuddlesticks-cartoon-hangover-4" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7" www.channelfrederator.com 35119

We want to take this raw log and convert it to a JSON structure for importing into mongo, so I wrote a simple perl script that iterates through the log and parses it into sensible fields using a regular expression:

#!/usr/bin/perl

use strict;
use warnings;

my $logfile = "/logs/httpd/remote_www_access_log";
open(LOGFH, "<", $logfile) or die "can't open $logfile: $!";
while (my $logentry = <LOGFH>) {
    chomp($logentry);   # remove the newline
    $logentry =~ m/(\w+\s\w+\s\w+:\w+:\w+)\s #date
                   (\w+)\slogger:\s # server host
                   ([^\s]+)\s # vhost logger
                   (?:unknown,\s)?(-|(?:\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3},?\s?)*)\s #ip
                   \[(.*)\]\s # date again
                   \"(.*?)\"\s # request
                   (\d+)\s #status
                   ([\d-]+)\s # bytes sent
                   \"(.*?)\"\s # referrer 
                   \"(.*?)\"\s # user agent 
                   ([\w.-]+)\s? # domain name
                   (\d+)? # time to server (ms) 
                  /x;

    print <<JSON;
     {"date": "$5", "webserver": "$2", "logger": "$3", "ip": "$4", "request": "$6", "status": "$7", "bytes_sent": "$8", "referrer": "$9", "user_agent": "$10", "domain_name": "$11", "time_to_serve": "$12"} 
JSON

}

Again, my regular expression probably won’t quite work on your logs, though you may be able to take bits and pieces of what I’ve documented above for your own script. On my logs, that script outputs a bunch of lines that look like this:

{"date": "18/Jan/2010:17:20:26 -0500", "webserver": "web4", "logger": "networks-www-v2", "ip": "84.58.8.36", "request": "GET /javascript/2010-01-07-15-46-41/97fec578b695157cbccf12bfd647dcfa.js HTTP/1.1", "status": "200", "bytes_sent": "33445", "referrer": "http://www.channelfrederator.com/hangover/episode/HNG_20100101/cuddlesticks-cartoon-hangover-4", "user_agent": "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7", "domain_name": "www.channelfrederator.com", "time_to_serve": "35119"}

And we can then import that directly into MongoDB. This creates a collection called weblogs in the logs database, from the file that holds the output of the JSON generator script above:

$ mongoimport --type json -d logs -c weblogs --file weblogs.json

We can also take a look at them and verify they loaded by running the find command, which dumps out 10 rows by default

$ mongo
> use logs;
switched to db logs
> db.weblogs.find()

Setting Up the Map and Reduce Functions

So for this example, what I am looking for is how many hits are going to each domain. My servers handle a bunch of different domain names, and that is one thing I’m outputting into my logs that is easy to examine.

The basic command line interface to MongoDB is a kind of javascript interpreter, and the mongo backend takes javascript implementations of the map and reduce functions, which we can type directly into the mongo console. The map function must emit a key/value pair, and in this example it will output a 1 each time a domain is found:

> map = "function() { emit(this.domain_name, {count: 1}); }"

And so basically what comes out of this is a key for each domain with a set of counts, something like

 {"www.something.com", [{count: 1}, {count: 1}, {count: 1}, {count: 1}]}

This is sent to the reduce function, which sums up all those counts for each domain:

> reduce = "function(key, values) { var sum = 0; values.forEach(function(f) { sum += f.count; }); return {count: sum}; };"

Now that you’ve set up the map and reduce functions, you can call mapReduce on the collection:

> results = db.weblogs.mapReduce(map, reduce)
...
>results
{
        "result" : "tmp.mr.mapreduce_1263861252_3",
        "timeMillis" : 9034,
        "counts" : {
                "input" : 1159355,
                "emit" : 1159355,
                "output" : 92
        },
        "ok" : 1
}

This gives us a bit of information about the map/reduce operation itself. We see that we inputted a set of data and emitted once per item in that set, and reduced down to 92 domains, each with a count. A result collection name is given, and we can print it out:

> db.tmp.mr.mapreduce_1263861252_3.find()
> it

You can type the ‘it’ operator to page through multiple pages of data, or you can print it all at once like so:

> db.tmp.mr.mapreduce_1263861252_3.find().forEach( function(x) { print(tojson(x));});
{ "_id" : "barelydigital.com", "value" : { "count" : 342888 } }
{ "_id" : "barelypolitical.com", "value" : { "count" : 875217 } }
{ "_id" : "www.fastlanedaily.com", "value" : { "count" : 998360 } }
{ "_id" : "www.threadbanger.com", "value" : { "count" : 331937 } }

Nice, we have our data aggregated and the answer to our initial problem.

Automate With a Perl Script

As usual, CPAN comes to the rescue with a MongoDB driver that we can use to interface with our database in a scripted fashion. The Mongo guys have also done a great job of supporting drivers in a bunch of other languages, which makes MongoDB easy to work with even if you are using more than just Perl.

#!/usr/bin/perl

use strict;
use warnings;

use MongoDB;
use Tie::IxHash;
use Data::Dumper;

my $conn = MongoDB::Connection->new("host" => "localhost", "port" => 27017);
my $db = $conn->get_database("logs");

my $map = "function() { emit(this.domain_name, {count: 1}); }";

my $reduce = "function(key, values) { var sum = 0; values.forEach(function(f) { sum += f.count; }); return {count: sum}; }";

my $idx = Tie::IxHash->new(mapreduce => 'weblogs', 'map' => $map, reduce => $reduce);
my $result = $db->run_command($idx);

print Dumper($result);

my $res_coll = $result->{'result'};

print "result collection is $res_coll\n";

my $collection = $db->get_collection($res_coll);
my $cursor = $collection->query({ }, { limit => 10 });

while (my $object = $cursor->next) {
    print Dumper($object); 
}

The script is straightforward. It just uses the documented Perl interface to issue a run_command call to mongo, passing in the map and reduce JavaScript functions. A Tie::IxHash is used because MongoDB commands are order-sensitive: the mapreduce key must come first, and a plain Perl hash does not preserve insertion order. The script then prints the results much as we did on the command line earlier.

Summary

We’ve gone over a very simple example of how MapReduce can work for you. There are lots of other ways to put it to good use, such as distributed sorting and searching, document clustering, and machine learning. We also took a look at MongoDB, which is a great fit for schema-less data and is easy to scale thanks to its built-in replication and sharding capabilities. Now you can put map/reduce to work on your logs and find a ton of information you couldn’t easily get before.

References

http://github.com/mongodb/mongo
http://www.mongodb.org/display/DOCS/MapReduce
http://apirate.posterous.com/visualizing-log-files-with-mongodb-mapreduce
http://search.cpan.org/~kristina/MongoDB-0.27/
http://kylebanker.com/blog/2009/12/mongodb-map-reduce-basics/



Jun 11 2009

Investigating Data in Memcached

Category: Databases, Systems | jgoulah @ 9:58 PM

Intro

Almost every company I can think of uses Memcached at some layer of their stack. However, until recently I hadn’t found a great way to take a snapshot of the keys in memory and their associated metadata, such as expire time, LRU time, value size, and whether the key has been expired or flushed. The tool that does this is called peep.

Installing Peep

The main caveat to installing peep is that you have to compile memcached with debugging symbols. Not really a big deal though. The first thing you’ll want to do is install libevent:

$ wget http://monkey.org/~provos/libevent-1.4.11-stable.tar.gz
$ tar xzvf libevent-1.4.11-stable.tar.gz
$ cd libevent-1.4.11-stable
$ ./configure
$ make
$ sudo make install

Now grab memcached

$ wget http://memcached.googlecode.com/files/memcached-1.2.8.tar.gz
$ tar xzvf memcached-1.2.8.tar.gz
$ cd memcached-1.2.8
$ CFLAGS='-g' ./configure --enable-threads
$ make 
$ sudo make install

Note that the configure line sets the -g flag, which enables debug symbols. Now move this directory somewhere standard like /usr/local/src, because we’ll need to reference it when installing peep:

$ sudo mv memcached-1.2.8 /usr/local/src

Ok, now for peep, which is written in Ruby. If you don’t have RubyGems, you’ll need to install it along with the Ruby headers; just use your package management system. I happened to be on a CentOS box, so I ran:

$ sudo yum install rubygems.noarch
$ sudo yum install ruby-devel.i386

Finally you can install peep

$ sudo gem install peep -- --with-memcached-include=/usr/local/src/memcached-1.2.8

Using Peep

Now you can actually use the tool; you just give it the pid of the running memcached process:

$ sudo peep --pretty `cat /var/run/memcached/memcached.pid`

Here’s a snippet of what peep can show us in its “pretty” mode

      time |   exptime |  nbytes | nsuffix | it_f | clsid | nkey |                           key | exprd | flushd
       485 |       785 |   17925 |      10 | link |    25 |   64 |  "post.ordered-posts1-03afdb" |  true |  false
       537 |       721 |   16991 |      10 | link |    24 |   63 |  "post.ordered-posts1-03bd6"  |  true |  false
       240 |      3684 |     434 |       8 | link |     9 |   22 |  "channel.channel.105.v1"     | false |  false
       241 |      3687 |   27286 |      10 | link |    27 |   55 |  "post.post_count-fec35129"   | false |  false
       538 |      4022 |    3223 |       9 | link |    17 |   55 |  "post.post_count-2ff57a7"    | false |  false
       538 |      4024 |   17169 |      10 | link |    25 |   55 |  "post.post_count-2ba928998d" | false |  false
       241 |      3686 |   10763 |      10 | link |    22 |   55 |  "post.post_count-3879a24011" | false |  false
        25 |       320 |    8874 |       9 | link |    22 |   22 |  "channel.posterframes.4"     |  true |  false
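
If you just want quick totals, the pretty output is regular enough to tally directly before reaching for MySQL. A minimal sketch in node, assuming the pipe-separated column layout shown above (the sample rows here are hard-coded; in practice you would read peep’s output from a file or pipe):

```javascript
// Tally expired vs. live entries and expired bytes from peep "pretty"
// rows. Assumed layout: pipe-separated, nbytes is the 3rd column and
// exprd is the 2nd-to-last.
const lines = [
  '485 | 785 | 17925 | 10 | link | 25 | 64 | "post.ordered-posts1-03afdb" | true | false',
  '240 | 3684 | 434 | 8 | link | 9 | 22 | "channel.channel.105.v1" | false | false',
];

const totals = { expired: 0, live: 0, expiredBytes: 0 };
for (const line of lines) {
  const cols = line.split('|').map(function (c) { return c.trim(); });
  const nbytes = parseInt(cols[2], 10);             // the nbytes column
  const expired = cols[cols.length - 2] === 'true'; // the exprd column
  if (expired) {
    totals.expired++;
    totals.expiredBytes += nbytes;
  } else {
    totals.live++;
  }
}

console.log(totals); // → { expired: 1, live: 1, expiredBytes: 17925 }
```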

Putting the Data Into MySQL

The above is not the easiest thing to analyze, especially if you have a lot of data in cache. But we can easily load it into MySQL so that we can run queries on it.

First create the db and permissions

mysql> create database peep;
mysql> grant all on peep.* to peep@'localhost' identified by 'peep4u';

Then a table to store the output of the peep snapshot

mysql> CREATE TABLE `entries` (
  `lru_time` int(11) DEFAULT NULL,
  `expires_at_time` int(11) DEFAULT NULL,
  `value_size` int(11) DEFAULT NULL,
  `suffix_size` int(11) DEFAULT NULL,
  `it_flag` varchar(255) DEFAULT NULL,
  `class_id` int(11) DEFAULT NULL,
  `key_size` int(11) DEFAULT NULL,
  `key_name` varchar(255) DEFAULT NULL,
  `is_expired` varchar(255) DEFAULT NULL,
  `is_flushed` varchar(255) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Now run peep and output the data in “ugly” format into a file

$ sudo peep --ugly `cat /var/run/memcached/memcached.pid` > /tmp/peep.out

And now you can load it into your entries table

mysql> load data local infile '/tmp/peep.out' into table entries fields terminated by ' | ' lines terminated by '\n';
Query OK, 177 rows affected (0.01 sec)
Records: 177  Deleted: 0  Skipped: 0  Warnings: 0

I’m only loading a small dev install of memcached here, but if you run this against production you’ll be importing many more rows. Luckily, LOAD DATA INFILE is sufficiently optimized for large datasets. Somewhat unfortunately, however, peep blocks memcached while it takes its snapshot, which can take a bit of time in production.

In any case, this isn’t that interesting with dev data, but you can get lots of useful numbers out of it. In my case it looks like most of the stuff in my cache has already expired:

mysql> select is_expired, count(*) as num, sum(value_size) as value_size from entries group by is_expired;
+------------+-----+------------+
| is_expired | num | value_size |
+------------+-----+------------+
| false      |   1 |        415 |
| true       | 176 |    2392792 |
+------------+-----+------------+
2 rows in set (0.01 sec)

Or we can group by slab class and display the largest value size in each:

mysql> select class_id as slab_class, max(value_size) as slab_size from entries group by slab_class;
+------------+-----------+
| slab_class | slab_size |
+------------+-----------+
|          1 |         8 |
|          2 |        15 |
|          9 |       445 |
|         12 |      1100 |
|         13 |      1402 |
|         14 |      1710 |
|         15 |      2174 |
|         16 |      2409 |
|         17 |      3223 |
|         20 |      6154 |
|         21 |      8395 |
|         22 |     10763 |
|         23 |     13395 |
|         24 |     16991 |
|         25 |     17925 |
|         26 |     23320 |
|         27 |     33361 |
|         28 |     38824 |
|         29 |     44904 |
+------------+-----------+
19 rows in set (0.00 sec)

Conclusion

Although there are a lot of tools, such as Cacti, that will graph your Memcached usage, there are not many that show you the specifics of what’s in memory. Peep can be used for a variety of reasons, but it’s especially useful for examining the keys you have in memory and the size of their values, whether they’re expired, and what the individual slabs are holding.



Feb 22 2009

Attach Progress Bars to Everyday Linux Commands With Pipe Viewer

Category: Systems | jgoulah @ 9:45 PM

Introduction

A lot of Linux command-line utilities don’t give any indication of progress or of when a job is going to finish. I’m going to take a couple of commands that I use on a near-daily basis and show how to get progress output using the pipe viewer (pv) utility. This is a handy tool that you insert between pipes in your shell commands to get a visual indication of how long a command will take to finish.

Getting Started

There isn’t much to installing this tool; just grab it first with your favorite package manager, yum or apt.

The easiest way to show what pv does is by creating a simple example. We can simply gzip a file and show how much data is being processed and how long it will take to finish.

$ pv somefile | gzip > somefile.log.gz
64.1MB 0:00:03 [10.1MB/s] [==========>                        ] 32% ETA 0:00:06

So here we can see we have processed 64.1MB in 3 seconds at a rate of about 10.1MB/s, and it should take about 6 more seconds to finish.
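
The percentage and ETA that pv prints are simple arithmetic: percent is bytes processed over total size, and ETA is the remaining bytes over the current rate. A quick sketch of that math (the byte figures here are made up for illustration, and pv averages its displayed rate over a recent window, so real output fluctuates):

```javascript
// pv-style progress math: percent done and ETA, computed from bytes
// processed, total size, and current throughput.
function progress(doneBytes, totalBytes, bytesPerSec) {
  const percent = Math.round((doneBytes / totalBytes) * 100);
  const etaSec = Math.round((totalBytes - doneBytes) / bytesPerSec);
  return { percent: percent, etaSec: etaSec };
}

const MB = 1024 * 1024;
// 64MB done of a 100MB file, currently moving at 12MB/s
console.log(progress(64 * MB, 100 * MB, 12 * MB)); // → { percent: 64, etaSec: 3 }
```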

Chaining PV to Display Different Inputs and Outputs

Another really cool thing is that you can chain pv commands to see the progress of data being read off the disk and then how fast gzip is compressing it.

$ pv -cN source_stream somefile | gzip | pv -cN gzip_stream > somefile.log.gz
source_stream: 89.6MB 0:00:06 [10.1MB/s] [============>       ] 45% ETA 0:00:07
gzip_stream:   14.8MB 0:00:06 [ 3.2MB/s] [       <=>          ]

Here we created named streams with -N, which you can call whatever you want; they are named source_stream and gzip_stream since that’s what they measure. The -c parameter is recommended when running multiple pv processes in the same pipeline, and prevents the processes from mangling each other’s output.

It’s also important to point out that we get a percentage on source_stream because pv knows how much data exists in the file, but the second stream can’t know how much data will come out of gzip, so it can only display how much has passed through so far.

Using PV on Other Commands

Another example is using tar to compress a folder, but if we do something like this:

$ tar cjf - some_directory | pv > out.tar.bz2
3.55MB 0:00:02 [ 1.2MB/s] [   <=>          ]

the output isn’t very interesting, because pv doesn’t know the total amount of data being compressed. We can provide the size with the -s option to pv, using a little shell substitution to grab it from du:

tar -cf - some_directory | \
     pv -s $(du -sb some_directory | awk '{print $1}') | \
     bzip2 > somedir.tar.bz2
1.39MB 0:00:03 [ 929kB/s] [==>     ]  6% ETA 0:00:45

So we are telling tar to create an archive of some_directory, print the output to stdout, next tell pv the size of that directory using the du command on it, and send that to bzip2 for compression.

Create a Shell Script For Easy Reuse

At this point the command above is way too much to remember, and we’ve had to type the directory name twice, which is silly. I always put my ~/bin directory in my PATH so that I can throw simple scripts in there. Here’s one that takes a directory argument and compresses it with a progress bar:

#!/bin/sh

if [ -z "$1" ]; then
    echo "Usage: $0 dir"
    exit 1
fi

# strip a trailing slash, if any
dir="${1%/}"

if [ ! -e "$dir" ]; then
    echo "$dir doesn't exist"
    exit 1
fi

echo "compressing $dir"
tar -cf - "$dir" | pv -s $(du -sb "$dir" | awk '{print $1}') | bzip2 > "$dir".tar.bz2

So we’ve taken what we learned above and created a script that takes a single argument, a directory (or file) that must exist, and writes the output to that name with a .tar.bz2 extension.

Seeing Progress In a MySQL Import

We can use the exact same concept to get a progress bar on a MySQL data import. Since pv is given the file directly here, it already knows the total size:

pv myschema.sql | mysql -u db_user my_database

As shown above we can easily turn this into a script that can be reused.

Conclusion

We’ve seen how to take a simple tool called pipe viewer, which reports on the data fed through it, and attach it to commands we already use. Since the full commands can be tricky to remember, we’ve also wrapped them in a simple shell script. Now you can experiment with pv on other commands where you want progress output, which is really useful when dealing with large amounts of data.




