Jan 01 2013

Ditching Vino for X11vnc

Category: Systems | jgoulah @ 2:07 PM

I’d been using GNOME’s vino as a VNC server for years on my media computer. This way I can use a touchpad app to control it from my iPad. It works, but it’s a little clunky and badly documented, plus it’s tied directly to GNOME. The other day I woke up to a drive (~2TB) filled with vino errors. I killed it off, cleaned up the error log file, and tried to start it up again. No go this time due to some startup errors. This isn’t the first time I’ve fought with it, and surely something better exists.

This led me to x11vnc. It’s a breath of fresh air: fairly easy to set up and get going very quickly. I’ll first show how to do it manually and then present my open sourced chef cookbook.

Manual Install

First thing is to install it:

sudo apt-get install x11vnc

Set up a password file:

sudo x11vnc -storepasswd YOUR_PASS_HERE /etc/x11vnc.pass

Create an upstart config:

sudo touch /etc/init/x11vnc.conf

Open it and put this into it:

start on login-session-start
script
x11vnc -xkb -noxrecord -noxfixes -noxdamage -display :0 -rfbauth /etc/x11vnc.pass \
 -auth /var/run/lightdm/root/:0 -forever -bg -o /var/log/x11vnc.log
end script

and start it up:

sudo service x11vnc start
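
To sanity check that it came up, a few quick looks help (assuming the defaults above, with the log in /var/log/x11vnc.log and VNC listening on its usual port 5900):

sudo service x11vnc status
sudo netstat -tlnp | grep 5900
tail /var/log/x11vnc.log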

Simple, easy, and just works.

Chef Cookbook

Naturally, I ported this to a chef cookbook which you can find here. I have to admit I’m not totally happy with it yet, mainly because it doesn’t restart x11vnc on config changes. In some cases, such as our production servers, we actually prefer this so that we don’t accidentally roll out a broken config change and have an auto-restart bring everything to its knees (there are some ways to avoid this but that’s another post). In any case, on my home computer I prefer it to restart if I make changes, but I’m struggling to get upstart to stop the process correctly due to what seems to be some disassociation with its pid file. The other thing is the recipe currently assumes you are using Ubuntu or something similar, but can easily be extended. Hope this helps someone else out there!
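
Using the cookbook follows the usual chef workflow; as a rough sketch, assuming you’re running a chef server and the cookbook keeps its x11vnc name (the node name here is made up):

knife cookbook upload x11vnc
knife node run_list add media-box 'recipe[x11vnc]'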



Dec 27 2012

Quick Tip: Find the QEMU VM to Virtual Interface Mapping

Category: Debugging, Systems | jgoulah @ 4:13 PM

The other day we were getting some messages on our network switch that a host was flapping between ports. It turns out we had two virtual hosts on different machines using the same MAC address (not good). We had the interface and MAC information, and wanted to find what VM domains mapped to these. It’s easy if you know the domain name of the VM node; you can get back the associated information like so:

% sudo virsh domiflist goulah  
Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet2      bridge     br0        -           52:54:00:19:40:18

But if you only have the virtual interface, how would you get the domain? I remembered virsh dumpxml will show all of the dynamic properties about the VM, including those which are not available in the static XML definition of the VM in /etc/libvirt/qemu/foo.xml. Which vnetX interface is attached to which VM is one of these additional dynamic properties. So we can grep for this! I concocted a simple (not very elegant) function which, given the vnet interface, will return the domain associated with it:

function find-vnet() { for vm in $(sudo virsh list | grep running | awk '{print $2}'); do sudo virsh dumpxml $vm|grep -q "$1" && echo $vm; done }

It just looks through all the running domains and prints the domain name if it finds the interface you’re looking for. So now you can do:

% find-vnet vnet2
goulah

I wonder if others out there have a clever way of doing this or if this is really the best way? If you know of a better way leave a comment. Perhaps the problem is not common enough that a libvirt built-in command exists.
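
One alternative I can think of is to skip the XML entirely and loop over virsh domiflist instead; a quick sketch along the same lines (the function name is arbitrary, and it assumes your virsh supports list --name):

function find-vnet-alt() { for vm in $(sudo virsh list --name); do sudo virsh domiflist "$vm" | grep -q "$1" && echo "$vm"; done }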



Oct 27 2012

Looking Under the Covers of StatsD

Category: Debugging | jgoulah @ 12:24 PM

Intro

StatsD is a network daemon that runs on the Node.js platform and listens for statistics, like counters and timers. Packets are then sent to one or more pluggable backend services. The default service is Graphite. Every 10 seconds the stats sent to StatsD are aggregated and forwarded on to this backend service. It can be useful to see what stats are going through both sides of the connection – from the client to StatsD and then from StatsD to Graphite.

Management Interface

The first thing to know is that there is a simple management interface built in that you can interact with. By using either telnet or netcat you can find information directly from the command line. By default this is listening on port 8126, but that is configurable in StatsD.

The simplest thing to do is send the stats command:

% echo "stats" | nc statsd.domain.com 8126          
uptime: 365
messages.last_msg_seen: 0
messages.bad_lines_seen: 0
graphite.last_flush: 5
graphite.last_exception: 365

This tells us a bit about the current state of the server, including the uptime and the last time a flush was sent to the backend. Our server has only been running for 365 seconds. It also tells us how long it’s been since StatsD received its last message, saw a bad line, and hit its last exception. Things look pretty normal.

You can also get a dump of the current timers:

(echo "timers" | nc statsd.domain.com 8126) > timers

As well as a dump of the current counters:

(echo "counters" | nc statsd.domain.com 8126) > counters

Take a look at the files generated to get an idea of the metrics StatsD is currently holding.

On the Wire

Beyond that, it’s fairly simple to debug certain StatsD or Graphite issues by looking at what’s going on in real time on the connection itself. On the StatsD host, be sure you’re looking at traffic on the default StatsD listen port (8125); specifically, here I’m grepping for the stat that I’m about to send, which will be called test.goulah.myservice:

% sudo tcpdump -t -A -s0 dst port 8125 | grep goulah
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

Then we fake a simple client on the command line to send a sample statistic to StatsD like so:

echo "test.goulah.myservice:1|c" | nc -w 1 -u statsd.domain.com 8125

Back on the StatsD host, you can see the metric come through:

e......."A.test.goulah.myservice:1|c
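
The other metric types can be poked at the same way; a couple of quick sketches using the standard StatsD wire format (the metric names are made up, and gauge support depends on your StatsD version):

echo "test.goulah.myservice.timing:320|ms" | nc -w 1 -u statsd.domain.com 8125
echo "test.goulah.myservice.gauge:42|g" | nc -w 1 -u statsd.domain.com 8125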

There is also the line of communication from StatsD to the Graphite host. Every 10 seconds it flushes its metrics. Start up another tcpdump command, this time on port 2003, which is where carbon listens on the Graphite side:

% sudo tcpdump -t -A -s0 dst port 2003 | grep goulah
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

Every 10 seconds you should see a bunch of stats go by. This is what you are flushing into the Graphite backend. In our case I’m doing a grep for goulah, and showing the data aggregated for the metric we sent earlier. Notice, though, that there are two metrics here that look slightly different from the metric we sent. StatsD sends two lines for every counter: the first is the aggregated metric prefixed with the stats namespace, which is the calculated per-second value, and the second is the raw count prefixed with stats_counts. In our case they are identical:

stats.test.goulah.myservice 0 1351355521
stats_counts.test.goulah.myservice 0 1351355521

Conclusion

Now we can get a better understanding of what StatsD is doing under the covers on our system. If metrics don’t show up on the Graphite side it helps to break things into digestible pieces to understand where the problem lies. If the metrics aren’t even getting to StatsD, then of course they can’t make it to Graphite. Or perhaps they are getting to StatsD but you are not seeing the metrics you would expect when you look at the graphs. This is a good start on digging into those types of problems.
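
One more trick along those lines: you can take StatsD out of the picture entirely and write a metric straight to carbon’s plaintext port, which verifies the Graphite half on its own (a sketch; the hostname and metric name are placeholders):

echo "test.goulah.direct 1 $(date +%s)" | nc -w 1 graphite.domain.com 2003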



Oct 06 2012

Proxying Your iPad/iPhone Through OpenVPN

Category: iOS, VPN | jgoulah @ 6:29 PM

Intro

The question of how to connect to our office OpenVPN network from an iPad or iPhone comes up often. On OS X it’s pretty simple: use Viscosity or Tunnelblick. But to my knowledge there is nothing like that for iDevices. However, it’s possible to connect these using a SOCKS proxy. The SOCKS server lives on your laptop connected to the VPN, and the iPhone/iPad is set up to connect through that. Obviously you should only do this on a secured wireless network and/or secure the SOCKS server so that only you have access. I wrote these notes a couple years ago and figured it’s worth sharing since it comes up once in a while.

Setting Up the SOCKS Server

Setting up the server is really easy; we can use ssh. Just run this command on the laptop that is connected to your VPN:

ssh -N -D 0.0.0.0:1080 localhost

If you want it to run in the background, also use the -f option. You may also want to set up some access control with iptables, which is a bit out of scope for this article, but more information can be found here.
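
For example, a pair of rules like this would restrict the SOCKS port to a single client (a sketch only; 192.168.1.50 is a made-up address standing in for your iPad, and you’d fold this into whatever firewall setup you already have):

sudo iptables -A INPUT -p tcp --dport 1080 -s 192.168.1.50 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 1080 -j DROP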

Setting Up the iPhone/iPad to use SOCKS

Setup the PAC File

The only way to configure the iPhone/iPad to use SOCKS is to set up a PAC file. Create a file with the .pac extension, and put this into it:

function FindProxyForURL(url, host)
{
return "SOCKS 192.168.X.XXX";
}

Make sure to use the IP address of the laptop that we set up the SOCKS server on. Now put this file in any web accessible location. It doesn’t matter if it’s internal to your network or external, as long as you can reach it over the web. How to actually serve a page is beyond the scope of this article, but if you’ve gotten this far you probably know how to do this.
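
If you just need something quick and dirty to serve it, Python’s built-in web server run from the directory holding the .pac file works (assuming Python 2.x; the path below is a placeholder):

cd /path/to/pac/dir
python -m SimpleHTTPServer 8000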

Configure the iPhone/iPad

Now you just have to tell the iPad to use the PAC file so that it will proxy web requests through the laptop’s VPN connection.

Click:  Settings -> WiFi

Then click the blue arrow to the right of your access point and under HTTP Proxy choose Auto. In the URL field, put the full URL to the PAC file that we set up. Make sure to include the http:// protocol in this URL. For example, it may look something like: http://yourserver.com/myproxy.pac

Sometimes getting this setting to stick is tricky. I recommend clicking out of the text field into another field and letting the iPhone spinner in the upper left finish.

Conclusion

If you did everything right you should be able to hit websites behind your VPN connection. One way to verify that it’s working is to start ssh with the -vvv option. When you request pages through the proxy you will see a bunch of output; if there is no output, you’re not using the proxy.
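
You can also test the proxy from the laptop itself before touching the iPad; something like this should fetch a page that is only reachable over the VPN (internal.example.com is a stand-in for one of your VPN-only hosts, and --socks5-hostname makes curl resolve the name through the tunnel):

curl --socks5-hostname 127.0.0.1:1080 http://internal.example.com/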



Jul 20 2012

Development is Production Too

Category: Conferences | jgoulah @ 8:56 PM

I was at OSCON this week, and my friend Erik Kastner and I did a talk about development environments: specifically, what to avoid and how to keep environments consistent across development and production. As usual the slides are not fully self-explanatory without seeing the accompanying talk, but here they are anyway:



May 28 2012

Using the New “load_recipe” Chef Function with Shef

Category: Coding | jgoulah @ 9:21 PM

If you are developing chef recipes it really helps to use the command line tool called shef. Shef is just a REPL to run chef in an interactive ruby session. If you haven’t ever tried it, you can find some nice instructions over here to get you going.

Shef gives an easy way to iterate on your recipes so that you can make small changes and see the effects. However, I found the include_recipe function would only load the recipe one time, and complain that it’s seen the recipe before on subsequent tries. I added a small patch that implemented a new function called load_recipe that allows you to reimport the recipe. The problem is that once the recipe is loaded again, the resource list is reimported, giving us the same set of resources twice.

You can see the list of resources that are loaded up like so:

chef:recipe > puts run_context.resource_collection.all_resources
package[php]
package[php-common]

If you were to call load_recipe again, the list would double, and the new code would be run second when calling run_chef:

chef:recipe > puts run_context.resource_collection.all_resources
package[php]
package[php-common]
package[php]
package[php-common]

The trick is that you can clear this list with this command:

run_context.resource_collection = Chef::ResourceCollection.new

So to use load_recipe, you should call the above first to clear the current list. This can be done in one line like so:

run_context.resource_collection = Chef::ResourceCollection.new; load_recipe "php"

Hopefully I’ll be able to patch things to add a reload_recipe that overwrites the old resources so you don’t have to use this trick, but for now this will work to get quick iterations going.



Apr 15 2012

Percona Live 2012 – The Etsy Shard Architecture

Category: Conferences, Databases, Systems | jgoulah @ 2:58 PM

I attended Percona Conf in Santa Clara last week. It was a great 3 days of lots of people dropping expert MySQL knowledge. I learned a lot and met some great people.

I gave a talk about the Etsy shard architecture, and had a lot of good feedback and questions. A lot of people do active/passive, but we run everything in active/active master-master. This helps us keep a warm buffer pool on both sides, and if one side goes out we only lose about half of the queries until it’s pulled and they start hashing to the other side. Some details on how this works are in the slides below.



Jan 09 2012

Distributed MySQL Sleuthing on the Wire

Category: Databases, Real-time Web, SSH, Systems | jgoulah @ 8:52 AM

Intro

Oftentimes you need to know what MySQL is doing right now, and if you are handling heavy traffic you probably have multiple instances of it running across many nodes. I’m going to start by showing how to take a tcpdump capture on one node and a few ways to analyze it, and then go into how to take a distributed capture across many nodes for aggregate analysis.

Taking the Capture

The first thing you need to do is to take a capture of the interesting packets. You can either do this on the MySQL server or on the hosts talking to it. According to this percona post this command is the best way to capture mysql traffic on the eth0 interface and write it into mycapture.cap for later analysis:

% tcpdump -i eth0 -w mycapture.cap -s 0 "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2"
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
47542 packets captured
47703 packets received by filter
60 packets dropped by kernel

Analyzing the Capture

The next step is to take a look at your captured data. One way to do this is with tshark, which is the command line part of wireshark. You can do yum install wireshark or similar to install it. Usually you want to do this on a different host than the one taking traffic since it can be memory and CPU intensive.

You can then use it to reconstruct the mysql packets like so:

% tshark -d tcp.port==3306,mysql -T fields -R mysql.query -e frame.time -e ip.src -e ip.dst -e mysql.query -r mycapture.cap

This will give you the time, source IP, destination IP, and query, but this is still really raw output. It’s a nice start, but we can do better. Percona has released the Percona Toolkit, which includes some really nice command line tools (including what used to be in Maatkit).

The one we’re interested in here is pt-query-digest.

It has tons of options and you should read the documentation, but here are a few I’ve used recently.

Let’s say you want to get the top tables queried from your tcpdump:

% tcpdump -r mycapture.cap -n -x -q -tttt | pt-query-digest --type tcpdump --group-by tables --order-by Query_time:cnt \
 --report-format profile --limit 5
reading from file mycapture.cap, link-type EN10MB (Ethernet)

# Profile
# Rank Query ID Response time Calls R/Call Apdx V/M   Item
# ==== ======== ============= ===== ====== ==== ===== ====================
#    1 0x        0.3140  6.1%   674 0.0005 1.00  0.00 shard.images
#    2 0x        0.8840 17.1%   499 0.0018 1.00  0.03 shard.activity
#    3 0x        0.1575  3.1%   266 0.0006 1.00  0.00 shard.listing_images
#    4 0x        0.1680  3.3%   265 0.0006 1.00  0.00 shard.connection_edges_reverse
#    5 0x        0.0598  1.2%   254 0.0002 1.00  0.00 shard.listing_translations
# MISC 0xMISC    3.5771 69.3%  3534 0.0010   NS   0.0 <86 ITEMS>

Note the tcpdump options I used this time, which the tool requires to work properly when passing --type tcpdump. I also grouped by tables (as opposed to full queries) and ordered by the count (the Calls column). It will stop at your --limit and group the rest into MISC, so be aware of that.

You can remove the --order-by to sort by response time, which is the default sort order, or provide other attributes to sort on. We can also change the --report-format, for example to header:

% tcpdump -r mycapture.cap -n -x -q -tttt | pt-query-digest --type tcpdump --group-by tables --report-format header 
reading from file mycapture.cap, link-type EN10MB (Ethernet)

# Overall: 5.49k total, 91 unique, 321.13 QPS, 0.30x concurrency _________
# Time range: 2012-01-08 15:52:05.814608 to 15:52:22.916873
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time             5s     3us   114ms   939us     2ms     3ms   348us
# Rows affecte         316       0      13    0.06    0.99    0.29       0
# Query size         3.64M      18   5.65k  694.98   1.09k  386.68  592.07
# Warning coun           0       0       0       0       0       0       0
# Boolean:
# No index use   0% yes,  99% no

If you set the --report-format to query_report you will get gobs of verbose information that you can dive into, and you can use the --filter option to do things like finding queries that didn’t use an index:

% tcpdump -r mycapture.cap -n -x -q -tttt | \
  pt-query-digest --type tcpdump --filter '($event->{No_index_used} eq "Yes" || $event->{No_good_index_used} eq "Yes")'

Distributed Capture

Now that we’ve taken a look at capturing and analyzing packets from one host, it’s time to dive into looking at our results across the cluster. The main trick is that tcpdump provides no option to stop capturing after a set time; you have to explicitly kill it. Beyond that, we’ll just use dsh to send our commands out. We’ll assume you have a user that can hop around in a password-less fashion using ssh keys; setting that up is well outside the scope of this article, but there’s plenty of info out there on how to do that.

There are a few ways you can let a process run on a “timeout”, but I’m assuming we don’t have a wrapper script written or tools like a bash timeout script or the timeout command distributed in coreutils available.
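
(For reference, if coreutils timeout were available on every target host, the single-host capture could be wrapped as simply as the hypothetical sketch below; the rest of this post doesn’t assume that.)

sudo timeout 10 tcpdump -i eth0 -w mycapture.cap -s 0 "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2"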

So we’re going off the premise that you will background the process and kill it after a sleep by grabbing its pid:

( /path/to/command with options ) & sleep 5 ; kill $!

Simple enough, except we’ll want to collect the output on each host, so we need to ssh the output back over to the machine issuing the commands, using a pipe to grab the stdout. This means that $! will return the pid of our ssh command instead of our tcpdump command. We end up having to do a little trick to kill the right process, since the capture won’t be readable if we kill the ssh command that is writing the output. We’ll need to kill tcpdump, and to do that we can look at the parent pid of the ssh process, ask pkill (similar to pgrep) for all of the processes that have this parent, and finally kill the oldest one, which ends up being our tcpdump process.

The end result looks like this if I were to run it across two machines:

% dsh -c -m web1000,web1001 \
   'sudo /usr/sbin/tcpdump -i eth0 -w - -s 0 -x -n -q -tttt "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2" | \
   ssh dshhost "cat - > ~/captures/$(hostname -a).cap" & sleep 10 ; \
   sudo pkill -o -P $(ps -ef | awk "\$2 ~ /\<$!\>/ { print \$3; }")'

So this issues a dsh to two of our hosts (you can make a dsh group with 100 or 1000 hosts though) and runs the command concurrently on each (-c). We issue our tcpdump on each target machine and send the output to stdout for ssh to then cat back to a directory on the source machine that issued the dsh. This way we have all of our captures in one directory, with each file named after the host the tcpdump was run on. The sleep is how long the dump is going to run before we kill off the tcpdump.

The last piece of the puzzle is to get these all into one file and we can use the mergecap tool for this, which is also part of wireshark:

% /usr/sbin/mergecap -F libpcap -w output.cap *.cap

And then we can analyze it like we did above.
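
For example, to repeat the table profile from earlier, this time against traffic from the whole cluster, something like:

% tcpdump -r output.cap -n -x -q -tttt | pt-query-digest --type tcpdump --group-by tables --order-by Query_time:cnt \
 --report-format profile --limit 5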

Further Reading

References

http://www.mysqlperformanceblog.com/2011/04/18/how-to-use-tcpdump-on-very-busy-hosts

http://stackoverflow.com/questions/687948/timeout-a-command-in-bash-without-unnecessary-delay

http://www.xaprb.com/blog/2009/08/18/how-to-find-un-indexed-queries-in-mysql-without-using-the-log/

Breaking the distributed command down further

Just to clarify this command a bit more, particularly how the kill part works since that was the trickiest part for me to figure out.

When we run this

$ dsh -c -m web1000,web1001 \
   'sudo /usr/sbin/tcpdump -i eth0 -w - -s 0 -x -n -q -tttt "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2" | \
   ssh dshhost "cat - > ~/captures/$(hostname -a).cap" & sleep 10 ; \
   sudo pkill -o -P $(ps -ef | awk "\$2 ~ /\<$!\>/ { print \$3; }")'

on the server the process list looks something like

user     12505 12504  0 03:12 ?        00:00:00 bash -c sudo /usr/sbin/tcpdump -i eth0 -w - -s 0 -x -n -q -tttt "port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2" | ssh myhost.myserver.com "cat - > /home/etsy/captures/$(hostname -a).cap" & sleep 5 ; sudo pkill -o -P $(ps -ef | awk "\$2 ~ /\<$!\>/ { print \$3; }")
pcap     12506 12505  1 03:12 ?        00:00:00 /usr/sbin/tcpdump -i eth0 -w - -s 0 -x -n -q -tttt port 3306 and tcp[1] & 7 == 2 and tcp[3] & 7 == 2
user     12507 12505  0 03:12 ?        00:00:00 ssh myhost.myserver.com cat - > ~/captures/web1001.cap

So $! is going to return the pid of the ssh process, 12507. We use awk to find the process matching that, and then print the parent pid out, which is then passed to the -P arg of pkill. If you use pgrep to look at this without the -o you’d get a list of the children of 12505, which are 12506 and 12507. The oldest child is the tcpdump command and so adding -o kills that guy off.

So if we were only running the command on one host we could use something much simpler:

ssh dbhost01 '(sudo /usr/sbin/tcpdump -i eth0 -w - -s 0 port 3306) & sleep 10; sudo kill $!' | cat - > output.cap



Dec 01 2011

iOS Development with Vim and the Command Line

Category: Coding, iOS | jgoulah @ 11:12 PM

Intro

I’ve recently been playing around with some iOS code, and while I find Xcode an excellent editor with a lot of great built-ins, I’m a vim user at heart. It’s the editor I’ve been using on a daily basis for years, so it’s tough to switch when I want to do iPhone development. Of course there are some things you just have to use Xcode for, but I’ve been able to tailor vim to do most of my dev work, as well as build and run the simulator on the command line. I couldn’t find a single cohesive tutorial on this, so these are the pieces I put together to make this work for me.

Editing

The Vim Cocoa Plugin

To start, there is a decent plugin called cocoa.vim to get some of the basics going. It attempts to do a lot of things, but I’m only using it for a few specific ones: the syntax highlighting, the method list, and the documentation browsing functionality.

One caveat is that the plugin is not very well maintained, given that the last release was nearly two years ago. Because of this you’ll want to grab the version from my github:

git clone https://jgoulah@github.com/jgoulah/cocoa.vim.git

Install it by following the simple instructions in the README.

You’ll get the syntax highlighting by default, and the method listing is pretty straightforward. While you have a file open you can type :ListMethods and navigate the methods in the file. You may want to map this to a key combo; here’s what I have in my .vimrc, but of course you can map it to whatever you like:

map <leader>l :ListMethods

There’s another command that you can use called :CocoaDoc that will search the documentation for a keyword and open it in your browser. For example, you can type :CocoaDoc NSString to open the documentation for NSString. However, OS X gives a warning every time, which is pretty annoying. You can disable this on the command line:

defaults write com.apple.LaunchServices LSQuarantine -bool NO

After you’ve run the command, restart your Finder (or reboot). Lastly, it would be annoying to type the keyword every time, so you can configure a mapping in your .vimrc so that it will try to launch the documentation for the word the cursor is on:

map <leader>d :exec("CocoaDoc ".expand("<cword>"))<CR>

Navigation

One of the biggest things I miss when I’m in Xcode is the functionality you get from ctags. Of course you can right click on a function and “jump to definition”. Works fine, but feels clunky to me. I love being able to hop through code quickly with ctags commands. I’ve talked about this before, so I’m not going to give a full tutorial about ctags, but I’ll quickly go over how I made it work for my code.

First go ahead and grab the version from github that includes Objective-C support:

git clone https://github.com/mcormier/ctags-ObjC-5.8.1

Compile and install in the usual way. Now you can utilize it to generate yourself a tags file. Mine looks like this:

#!/bin/bash
cd ~/wdir
for i in MyApp ios_frameworks ; do
echo "tagging $i"
pushd $i
/usr/local/bin/ctags -f ~/.vimtags/$i -R \
--exclude='.git' \
--langmap=objc:.m.h \
--totals=yes \
--tag-relative=yes \
--regex-objc='/^[[:space:]]*[-+][[:space:]]*\([[:alpha:]]+[[:space:]]*\*?\)[[:space:]]*([[:alnum:]]+):[[:space:]]*\(/\1/m,method/' \
--regex-objc='/^[[:space:]]*[-+][[:space:]]*\([[:alpha:]]+[[:space:]]*\*?\)[[:space:]]*([[:alnum:]]+)[[:space:]]*\{/\1/m,method/' \
--regex-objc='/^[[:space:]]*[-+][[:space:]]*\([[:alpha:]]+[[:space:]]*\*?\)[[:space:]]*([[:alnum:]]+)[[:space:]]*\;/\1/m,method/' \
--regex-objc='/^[[:space:]]*\@property[[:space:]]+.*[[:space:]]+\*?(.*);$/\1/p,property/' \
--regex-objc='/^[[:space:]]*\@implementation[[:space:]]+(.*)$/\1/c,class/' \
--regex-objc='/^[[:space:]]*\@interface[[:space:]]+(.*)[[:space:]]+:.*{/\1/i,interface/'
popd
done;

Just a little explanation here: I’m going into my working directory and generating a tags file for both MyApp and ios_frameworks. ios_frameworks is just a symlink that points to /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.0.sdk/System/Library/Frameworks
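
If you want to set things up the same way, creating that symlink is the only extra step (~/wdir is just where my checkouts live; adjust for yours):

ln -s /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.0.sdk/System/Library/Frameworks ~/wdir/ios_frameworks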

If you’re interested in how I load up these tag files you can check out my .vimrc on github.

Another handy Xcode feature I wanted to emulate was flipping between the header and source file. Luckily there is yet another vim plugin that will do this for you, called a.vim.

Grab it off github and install it by dropping it in your ~/.vim/plugin directory:

git clone https://github.com/vim-scripts/a.vim.git
cp a.vim/plugin/a.vim  ~/.vim/plugin

There’s a tiny bit of configuration that you can put in .vim/ftplugin/objc.vim:

let g:alternateExtensions_m = "h"
let g:alternateExtensions_h = "m"
map <leader>s :A

This just tells it which files to use for source and header files and creates another shortcut for us to swap between the two.

Build and Run

So now that we’ve got most of the basic things that help us edit code, it’s time to build the code. You can use xcodebuild for this on the command line. If you type xcodebuild -usage you can see the variety of options. Here’s what worked for me to build my target app, noting that I set up everything in Xcode first and made sure it built there. After that you can just specify the target, sdk, and configuration. This will create a build under the Debug configuration for the iPhone simulator:

xcodebuild -target "MyApp Enterprise" -configuration Debug -project MyApp.xcodeproj -sdk iphonesimulator5.0

And now to run the app. Back on github there’s a nice little app called iphonesim. Download it and just build it with Xcode:

git clone https://github.com/jhaynie/iphonesim.git
open iphonesim.xcodeproj

Put the resulting binary somewhere in your PATH like /usr/bin.

Now we can launch the app we just built in the simulator:

iphonesim launch /Users/jgoulah/wdir/MyApp/build/Debug-iphonesimulator/MyAppEnterprise.app

The last thing you’ll want is logs. This is another thing that is arguably more easily accessible in Xcode, but we can actually get the console log. If you’re using something like NSLog to debug, this will grab all of that output. You’ll have to add a couple of lines to your main.m source file:

NSString *logPath = @"/some/path/to/output.log";
freopen([logPath fileSystemRepresentation], "a", stderr);

And then you can run tail -f /some/path/to/output.log

Conclusion

This is how I use vim to compile iPhone apps. Your mileage may vary and there are many ways to do this; it just happens to be the path that worked well for me without having to pull my hair out too much. Xcode is a great editor and there are some things you will still have to use it for, such as the debugging capabilities and running the app on the phone itself. Of course even these are possible with some work, so if anyone has tips, feel free to leave a comment.



Sep 25 2011

Hacking on Backbone.js and Rdio API to Build a Playlist App

Category: Javascript | jgoulah @ 11:47 AM

Intro

I’ve been meaning to play around with backbone.js for a while, trying to piece together what can make a sensible framework for frontend code. There seem to be a few out there; another I’ve wanted to try is sproutcore. The truth is, I’m a backend coder, a database guy, a systems engineer… but I love all aspects of coding, and although I’m quite terrible at design and CSS, I’ve been getting more into Javascript lately. It’s quite a nice language for server side development with node: small, simple, server based apps that work well with websockets. I’ve written a little bit about that in the past. Everything is moving to the web, and it’s nice to have a modern and easy to use interface to place in front of your tools.

The app is a simple playlist. You can search for songs, get a list of potential matches, and add a song to your playlist. You can cycle back and forth through the songs, pause them, and delete them from the list. That’s about it!

If you want to cut to the code you can find it here on github.

Some Assumptions

Part of the goal for this project was to get out of my comfort zone and learn something new, so I decided to make it a frontend-only app: all HTML/Javascript and no backend services. Well, none that I write; it does some interaction with other services or it would be pretty boring. If you end up checking the project out, you’ll find the markup could use some help (patches welcome!). But the goal was just to get something functional. This isn’t the way I’d go about things if I were building an app to start a company with; it’s just a fun hack.

I didn’t know backbone at all, and it seems hard to find good examples that are more than “here’s a text box on a page”. One of the better ones out there that is well documented is the todo list app. It turns out this app fit rather nicely with my idea of a playlist, so I decided to start off with that codebase and modify it from there. It also uses the handy LocalStorage driver, which implements an HTML5 specification for storing structured data on the client side.

I also needed a simple way to query songs, and since the Rdio API is OAuth based, which can get complicated (and probably more so in javascript, taking security into account), I found a nice API from echonest that turns out to have a lot of functionality that’s easy to use. This is what I’m using to query for the songs.

I am using the Rdio playback API to actually play the songs from the Rdio service, and that part is based largely on their hello web playback example. Their API does allow you to build playlists in their service, which is probably the more correct way to do this since you’d be able to access them in their web UI, but that adds complexity so it’s not used in this version.

The App

So check out playlister and let me know what you think. There’s a lot of room for improvement, especially with the markup and how some of the pieces fit together. All of the backbone code is in one file and that should be remedied. It could also be made to sync to a backend instead of localStorage, save the playlist into Rdio, and things like that. When you download it, just plug your API keys into js/token.js and you’re pretty much ready to go.

