It was recently time to start applying the real skins to the Android app I'm building.  Android stretchable art assets are all encoded as 9-patch PNG files.  This basically means that the regions to stretch and the content padding are specified by setting pixels in a 1-pixel border around the image.  The system then stretches and places content based on these guides.  There is plenty of documentation out there for background.  There is even an iOS library for using 9-patch assets in a similar fashion: https://github.com/tortuga22/Tortuga22-NinePatch.

The problem comes when it is time to create these bad boys.  There is a GUI tool that comes with the Android SDK for doing it.  You open one image at a time and add the guides.  If you edit the image or recut it later, you probably have to do this all over again.  If you have lots of images with the same dimensions but different colors, you'll be spending a lot of time with this little tool.

Since it is just a 1px border, you can also use your Photoshop expertise to draw the guides.  But this is tricky because Android needs the alpha channels to be just so, and if they are not, it will royally screw up your project when you try to build.  It's easy to mess up.

So I wrote a command-line editor for these things.  It lets you convert normal PNGs to 9-patches (add the border), strip the border to get back to a normal PNG, set the guides, and use another 9-patch as a template to apply the same guides to other images.  It made my life easier.  Enjoy.

https://github.com/stellaeof/9-patchedit

The GitHub README (at the bottom of the repo page) covers where to get the binary and shows example usage.
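
If you just want a feel for what the border manipulation amounts to, you can approximate the add/strip operations with plain ImageMagick (this is only an illustration of the 9-patch format, not the tool's actual interface):

# add a transparent 1px border, turning a normal png into a blank 9-patch canvas
convert button.png -bordercolor none -border 1 button.9.png

# strip the 1px border back off, recovering the normal png
convert button.9.png -shave 1x1 button.png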

Posted in blog, geeky | 6 Comments »

android on desktop java April 29th, 2011

One of the things I've been excited about recently has been the ability to forklift the non-interactive core of my mobile app onto standard desktop Java.  Admittedly, I get excited about such things easily, but this is pretty cool.  The app I'm working on has a core that does some fairly non-trivial connection management and location processing, and I found that being able to run it on the desktop has definite bonuses.  For one, it is much easier to simulate connected clients doing interesting things on the desktop than when running a single instance in the emulator.  I even have a couple of “test” robot users that are always logged in and virtually interacting with the system, and my real app can talk to and interact with them.

This approach wouldn't work for every kind of app, but when it does fit, it's pretty powerful.  I'd much rather use my actual codebase to simulate robot users than write something from scratch, and I get the benefit of putting the real code through a lot more code-run-debug cycles than it would otherwise get on just the device.

So why aren't we seeing more of this talked about and done?  It took me about an hour to hack together enough of an android.jar to get all of the basics going.  My results are here: https://github.com/stellaeof/android-desktop-headless .  This was admittedly a quick and dirty solution, but with more time it wouldn't be all that hard to go through and mock/port a pretty large swath of the non-UI platform so it can run on a standard VM.

Posted in geeky | No Comments »

It's been a number of years since I've had my sysadmin hat on for real, and I was happy today to find out that the world has changed for the better.  The last time I did this, I had to deal with setting up actual equipment, starting with an ethernet cable in one datacenter and two incoming T1s at another.  I have vague recollections and nightmares of trying to decipher the meaning of wink starts and esoteric T1 setup stuff while trying to balance the Cisco book open on top of a chair in the datacenter supporting a monitor and keyboard.  And of course the cell phone calls to SBC.  “Ok, try now…  No, it's still blinking.”  And then once I got the routers online, I had to navigate setting up a basic PKI to establish the VPN between sites so that things could actually talk to each other.  Then came the servers, and the monitoring systems… I managed through it, but there was trauma.  All in all, the experience was worthwhile in the same way I imagine basic training is to people going into the army.  I would *never* do it again, but it gave me a perspective and some skills that few software developer types have.

But now, thankfully, it's 2011 and things are easier.  Of course, everyone knows that just being able to log in to an account at your favorite cloud provider and provision instances is a godsend, but the thing that has always bugged me about provisioning small numbers of internet-connected servers is the feeling that I just walked out of my front door without any clothes on.  I say small numbers because if I were setting up an orderly install with any bulk, I would spend some time making sure it was all seamless, secure and accessible.  But more often these days, it's small numbers of servers for one task or another, and spending any time making things manageable isn't usually in the cards.  There are services out there that are more turnkey, but I've never really used them.

So, continuing my naked-outside analogy, the first thing I always do is grab a towel off the nearby clothes line (i.e., lock down SSH with public-key auth and a few other security bits).  This creates a single server that is more or less secure, but then I often want to connect to my private Git repo, mirror files onto it, or just generally manage it by connecting to administrative ports, whether for the database server, memcache, file sharing, jconsole, etc.  I don't want any of these ports flapping out on the internet, but I also get tired of doing one-off SSH tunnels to get at them from my workstation.

Today, I finally did spend a few hours working on this problem.  This post is somewhere between a how-to and notes to myself should I need to do this again.  The tools I used were:

  • OpenVPN, to build a private management network between the hosts
  • UFW, to manage the host firewalls
  • An always-on Linode VPS to act as the VPN concentrator

I've built OpenVPN + iptables management backplanes before, but what really surprised me today was how easy this has gotten to be on recent Ubuntu installs.  I've got one permanent VPS at Linode that I used as the VPN concentrator.  All I had to do was “apt-get install openvpn ufw” to get the bits.  Then, on the concentrator, I took the following steps:

  1. Expand the OpenVPN “easy-rsa” files (/usr/share/doc/openvpn/examples/easy-rsa) into a fresh Git repo and run the commands to generate a server cert (sketched just after this list)
  2. Copy the server cert and config file to /etc/openvpn (sample config is at /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz)
  3. /etc/init.d/openvpn restart
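
Step 1 is the only part with much meat to it.  From memory of the stock easy-rsa 2.x scripts (the exact script names may differ in your version), generating the server credentials and staging them for step 2 looks roughly like:

cd easy-rsa                  # the copy expanded into the fresh git repo
source ./vars                # sets KEY_SIZE, KEY_DIR and the cert defaults
./clean-all                  # wipes ./keys -- only do this on a brand new setup
./build-ca                   # creates keys/ca.crt and keys/ca.key
./build-key-server server    # creates keys/server.crt and keys/server.key
./build-dh                   # creates keys/dh1024.pem (Diffie-Hellman params)

# step 2 amounts to copying the credentials next to the server config
cp keys/ca.crt keys/server.crt keys/server.key keys/dh1024.pem /etc/openvpn/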

Then I added the following “mkclient” script to my easy-rsa git repo:

#!/bin/bash
# mkclient: generate (if needed) and collect the keys/config for one client

die() {
	echo "$*"
	exit 1
}

td=$(dirname "$0")
name="$1"
if [ -z "$name" ]; then
	die "Need client name"
fi

cdir="$td/clients/$name"
mkdir -p "$cdir"

if ! [ -f "$td/keys/$name.key" ]; then
	# Generate the client key/cert
	source "$td/vars"
	"$td/build-key" "$name"
fi

# Collect everything a connecting client needs into clients/<name>/
cp "$td/client.conf" "$cdir" || die "Could not find client.conf"
cp "$td/keys/ca.crt" "$cdir" || die "Could not find ca.crt"
cp "$td/keys/$name.crt" "$cdir/client.crt" || die "Could not find $name.crt"
cp "$td/keys/$name.key" "$cdir/client.key" || die "Could not find $name.key"

I also put a client.conf in the repo directory (the stock example is at /usr/share/doc/openvpn/examples/sample-config-files/client.conf); it serves as the default configuration file for any connecting client, with the main edit being the “remote” directive pointed at the concentrator.  Then setting up a client is just a matter of:

  1. ./mkclient myclient
  2. scp clients/myclient/* root@myclient:/etc/openvpn
  3. ssh root@myclient /etc/init.d/openvpn restart

If everything works, your client should pop onto the network.  I treated my workstation as a client as well, but you can use any OpenVPN GUI to do the same thing given a conf file and keys.

I've configured OpenVPN before, so I didn't really follow any instructions on modifying the confs.  The basic process is to take the stock server.conf and client.conf examples and make the following changes (the net effect on the server side is sketched after this list):

  • Keep the bridging bits commented out.  You want a routed network.
  • Keep the default of UDP.
  • Change the “server” directive to a private subnet of your own choosing.
  • Make sure that “ifconfig-pool-persist ipp.txt” is not commented out.  This keeps your clients on the same IP addresses over time.
  • Uncomment “client-to-client”, which allows all clients to see each other as well as the server (this differs from a typical road-warrior config because our “clients” are mostly servers that we are building a management network for).
  • Potentially tweak “keepalive”.  Keep in mind, though, that any firewall filtering your UDP 1194 traffic is most commonly doing so statefully, and if your keepalive interval is longer than the firewall's timeout, you will start to drop packets.
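
For reference, after those edits the non-comment lines of my server.conf end up looking roughly like this (reconstructed from memory of the stock sample, so your subnet, key file names and keepalive values will differ):

grep -Ev '^(#|;|$)' /etc/openvpn/server.conf

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
server 10.1.0.0 255.255.0.0
ifconfig-pool-persist ipp.txt
client-to-client
keepalive 30 120
persist-key
persist-tun
status openvpn-status.log
verb 3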

Ok, so far so good.

What I was after from this point was host-based firewall configs that are default-deny, with holes punched through for internet-accessible services, but with the management OpenVPN network able to access anything on the host.  This is the security model of a hard candy shell with a soft gooey center.  I wouldn't necessarily recommend it as-is for production installs, but for dev/test systems or non-sensitive prod systems, you can't beat how easy it is to get at everything.  If anyone gets onto any one of the hosts, they will be able to access privileged ports on all of the hosts.  This is more or less just like any office LAN.

This is where my memory was telling me to buckle down and get cozy with some obtuse iptables configs, but things have changed.  Ubuntu now comes with UFW, which stands for Uncomplicated Firewall.  It's built to make administering host-based firewalls brain-dead simple.  Perfect.  To get it going on a host, run “ufw --force enable && ufw default allow”.  Note that I put the two into one command, enabling the firewall and setting its default policy to allow, so that I don't run the risk of dropping my SSH connection before I've got my rules in place.

As an example for one of my hosts, I then ran the following commands:

# Allow openvpn (this really only applies to the server)
ufw allow 1194/udp
ufw allow 1194/tcp

# Allow ssh
ufw allow 22/tcp

# Allow web
ufw allow 80/tcp
ufw allow 443/tcp

# Allow all from private management network (from OpenVPN conf)
ufw allow from 10.1.0.0/16

# Change default to deny everything else
ufw default deny

And that’s it. You can see the config by running “ufw status verbose” or its bigger brother “iptables -L”. You should also do some poking at it from the internet side and the vpn side to ensure that your private ports really are private.
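
A couple of quick spot checks along those lines (substitute your own addresses, and a port you know should only be reachable over the vpn):

# from somewhere out on the internet: only 22, 80 and 443 should show up open
nmap -Pn {serverip}

# from a vpn client: a management-only port (postgres here) should still answer
nc -zv {servervpnip} 5432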

The only gotcha I experienced was already alluded to: the firewall config was blocking some OpenVPN traffic.  To diagnose dropped traffic, run “ufw logging on” and “tail -f /var/log/syslog”.  An example of the problem is below:

Apr 28 17:21:27 client kernel: [UFW BLOCK] IN=eth0 OUT= MAC=xxxxxx SRC={vpnserverip} DST={clientip} LEN=129 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=1194 DPT=43152 LEN=109

Here we see that the kernel is blocking UDP packets sent from port 1194 on the vpn server to the client.  This happens because the OpenVPN client initiates the vpn “connection” by sending a UDP packet from a randomly assigned local port to port 1194 on the vpn server, and the replies come back to that random port.  If the firewall rules are flushed after this exchange, or the firewall's stateful packet filter otherwise forgets the flow, it will simply deny the return traffic.  If you see this right after resetting/enabling the firewall, just bounce the openvpn service (/etc/init.d/openvpn restart) to re-establish a connection that the firewall will remember.  You should also make sure that the “keepalive” directive on the server is set sufficiently low to keep things from timing out.  Mine is set to “30 120”.  Finally, if all else fails, you could switch everything to TCP or add a rule on the client that allows any traffic from UDP source port 1194.  I'm not listing that rule here because, while it looks simple enough, it actually lets an attacker who sends from source port 1194 reach every UDP port on your host.

Once I put a little mileage on this setup, I’ll probably also block ssh (port 22) at the host firewall.  I want to make sure I don’t lock myself out first though!
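
When I do, it should just be a matter of deleting that one rule, since the “allow from 10.1.0.0/16” rule above already covers SSH over the management network:

# drop the internet-facing ssh rule; ssh stays reachable over the vpn
ufw delete allow 22/tcp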

Posted in geeky | No Comments »

Geeking out about grammar March 3rd, 2011

Ok, so this is totally random.  In some documentation, I wrote the following sentence:

Response objects mirror an html5 location object with some additional attributes.

Then I realized that wasn’t entirely accurate and changed it to:

Response objects mirror a w3c location object with some additional attributes.

You don't need to know what that means.  All you need to know is that “html5” and “w3c” are non-words and are pronounced “h-t-m-l-5” and “w-3-c”.  As always, when changing the words, I instinctively changed the articles to match (changed “an html5” to “a w3c”).  It's been too many years since high school grammar to even know if this is correct, but changing the article sounds right to my ear.

This is bugging me because I can’t figure out why.

I always knew that some words go with some articles and some don’t but how do you assign the correct article to a random sequence of letters and numbers?  Our brains do it, but I can’t figure out the logic of why.  I finally narrowed it down to the first sound of the following word being the primary deciding factor but that’s where I’m going to stop and go get a drink.

This type of thing must really suck for non-native English speakers!

Posted in geeky | 2 Comments »

Nothing special; it just provides a canned interface to the operating system's pseudo-random number generator.

https://github.com/stellaeof/node_osrandom

Assuming you're familiar with GitHub, the mechanics were surprisingly easy:

npm help json
nano package.json
npm link .
npm publish .
Posted in geeky | No Comments »

How does a non-creative person generate map markers?  I pulled out an old tool, the Persistence of Vision Raytracer (POV-Ray), to construct some simple 3D scenes and render them to icons.  A little bit of ImageMagick and a Makefile, and I've got shiny new map markers.

That’s enough to get me going for now.  I’ll add more colors and other bits later as I need them.

Here’s the GitHub: https://github.com/stellaeof/cgmarkers/tree/master/mapmarkers
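
The pipeline itself is nothing fancy; per marker it boils down to a couple of commands roughly like these (file names here are made up for illustration):

# render the scene to a generously sized png (+I input, +O output, -D = no preview window)
povray +Imarker.pov +Omarker-64.png +W64 +H64 +A0.3 -D

# then let ImageMagick cut it down to the final marker size
convert marker-64.png -resize 24x24 marker-24.png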

I haven't played with POV-Ray in almost 20 years.  It's amazing that it's still alive and being improved on.  Back then, rendering something like this on a 12MHz PC would have taken hours, if not all day.  On modern gear, it's just a few seconds.

iphone safari scaling weirdness January 27th, 2011

So, the iPhone 4 has a high-resolution display: twice that of the original at 640 pixels wide vs 320.  For a variety of reasons, however, the CSS unit affectionately known as px stays fixed with respect to its physical size instead of being an exact representation of the actual dots on the screen.  This is actually all fine and good, because it means that the thing we think of as a “pixel” is still roughly the same size on the mobile device as it is on our computer monitor, meaning we can actually read the text as expected without a magnifying glass.  It's also how the W3C specs were designed, but it can be counter-intuitive if you've thought all this time that “px” = dot, which it does not, except on the large majority of the most common display devices: computer monitors.

The iPhone mostly hides all of this from you, presenting all coordinates on the device as being 320px wide vs the full resolution of 640 dots.  This should just be fine.  If you’ve got something higher resolution to display, either fractional units or imagery with a higher density of dots will be rescaled to preserve the distinction.

However, today I noticed one startling thing: If I use the Apple meta tags to force the webpage into native resolution, I get roughly twice the framerate for image manipulation.  Now, you are surely thinking, of course you do because you just removed a rescale operation from the pipeline.  But this is not the case.  The maps library is already rescaling all of the imagery for display and in theory, the iPhone should be just incorporating its own display scale settings into the transformation matrix, resulting in no further work.  However, if I run at native resolution, with image scaling being done by hand in JavaScript and asking the poor iPhone to juggle 4 times as many pixels, I get twice the speed.  The maps library precisely tracks my finger movements with no lag and feels completely native.

To be fair, I do not yet know whether this is actually a rendering issue or a problem with the touch events.  It almost seems like the touch events are being averaged too much when delivered to an element that has the native 2:1 scale factor applied to it.  I haven't been able to get precise measurements, but the event stream looks “coarser”, maybe containing only 1/8 the resolution of what I get when running at native scale.

I did verify that using the CSS zoom property produces the same speedup.  For example, zooming a parent element to 50% and then sizing its child to twice normal size creates a high-resolution region on the screen and the events delivered to that region are crisp and precise.

Even though my first experiments made me think that the graphics rendering was actually binding up, the fact that zooming and very large canvases are handled fine leads me to believe that there's plenty of render bandwidth.  If the issue is just touch-event averaging, then it means we can get much more precise touch events out of WebKit by targeting a zoomed div.  It really shouldn't make a difference, but I can imagine engineers at Apple, faced with this new native scaling, dividing everything by two, including the internal averaging that the touch processor does, resulting in a touch event stream on hi-res displays in WebKit that is much coarser than it should be.

Stay tuned… with a solution in hand, now I just need to find out why it works.

Introducing AssetServer January 16th, 2011

Asset Server Project Homepage

Managing all of the resources that go with modern web development drives me a little crazy.  Oftentimes you just want to do a little bit of pre-processing and assembly in a way that is guaranteed to be identical across your dev environment and production.  AssetServer handles this by providing a development environment for managing dynamic resources, as well as a CLI for snapshotting everything for static deployment.

I've been working on this for a while and there is still a lot of work to be done, but I finally got some of the docs written and am putting it out there for anyone who wants to have a look.

This type of info is available all over, but I'm writing it here so that I can find it again later.  This quickly sets up a private HTTP proxy over an SSH tunnel so that your web traffic looks like it's coming from your server.  I did this to sign up for Google Voice from Puerto Rico.  These steps were done on an Ubuntu Lucid server.

ssh your.free.world.server
sudo apt-get install squid3
sudo cp /etc/squid3/squid.conf /etc/squid3/squid.conf.orig
sudo nano /etc/squid3/squid.conf
# Find "http_port" and change it to 127.0.0.1:3128
# IMPORTANT! This only exposes the proxy on the localhost
sudo /etc/init.d/squid3 restart

Then to establish the ssh tunnel:

ssh -L 3128:127.0.0.1:3128 your.free.world.server

Configure your HTTP proxy settings to send traffic for all protocols to localhost, port 3128.
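
One quick way to sanity-check that traffic is really exiting from the remote end is to ask an IP echo service through the proxy; it should report the server's address rather than your local one (ifconfig.me is just one such service):

curl -x http://127.0.0.1:3128 http://ifconfig.me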

Posted in geeky | No Comments »

CNET 2010 Top 10 Redesigns

The work I spearheaded this year made a top 10 list. The year is complete! Happy New Year!

Posted in geeky | 1 Comment »