I have to vent for a minute because I’m just sick of this shit. I am one of Romney’s 53% that he is supposedly fighting for and I say the government should be taking more of my money. Why?

1. I believe in social safety nets. We should pay to make sure that our citizens are not bankrupted, made homeless, or left facing untimely death as a result of things that should be considered routine in the modern world, including the treatment of health problems, getting an education, recovering from a mental illness, being caught on the wrong end of a business cycle, or growing old.

2. Trickle-down is an overstated myth. We should be investing to go after the root causes of poverty, not hoping that a few extra dollars in my wallet will magically make more and better jobs appear. The only thing those few extra dollars have ever made materialize is shoes or fancy clothes, which more than likely sends that money overseas, where it is used to fix their problems with poverty. I would rather see more of it invested here at home with our own people to fix our fundamental problems.

3. We live in a first world country and there is absolutely no reason why we cannot seem to keep our basic infrastructure running and current.

4. The people I trust least in this world are the financiers and architects of our free market. They are not out for the public good, so why should they be the gatekeepers of it?

5. I’ve known a lot of people from a lot of walks of life, and I can honestly say that I have never met the freeloader that is the constant villain in neo-conservative rhetoric. I’m sure they are out there, but they are not the norm and are not the population that we should be turning our whole society upside down to eradicate through slow starvation and serfdom.

Sorry to vent. I am just so sick of this shit. We're all in this together, and proper investment of our money for the public good, through the government institutions that manage it, is essential to our prosperity; we should all be pulling for that. Some of us, for one reason or another, have more to pull with, and we should be pulling harder.

Romney, please stop trying to fight for me.

It was recently time to start applying the real skins to the Android app I'm building.  Android stretchable art assets are all encoded as 9-patch PNG files.  This basically means that the regions to stretch and the padding are specified by setting pixels in a 1-pixel border around the image.  The system then stretches and places content based on these guides.  There is plenty of documentation out there for background.  There is even an iOS library for using 9-patch assets in a similar fashion: https://github.com/tortuga22/Tortuga22-NinePatch.

The problem comes when it is time to create these bad boys.  There is a GUI tool that comes with the Android SDK for doing it.  You open one image at a time and add the guides.  If you edit the image or recut it later, you probably have to do this again.  If you have lots of images of the same dimensions but different colors, you'll be spending a lot of time with this little tool.

Since it is just a 1px border, you can also use your Photoshop expertise to draw the guides.  But this is tricky because Android needs the alpha channel to be just so, and if it is not, it will royally screw up your project when you try to build.  It's easy to mess up.

So I wrote a command line editor for these things.  It lets you convert normal PNGs to 9-patches (add the border), strip the border to get back to a normal PNG, set the guides, and use another 9-patch as a template to set guides on other images.  It made my life easier.  Enjoy.

https://github.com/stellaeof/9-patchedit

Read the GitHub README at the bottom of the page; it covers where to get the binary and shows example usage.

Posted in blog, geeky | 6 Comments »

android on desktop java April 29th, 2011

One of the things I've been excited about recently has been the ability to forklift the non-interactive core of my mobile app onto standard desktop Java.  Admittedly, I get excited about such things easily, but this is pretty cool.  The app I'm working on has a core that does some fairly non-trivial connection management and location processing, and I found that being able to run it on the desktop has definite bonuses.  For one, it is much easier to simulate connected clients doing interesting things on the desktop than when running a single instance in the emulator.  I even have a couple of "test" robot users that are always logged in and virtually interacting with the system, and my real app can talk to and interact with them.

This approach wouldn't work for every kind of app, but when it does fit, it's pretty powerful.  I'd much rather use my actual codebase to simulate robot users than have to write something from scratch, and I get the benefit of putting the real code through a lot more code-run-debug cycles than it would otherwise get on just the device.

So why aren't we seeing more of this talked about and done?  It took me about an hour to hack together enough of an android.jar to get all of the basics going.  My results are here: https://github.com/stellaeof/android-desktop-headless.  This was admittedly a quick and dirty solution, but if I had more time it wouldn't be all that hard to go through and mock/port a pretty large swath of the non-UI platform to run on a standard VM.

Posted in geeky | No Comments »

It's been a number of years since I've had my sysadmin hat on for real and I was happy today to find out that the world has changed for the better.  The last time I did this, I had to deal with setting up actual equipment, starting with an ethernet cable in one datacenter and two incoming T1s at another.  I have vague recollections and nightmares of trying to decipher the meaning of blink starts and esoteric T1 setup stuff while trying to balance the Cisco book open on top of a chair in the datacenter supporting a monitor and keyboard.  And of course the cell phone calls to SBC.  "Ok, try now…  No, it's still blinking."  And then once I got the routers online, I had to navigate setting up a basic PKI to establish the VPN between sites so that things could actually talk to each other.  Then came the servers, and the monitoring systems… I managed through it but there was trauma.  All in all, the experience was worthwhile in the same way I imagine basic training is to people going into the army.  I would *never* do it again, but it gave me a perspective and some skills that few software developer types have.

But now, thankfully, it's 2011 and things are easier.  Of course, everyone knows that just being able to log in to an account at your favorite cloud providers and provision instances is a godsend, but the thing that has always bugged me about provisioning small numbers of internet-connected servers is the feeling that I just walked out of my front door without any clothes on.  I say small numbers because if I were setting up an orderly install with any bulk, I would spend some time making sure it was all seamless, secure and accessible.  But more often these days, it's small numbers of servers for one task or another, and spending any time making things manageable isn't usually in the cards.  There are services out there that are more turnkey, but I've never really used them.

So, continuing my naked-outside analogy, the first thing I always do is grab a towel off the nearby clothes line (i.e. lock down SSH with public-key auth and a few other security bits).  This creates a single server that is more or less secure, but often I then want to connect to my private GIT repo, mirror files onto it or just generally manage it by connecting to administrative ports, whether it be for the database server, memcache, file sharing, jconsole, etc.  I don't want any of these ports flapping out on the internet, but I also get tired of doing one-off ssh tunnels to get at them from my workstation.

Today, I finally did spend a few hours working on this problem.  This post is somewhere between a howto and notes to myself should I need to do this again.  The tools I used were OpenVPN (with its bundled easy-rsa scripts), UFW for the host firewalls, and a permanent Linode VPS to act as the VPN concentrator.

I've built OpenVPN + iptables management backplanes before, but what really surprised me today was how easy this has gotten to be under recent Ubuntu installs.  I've got one permanent VPS that I run with Linode, and I used it as the VPN concentrator.  All I had to do was "apt-get install openvpn ufw" to get the bits.  Then on the concentrator, I took the following steps (a rough shell transcript follows the list):

  1. Expand the OpenVPN “easy-rsa” files (/usr/share/doc/openvpn/examples/easy-rsa) into a fresh GIT repo and run the commands to generate a server cert
  2. Copy the server cert and config file to /etc/openvpn (sample config is at /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz)
  3. /etc/init.d/openvpn restart
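
Roughly, in shell form.  This is a sketch, not a transcript: the easy-rsa 2.0 path and the generated key file names are assumptions based on the Ubuntu packages of the era, so adjust to what your install actually lays down.

# copy the easy-rsa scripts into a fresh git repo
cp -a /usr/share/doc/openvpn/examples/easy-rsa/2.0 ~/easy-rsa
cd ~/easy-rsa && git init && git add -A && git commit -m "stock easy-rsa"

# generate the CA, server cert and DH params
source ./vars && ./clean-all && ./build-ca && ./build-key-server server && ./build-dh

# install the sample config plus the server credentials and restart
zcat /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz > /etc/openvpn/server.conf
cp keys/ca.crt keys/server.crt keys/server.key keys/dh*.pem /etc/openvpn/
/etc/init.d/openvpn restart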

Then I added the following “mkclient” script to my easy-rsa git repo:

#!/bin/bash
# mkclient: generate a client cert/key (if needed) and bundle it with the
# shared client.conf under clients/<name>/
die() {
	echo "$*"
	exit 1
}

td=$(dirname $0)
name="$1"
if [ -z "$name" ]; then
	echo "Need client name"
	exit 1
fi

cdir=$td/clients/$name
mkdir -p $cdir

if ! [ -f $td/keys/$name.key ]; then
	# Generate
	source $td/vars
	$td/build-key $name
fi

cp $td/client.conf $cdir
cp $td/keys/ca.crt $cdir || die "Could not find ca.crt"
cp $td/keys/$name.crt $cdir/client.crt || die "Could not find $name.crt"
cp $td/keys/$name.key $cdir/client.key || die "Could not find $name.key"

I also put a client.conf (example at /usr/share/doc/openvpn/examples/sample-config-files/client.conf) in the repo directory; it serves as the default configuration file for any connecting client. Then setting up a client is just a matter of:

  1. ./mkclient myclient
  2. scp clients/myclient/* root@myclient:/etc/openvpn
  3. ssh root@myclient /etc/init.d/openvpn restart

If everything works, your client should pop onto the network.  I treated my workstation as a client as well, but you can use any OpenVPN GUI to do the same thing given a conf file and keys.

I've configured OpenVPN before so I didn't really follow any instructions on modifying the confs.  The basic process is to take the stock server.conf example and client.conf example and make the following changes (a condensed server.conf excerpt follows the list):

  • Keep the bridging bits commented out.  You want a routed network.
  • Keep the default to use UDP
  • Change the “server” directive to be a private subnet of your own choosing
  • Make sure that “ifconfig-pool-persist ipp.txt” is not commented.  This will make your clients keep the same IP addresses over time.
  • Uncomment “client-to-client” which allows all clients to see each other as well as see the server (this differs from a typical road-warrior config because our “clients” are mostly servers that we are trying to create a management network for)
  • Potentially tweak keepalive.  Keep in mind, though, that if you have firewall filtering on UDP 1194 traffic, it will most commonly be stateful, and if your keepalive is longer than the firewall timeout, you will start to drop packets
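
Condensed, the changes above boil down to a handful of server.conf lines.  The subnet shown is just the management network I happened to pick (it matches the ufw rule further down), and the keepalive is the value I mention later; substitute your own.

port 1194
proto udp
dev tun
server 10.1.0.0 255.255.0.0
ifconfig-pool-persist ipp.txt
client-to-client
keepalive 30 120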

Ok, so far so good.

What I was after from this point was to have host based firewall configs that are default deny with holes punched through for internet accessible services.  But I want the management OpenVPN network to be able to access anything on the host.  This would be the security model of a hard candy shell with a soft gooey center.  I wouldn’t necessarily recommend it as-is for production installs, but for dev/test systems or non-sensitive prod systems, you can’t beat how easy it is to get at everything.  If anyone gets onto any one of the hosts, they are going to be able to access privileged ports on any of the hosts.  This is more or less just like any office LAN.

This is where my memory was telling me to buckle down and get cozy with some obtuse iptables configs, but things have changed.  Ubuntu now comes with UFW, which stands for Uncomplicated Firewall.  It's built to make administering host-based firewalls brain-dead simple.  Perfect.  To get it going on a host, run "ufw --force enable && ufw default allow".  Note that I combined enabling the firewall and setting its default policy to allow into one command so that I don't run the risk of dropping my ssh connection before I've got my rules in place.

As an example for one of my hosts, I then ran the following commands:

# Allow openvpn (this really only applies to the server)
ufw allow 1194/udp
ufw allow 1194/tcp

# Allow ssh
ufw allow 22/tcp

# Allow web
ufw allow 80/tcp
ufw allow 443/tcp

# Allow all from private management network (from OpenVPN conf)
ufw allow from 10.1.0.0/16

# Change default to deny everything else
ufw default deny

And that’s it. You can see the config by running “ufw status verbose” or its bigger brother “iptables -L”. You should also do some poking at it from the internet side and the vpn side to ensure that your private ports really are private.

The only gotcha I experienced was already alluded to: the firewall config was blocking some OpenVPN traffic.  In order to diagnose traffic drop issues, turn on logging with "ufw logging on" and run "tail -f /var/log/messages".  An example of the problem is below:

Apr 28 17:21:27 client kernel: [UFW BLOCK] IN=eth0 OUT= MAC=xxxxxx SRC={vpnserverip} DST={clientip} LEN=129 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=1194 DPT=43152 LEN=109

Here we see that the kernel is blocking UDP packets from the vpn server to the client originating from port 1194.  This is because OpenVPN on the client initiates the vpn "connection" by sending a UDP packet to port 1194 on the vpn server from a randomly assigned port on the client.  If the firewall rules are flushed after this exchange, or if the firewall's stateful packet filter doesn't remember the translation, it's just going to deny the return traffic.  If you see this right after resetting/enabling the firewall, just bounce the openvpn service (/etc/init.d/openvpn restart) to have it reestablish a connection that the firewall will remember.  You should also make sure that the "keepalive" directive on the server is set sufficiently low to keep things from timing out.  Mine is set at "30 120".  Finally, if all else fails, you could switch everything to tcp or add a rule on the client that allows any traffic from udp port 1194.  I'm not listing that rule here because while it looks simple enough, in actuality it opens all udp ports on your host to an attacker.

Once I put a little mileage on this setup, I’ll probably also block ssh (port 22) at the host firewall.  I want to make sure I don’t lock myself out first though!

Posted in geeky | No Comments »

I finished most of the basics on nanomaps-droid and got the docs and binaries published out on the project GitHub page. That was more fun than I expected. I had to do a lot of crawling around in the guts of how Android does drawing and layout to design and implement it correctly and that kind of understanding is always worth its weight in gold.

As a public service announcement, another option for maps on Android is OSM Droid.  A lot of people are using it but I had some different things in mind and decided to strike out on my own.

Posted in nanomaps | No Comments »

nanomaps on android April 4th, 2011

I needed a mapping library for android and got frustrated with the options available. So, I put my mapmaker's hat back on for a couple of days and produced this: https://github.com/stellaeof/nanomaps-droid

It's just a no-frills mapping library modeled after nanomaps for JavaScript. Not quite done yet, but it seems to be speedy enough. I spent a fair amount of time today implementing HTTP pipelining, which gave a several-fold speed increase in some cases over more naive ways of requesting the tiles.

I can attest that this does get easier the more you do it. This one has taken about two working days from pressing “Create Project” in Eclipse to having something working enough for me to use in the app I’m building.

Posted in nanomaps | No Comments »

Bulletproof Node.js Coding March 21st, 2011

I've been actively doing node.js coding for about 4 months now. I'm working with a couple of others on a suite of mobile apps, and my time has been split between building the Android client (one of my partners drew the long straw and got iOS for this round) and building out the node.js based backend. It's currently a Node.js + CouchDB + Redis server that combines user auth/account management with real-time signalling between connected clients. The core component, the "sessionserver", exposes no HTML UI and is really just a combination of JSON-based services, background agents and client signalling shims that speak WebSockets and HTTP long polling.

The details of what I'm building aren't very important to the topic I want to talk about here, but I mention them because, based on my experience, the sessionserver represents a pretty core, idiomatic use of node.js. This layer contains no view engine, no HTML templating, no code shared with clients, etc. It is a raw communication-crunching engine, a fairly pure node.js use case that may be worth some study. Throughout this post, I'll be including excerpts from this project instead of contrived examples as much as possible.

When I first started doing node.js coding, my first thought was "Wow! This is insanely powerful but it is really easy to slice your toes off!" It turned out that was my second, third, and 150th thought as well! Right around the time that I started the third refactoring/rewrite of the sessionserver, I felt like I had gotten a feel for how to write bulletproof code, and I thought it would be worth sharing some of the style and conventions I came to adopt. (As an aside, when learning a fundamentally new and different technology, never expect your first or second attempt to be any good straight out of the gate.) It was actually kind of funny: pretty early on, I found that I was crashing my process so much that I wired it up to play a loud door-slamming sound on abnormal exit. I heard that sound enough that it got stuck in my head, and I found myself humming a melody I'd made up to the steady beat of the slamming door. Seriously, it was that bad.

I've long been a believer that no matter what the language or environment, developing a bulletproof coding style and conventions for how you approach the code is one of the most critical parts of the learning process. We all know there are an infinite number of ways to write the same chunk of logic, and after a fashion, many of them can even be considered good and reasonable. In my opinion, however, the best styles are those that, when followed, make it difficult or impossible to code most common types of bugs. Some of the most powerful features of a language or environment can also be the most deadly when misapplied. A bulletproof style balances these features so that you get all of the power but it is difficult to abuse. In addition, dangerous, high-octane areas are properly cordoned off as such, and the style fills in for some of the inherent weaknesses of the language.

Since we're talking about JavaScript and single-process asynchronous code meant to serve thousands of connected clients at a time, any little tricks that can make us more reliable at producing good, working code are a huge bonus. I'm not going to spend much time on general JavaScript coding style. Instead, I'm going to focus on the conventions I applied to tame the callback-based asynchronous world that is node.js.

There are several “macro” solutions to writing more robust node.js code:

  • CoffeeScript: Defines a new language that compiles down to JavaScript.  It's pretty neat and fills in a lot of the gaps at the language level.
  • node-fibers: Adds the concept of “fibers” to node so that asynchronous code can be written in an imperative style.

In addition, I’ve come across some library level patterns that are also good if applied in the right context:

  • Promises: There are various promise libraries floating around.  While it was before my time, there was promise support in the very early days of node core.  Now it's just a pattern people apply if they want to.
  • Tim Caswell’s “Do” library

All of these are quite good and worth looking into.  I generally prefer solutions that work with the toolset instead of trying to replace it, and the library solutions certainly fit the bill.  Be careful about picking your core metaphors, however – they will stick with you for the life of your software.

My goal in writing these tidbits down is to share what I've learned and to stimulate a conversation about good node.js programming practices.  If you agree or disagree with anything I present, either leave some comments or start a discussion on the node.js mailing list.  We all benefit from talking about this stuff more.

Here are the lessons I've taken away from my odyssey with node.js thus far:

  1. Return on the last statement
  2. Put your callbacks in sequence
  3. Define a respond function for complex logic
  4. Centralize your exception handling
  5. Embrace a functional coding style with futures or promises
  6. Differentiate between system interfaces and user interfaces
  7. Examine dependencies closely
  8. Prefer copying simple, idiomatic code locally
  9. Read the source but code to the docs
  10. Write good tests

1. Return on the Last Statement

This one’s easy but it happens everywhere.  How many times have you done something like this:
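
Something along these lines (the db.get call and the function name are invented for illustration):

function loadUserName(db, userId, callback) {
	// Fetch the user document and hand back just the name
	db.get(userId, function(err, doc) {
		if (err) {
			callback(err);
		}
		callback(null, doc.name);
	});
}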

The problem with this code is that on error, you call your callback with the error and then fall through and call it again with the result (most likely null/undefined). This is almost always a violation of the declared API and will cause all manner of badness to happen on error. Making it worse, error paths are notoriously under-tested. You will almost certainly be hearing the door slamming in response to this one. While it's easy to spot in a simple function like this, many real-world cases are not so obvious. You could choose to just add a "return;" after "callback(err)", but there is a better way if you can get your eye used to seeing it.
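
The same sketch with the fix applied:

function loadUserName(db, userId, callback) {
	db.get(userId, function(err, doc) {
		if (err) return callback(err);
		// The return is for control flow, not for the value
		return callback(null, doc.name);
	});
}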

Here, I take advantage of the fact that in JavaScript we can return anything (even undefined) and I wrap the terminating action and the intent to leave the function into one statement. I've found that once my eye gets used to seeing a "return …" as the last line in any control flow situation, it is much easier for me to visually pick out logic errors like the one above. To make this bulletproof, I've gotten into the habit that if my function has any kind of control flow, I make the last statement of every branch a return statement that returns whatever it is doing. The returned value is usually garbage, but the point is to make it appear visually as a "we're done here" so that the next time you don't see that pattern, alarms go off in your head.

Look around on github for 20 minutes. I bet you can find instances of this class of error in places that will really make you worried (although you may find fewer – I have emailed authors when I’ve found this pattern after reviewing their code).

2. Put Your Callbacks in Sequence

If LISP stands for Lost-In-Stupid-Parentheses, then node should properly have been an acronym for Buried-In-Incomprehensible-Callbacks. BIIC isn't as cool as NODE, though, so I imagine we should just start fixing the problem rather than renaming anything. It's not just a problem of visual clutter — deeply nested functions produce brittle code with hard-to-find errors. There are other code organization techniques further down, but being able to un-nest your functions in a readable way is core to any functional programming. For this tip, we'll take a quick look back at programming with Scheme, which is like the mother that JavaScript was separated from at birth. She got her father's braces and her mother's lexical scoping. Poor thing. No wonder she's always in therapy.

When programming Scheme, you start with a simple function. Then you need to do something recursive, so you just code another function inline, and then… sound familiar? After nesting about one level deep in Scheme, you almost always end up refactoring your outer function into a let-expression that takes all of those nested functions and puts them in sequence with names. I've started doing the equivalent thing in JavaScript and have found that my functions are a lot easier to read and manipulate. For this example, I'm digging out something a little older, since most of my newer stuff uses futures and doesn't exactly match the traditional node callback pattern:
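
In sketch form (the names and the db.get/db.put calls are placeholders, but the structure is the point):

function updateProfile(db, userId, attrs, callback) {
	// Basic setup, then immediately hand off to the first async step
	var startTime = Date.now();
	return db.get(userId, handleLoad);

	function handleLoad(err, doc) {
		if (err) return callback(err);
		for (var k in attrs) doc[k] = attrs[k];
		return db.put(userId, doc, handleSave);
	}

	function handleSave(err) {
		if (err) return callback(err);
		return callback(null, Date.now() - startTime);
	}
}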

The key thing to note is the first few lines of the function. I do some basic setup and then immediately return by calling an asynchronous function that takes as a callback a function I have defined further below. This uses a trick of JavaScript that may be in bad taste, but I like it: any functions defined by the form "function name() { }" (as opposed to with a var declaration) are available immediately upon entering the containing function (i.e. control does not have to pass through them to define them). This is just some stylistic sugar that lets you keep your code completely linear: the function starts at the top and proceeds through callbacks in a roughly downward motion. Wherever you actually position your callbacks, though, the key point is that your code will be much more readable and maintainable if you stop using inline callbacks for non-trivial flows and use named callbacks defined in the outer-level function instead. I generally stick to the rule that if there is any control flow, recursion or invocation of other asynchronous functions in a callback, it needs to be broken out into its own named callback. You will also find that once this is done, it becomes trivial to introduce asynchronous recursion to deal with lists and such in a readable fashion.

3. Define a respond function for complex logic

If you have a standard node callback-based function with more than two ways to complete (one for errors and one for successful results), consider defining a secondary "respond" function to guard against hard-to-find situations where your mild-mannered control logic completes more than once.
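
A sketch of the shape (the credential checking and the db call are placeholders):

function authenticate(db, credentials, callback) {
	// Every exit goes through respond(), which can only fire once
	function respond(err, user) {
		if (!callback) return;
		var cb = callback;
		callback = null;
		return cb(err, user);
	}

	if (!credentials || !credentials.name) {
		return respond(new Error('Missing credentials'));
	}

	return db.get(credentials.name, function(err, account) {
		if (err) return respond(err);
		if (!account || account.password !== credentials.password) {
			return respond(new Error('Bad user or password'));
		}
		return respond(null, account);
	});
}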

I don't hold this one up as a bastion of good code. In particular, I should have broken out more named callbacks to distinguish between the synchronous and asynchronous parts of the flow. The key thing to note, though, is that there are multiple "ways out" of the function where the callback can be invoked. Instead of adding logic everywhere to determine whether we've already errored, already responded, or whether the callback is even defined, I use an explicit local respond(…) function which invokes the callback with the results and then clears it so it won't be invoked again. An even better solution would have been to add a warning if it is invoked more than once.

The rule here is in the same vein as those that come before. If your function is simple, keep it simple and don’t add an explicit respond function. However, if the control flow is getting a little dicey (and itself cannot be simplified), protect yourself by making the callback an explicit local respond function.

4. Centralize your exception handling

Functional programming in node is a lot of fun, expressive and compact except for one part: exception handling. I don’t really see this talked about that much, but in my opinion the lack of a coherent way of dealing with errors and exceptions is node’s biggest weakness. Node-fibers takes the approach of switching to a completely imperative style to achieve this, but I prefer to stay with a functional style and define a coherent exception handling structure.

I could write an entire post just on this topic (and maybe will one day) but I'll just cover the high points here. The problem with error codes (which node core is based on) is that for higher level logic, the code that detects the error (i.e. the first responder) is almost invariably not in the right position to determine what to do about the error condition. This is where try/catch structures in threaded systems make more sense. Someone up the stack will typically know what to do about the error.

The problem with an asynchronous system like node, however, is that every time one of your callbacks or EventEmitter listeners gets invoked, it is often either at the very top level of the event loop or being called by some foreign code that is different from the code that attached the listener (the thing that attached the listener is probably in a better position to deal with the failure than whatever random execution context you ended up in). If you throw an exception in these contexts, it is a good bet that the program will terminate. Since JavaScript has a pretty impressive array of ways that mild-mannered-looking statements can throw runtime errors, this problem is worse than in an environment like C, where if I'm careful with my pointers and don't divide by zero I'm ok. Yes, unit tests help, but it's kind of like trying to plug all of the holes in a strainer when what you really want is a bowl.

For this tip, you are going to need library support of some kind. What is needed is a way to define a Block with an Error Handler and be able to tear this off and take it with you when your callbacks go into foreign territory. Then when they raise an exception, the exception gets routed back to the Block that was in effect when the callback was sent out to do its master's bidding. I found that most of the solutions out there munged Futures, Promises, Fibers, etc. together with this simple need to define an exception-handling Block. The following snippet defines a Block class that fulfills what I'm looking for:
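
A minimal sketch of the idea (not the exact class, but it shows the begin/rescue/guard surface area used in the examples that follow):

function Block(rescue) {
	this.rescue = rescue;
}

// The block currently "in effect" for the code running right now
Block.current = null;

// Run body() with rescue() established as the handler for anything it throws
Block.begin = function(body, rescue) {
	var block = new Block(rescue);
	var previous = Block.current;
	Block.current = block;
	try {
		body();
	} catch (e) {
		block.raise(e);
	} finally {
		Block.current = previous;
	}
};

// Wrap a callback so that the block in effect *now* is re-established
// (and catches anything thrown) whenever the callback eventually fires
Block.guard = function(fn) {
	var block = Block.current;
	return function() {
		var previous = Block.current;
		Block.current = block;
		try {
			return fn.apply(this, arguments);
		} catch (e) {
			if (block) block.raise(e);
			else throw e;
		} finally {
			Block.current = previous;
		}
	};
};

Block.prototype.raise = function(err) {
	this.rescue(err);
};

// Returns a function(err) suitable for 'error' events and callback(err) slots
Block.prototype.errorHandler = function() {
	var block = this;
	return function(err) {
		if (err) block.raise(err);
	};
};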

(I chose the Block/Rescue terminology not because I really have any fondness for Ruby, but because it is an implementation of scoped exception handling that uses words that are not reserved in JavaScript.) Here is an example of using a Block to centralize exception handling. In this case, it is a connect-based middleware, and the provided "next" function from the framework is a perfect exception handler: it returns an appropriate error to the HTTP client. If we had other cleanup that needed to be done, we would just define our own function(err) {} callback instead and then invoke next(err) when done. You could also use inline functions in the call to Block.begin (making it resemble a try/catch visually), but I chose to use named callbacks here for readability.
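
For illustration, with an invented sessionStore.lookup() that returns a Future (the Future class is sketched in section 5):

function requireSession(req, res, next) {
	Block.begin(process, next);

	function process() {
		sessionStore.lookup(req.headers['x-session-id']).force(function(session) {
			// Anything thrown here (or in process() above) is routed to next(err)
			req.session = session;
			next();
		});
	}
}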

This example also uses a Future class which I'll cover in a bit. The key thing to keep in mind is that any exception thrown by code in or called by the process() function will be routed to the rescue handler (in this case next). In order to get a callback into the block scope, it should be wrapped by calling Block.guard(originalFunction). This captures the current Block at the time that Block.guard is called and reestablishes it for the duration of any call to originalFunction. The Future class does this internally in the force(…) call, which allows me to rest assured that anything I place as the target of a force(…) will have its exceptions routed appropriately. More on that later.

Here’s an example of explicitly capturing the block in your callbacks. In this case, we are invoking an HTTP request, accumulating the text results and resolving a Future with a constructed CouchResponse object (which does some parsing and other things that could conceivably throw an exception).
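
A sketch along those lines; CouchResponse and the surrounding dbRequest() wrapper are stand-ins for the real couch client code:

var http = require('http');

function dbRequest(options, body) {
	var future = new Future();
	var block = Block.current;   // explicitly capture the block for later

	var req = http.request(options, function(res) {
		var text = '';
		res.setEncoding('utf8');
		res.on('data', function(chunk) {
			text += chunk;   // unguarded: simple enough to be covered by tests
		});
		res.on('end', Block.guard(function() {
			// CouchResponse does JSON.parse and friends, so guard this one
			future.resolve(new CouchResponse(res, text));
		}));
		res.on('error', block.errorHandler());
	});

	req.on('error', block.errorHandler());
	if (body) req.write(body);
	req.end();

	return future;
}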

There are still a couple of places in this example where an unexpected exception would crash the process:

  • directly within the “function(res)” callback
  • in the ‘data’ callback

I could have wrapped Block.guard statements around these bits as well but chose not to because it costs a little extra and I am 100% confident that a failure here is a critical breakage and is completely covered by unit tests. The ‘end’ handler, however, does some stuff that I can’t immediately see (and I happen to know it contains a JSON.parse call) so I protect it with guard. Finally, I use the block’s standard errorHandler() callbacks to catch request and response error events. I’ve found that this simple pattern of centralizing exception handling makes it very easy to visually understand where exceptions are going and route them at the levels where it makes sense. You can also nest calls to Block.begin. This is useful in framework code that needs to go off and do some other work in response to something the Block initiated but not intrinsically owned by it.

5. Embrace a functional coding style with futures or promises

I actually like node's callback style a lot for low-level stuff — you know, for those times when you feel like coding in C is the right thing to be doing and you're thankful that someone lets you operate at that level but without malloc/free. For higher level logic/abstractions, though, I prefer something with a bit more functional heritage. A lot of people have used Promises, which are just a construct for converting a callback into a return value. You return a Promise instead of invoking a Callback, and then you can ask the promise to give you its result. Futures are similar as far as metaphors go, and I prefer them. A Future has two intrinsic operations: resolve and force. Resolve sets the value on the future, and force either gets the value if it is immediately available or gives it to you later when it is available. Given the Block based exception handling I illustrated above, my Future class doesn't really need to think much about capturing and propagating exceptions, so it's pretty simple. It does build on the Block by making sure to call Block.guard(…) to wrap any functions that are bound to be invoked as callbacks later by force(…). Here's the class:
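
A minimal sketch along the lines described above (not the exact class):

function Future() {
	this.resolved = false;
	this.waiters = [];
}

// Set the value and hand it to anyone already waiting
Future.prototype.resolve = function(value) {
	this.resolved = true;
	this.value = value;
	var waiters = this.waiters;
	this.waiters = [];
	waiters.forEach(function(waiter) {
		waiter(value);
	});
	return this;
};

// Get the value now if we have it, or later when it arrives.  Deferred
// callbacks are wrapped with Block.guard so the caller's Block travels with them.
Future.prototype.force = function(callback) {
	if (this.resolved) {
		callback(this.value);
	} else {
		this.waiters.push(Block.guard(callback));
	}
	return this;
};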

The key advice here is not necessarily to use my Future class, but to use someone’s Future or Promise implementation. I like mine because it is so brain-dead simple and integrates with Block.guard so that when I’m scanning my code and I see a function being passed to a force() call, I can mentally tell myself “this function is safe for exceptions to be thrown from.”

There are examples of using the Future in the previous sections.

6. Differentiate between system interfaces and user interfaces

This one is more philosophical advice about using the right tool for the job. Some things are best done with node's callback(err)/EventEmitter machinery, and sometimes it's better to use a higher level abstraction like a Future/Block. Don't be afraid to use both. I tend to use the lower-level machinery for stuff that is interfacing with the system. For some reason it feels right to me to be passing around error codes in these situations, but this probably has more to do with the time I spent in C hacking on the Linux kernel than anything else. If you're writing code to be consumed outside of your project, make sure it speaks the callback(err)/EventEmitter pattern, since that is the lowest common denominator that every node programmer on the planet is going to intrinsically understand.

7. Examine dependencies closely

You can get a little cavalier in threaded environments like Java, Ruby or Python when it comes to relying on third party bits. After all, you can always just catch Throwable, right? Remember that in Node, everything you put into your project and call has the very real potential to kill you. Don't just run the tests and assume a happy future. Look at the code and make a critical evaluation. If you get the feeling that it's playing fast and loose with control flow, it probably is — and it might just kill you. Also, and I mean this with all respect to the node community, do not rely on the popularity of a module to assume that others have given its internals a critical evaluation. Remember too that most of the node modules floating around on GitHub started as internal bits for someone else's project, and they have built-in assumptions to those ends.

I don't mean to be too melodramatic here, but the point is simple: pulling in an external dependency is a lot more like inviting someone into your bed than into your living room. There are lots of great things that can come from it, but just be safe about it.

8. Prefer copying simple, idiomatic code locally

This runs counter to most of my experience in other environments and it might not hold up over time as the ecosystem evolves. For now, however, I generally prefer to take simple external dependencies, copy them locally and modify them rather than trying to share. There's just no reason why we need one "copyObject", "clone", etc. to rule them all. Find one that does what you want, make sure you understand it, stick it in your project and use it with a local require (require('./myCoolObjectCopy')).

9. Read the source but code to the docs

The great thing about node is that the code is flayed open for all to see. And with most of the modules out on GitHub, it's just a few clicks before you are reading anything. Just remember that all of those interesting bits in the source code are not necessarily part of the public API. Rely on the docs for what you are supposed to be calling. If you see something internally that you think should be part of the public API, email the appropriate people and ask/make a suggestion.

10. Write good tests

Really: however optional tests may have felt in other environments, they are not optional here. There are quite a few testing frameworks about, but I tend to use nodeunit. Here's a simple one to get you started:
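
A minimal test module along these lines (the file name and the second test are just placeholders):

// test/basic.js
exports['test for smoke'] = function(test) {
	// Does nothing; if a parse error or bad require breaks the module,
	// this test simply never shows up as passing
	test.done();
};

exports['sanity'] = function(test) {
	test.expect(1);
	test.equal(1 + 1, 2);
	test.done();
};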

For some reason, I always include a 'test for smoke' that does nothing as my first test. If there's a parse error or some other setup problem, it's pretty obvious on the console because I'll see the error, and the line that says "test for smoke" ran successfully won't be there.

Here’s my runtests.js file. I just customize this slightly (to add require paths, etc) and drop it into any project.
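
A sketch of such a driver (adjust the paths to taste):

// runtests.js - drop into the project root and point it at the test directory
var reporter = require('nodeunit').reporters.default;

// Project-specific require path tweaks would go here

process.chdir(__dirname);
reporter.run(['test']);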

Posted in node.js | 17 Comments »

Geeking out about grammar March 3rd, 2011

Ok, so this is totally random.  In some documentation, I wrote the following sentence:

Response objects mirror an html5 location object with some additional attributes.

Then I realized that wasn’t entirely accurate and changed it to:

Response objects mirror a w3c location object with some additional attributes.

You don't need to know what that means.  All you need to know is that "html5" and "w3c" are non-words and are pronounced "h-t-m-l-5" and "w-3-c".  As always, when changing the words, I instinctively changed the articles to match (changed "an html5" to "a w3c").  It's been too many years since high school grammar to even know if this is correct, but changing the article sounds right to my ear.

This is bugging me because I can’t figure out why.

I always knew that some words go with some articles and some don't, but how do you assign the correct article to a random sequence of letters and numbers?  Our brains do it, but I can't figure out the logic of why.  I finally narrowed it down to the first sound of the following word being the primary deciding factor, but that's where I'm going to stop and go get a drink.

This type of thing must really suck for non-native English speakers!

Posted in geeky | 2 Comments »

linguine and clam sauce February 15th, 2011

This one came from my dad.  He used to make it at the fire station when he worked as a paramedic, and he perfected it over quite a few years.  I've adapted it for those of us who need more explicit directions.

Ingredients:

  • 1 lb Linguine
  • 3 cans of minced clams
  • 1 head of fresh parsley
  • 8-10 cloves of garlic
  • Olive oil
  • Italian seasoning
  • Black pepper
  • Parmesan cheese (fresh grated is best, powdered is ok)

Directions:

Cut the parsley from the heads and finely chop it (a food processor will tend to chop it too much).  Peel 8-10 cloves of garlic and finely chop them.  Coat the bottom of a large skillet with a generous layer of olive oil (enough so that there is some standing oil).  Drain the juice from each of the cans of clams into the pan.  Add the chopped parsley and garlic.  Add several shakes of Italian seasoning and a couple of good turns of the black pepper grinder.  Set the pan on a large burner at approximately 20-25% power and let it heat up and simmer while starting the pasta water boiling (if using a gas stove, decrease the intensity of the heat by stacking one burner grate on top of another – the clams and garlic can burn easily, so stir often).  Add a few shakes of olive oil and Italian seasoning to the pasta water.  The sauce has heated enough when it starts to change to a light milky color and bubbles slightly.  If you started the pasta water at about the same time, this will be about the time the water starts to boil.  At this point, add all of the cans of clams to the sauce and put the linguine into the pot of water.  Boil the pasta for about 8 minutes, stirring the clam sauce frequently.  When the pasta is tender, everything is done cooking.

Scoop the pasta out into a strainer and then pour the remaining stock slowly over the pasta, allowing the oil and seasoning to stick to the pasta.  Transfer the pasta to a large bowl.  Add a generous amount of parmesan cheese (this will help thicken the sauce).  Pour the sauce over the pasta and toss it thoroughly.  Add some parmesan cheese and toss some more.  Serve on plates.  Use a spoon to scoop some of the good parts of the sauce from the bottom onto each serving.

Notes:

  • I've made this with pre-chopped garlic and parsley and it's just not the same.  I always chop my own parsley and garlic now for this recipe.
  • Goes well with a crisp Sauvignon Blanc white wine.
  • 1 lb of linguine will feed 3-5 people
  • It took a while to get the timing right.  Time it all around the pasta.  Turn the heat on to the sauce and the pasta water at the same time.  The sauce simmers (prior to adding the clams) for about the amount of time it takes to get the water boiling.  Once the water is boiling add the pasta to the pot and the clams to the sauce.  Both will be done after about 8 minutes.

Enjoy.

Nothing special; it just provides a canned interface to the operating system's pseudo-random number generator.

https://github.com/stellaeof/node_osrandom

Assuming you're familiar with GitHub, the mechanics were surprisingly easy:

npm help json
nano package.json
npm link .
npm publish .
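
For reference, a minimal package.json along these lines is enough to publish something this small (the version and main values below are illustrative; "npm help json" describes the full set of fields):

{
  "name": "node_osrandom",
  "version": "0.1.0",
  "description": "Canned interface to the OS pseudo-random number generator",
  "main": "./lib/osrandom"
}
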
Posted in geeky | No Comments »