Falsehoods Programmers Believe About Names


John Graham-Cumming wrote an article today complaining about how a computer system he was working with described his last name as having invalid characters.  It of course does not, because anything someone tells you is their name is — by definition — an appropriate identifier for them.  John was understandably vexed about this situation, and he has every right to be, because names are central to our identities, virtually by definition.

I have lived in Japan for several years, programming in a professional capacity, and I have broken many systems by the simple expedient of being introduced into them.  (Most people call me Patrick McKenzie, but I’ll acknowledge as correct any of six different “full” names, and many systems I deal with will accept precisely none of them.)  Similarly, I’ve worked with Big Freaking Enterprises which, by dint of doing business globally, have theoretically designed their systems to allow all names to work in them.  I have never seen a computer system which handles names properly and doubt one exists, anywhere.

So, as a public service, I’m going to list assumptions your systems probably make about names.  All of these assumptions are wrong.  Try to make fewer of them next time you write a system which touches names.

  1. People have exactly one canonical full name.
  2. People have exactly one full name which they go by.
  3. People have, at this point in time, exactly one canonical full name.
  4. People have, at this point in time, one full name which they go by.
  5. People have exactly N names, for any value of N.
  6. People’s names fit within a certain defined amount of space.
  7. People’s names do not change.
  8. People’s names change, but only at a certain enumerated set of events.
  9. People’s names are written in ASCII.
  10. People’s names are written in any single character set.
  11. People’s names are all mapped in Unicode code points.
  12. People’s names are case sensitive.
  13. People’s names are case insensitive.
  14. People’s names sometimes have prefixes or suffixes, but you can safely ignore those.
  15. People’s names do not contain numbers.
  16. People’s names are not written in ALL CAPS.
  17. People’s names are not written in all lower case letters.
  18. People’s names have an order to them.  Picking any ordering scheme will automatically result in consistent ordering among all systems, as long as both use the same ordering scheme for the same name.
  19. People’s first names and last names are, by necessity, different.
  20. People have last names, family names, or anything else which is shared by folks recognized as their relatives.
  21. People’s names are globally unique.
  22. People’s names are almost globally unique.
  23. Alright alright but surely people’s names are diverse enough such that no million people share the same name.
  24. My system will never have to deal with names from China.
  25. Or Japan.
  26. Or Korea.
  27. Or Ireland, the United Kingdom, the United States, Spain, Mexico, Brazil, Peru, Russia, Sweden, Botswana, South Africa, Trinidad, Haiti, France, or the Klingon Empire, all of which have “weird” naming schemes in common use.
  28. That Klingon Empire thing was a joke, right?
  29. Confound your cultural relativism!  People in my society, at least, agree on one commonly accepted standard for names.
  30. There exists an algorithm which transforms names and can be reversed losslessly.  (Yes, yes, you can do it if your algorithm returns the input.  You get a gold star.)
  31. I can safely assume that this dictionary of bad words contains no people’s names in it.
  32. People’s names are assigned at birth.
  33. OK, maybe not at birth, but at least pretty close to birth.
  34. Alright, alright, within a year or so of birth.
  35. Five years?
  36. You’re kidding me, right?
  37. Two different systems containing data about the same person will use the same name for that person.
  38. Two different data entry operators, given a person’s name, will by necessity enter bitwise equivalent strings on any single system, if the system is well-designed.
  39. People whose names break my system are weird outliers.  They should have had solid, acceptable names, like 田中太郎.
  40. People have names.

This list is by no means exhaustive.  If you need examples of real names which disprove any of the above commonly held misconceptions, I will happily introduce you to several.  Feel free to add other misconceptions in the comments, and refer people to this post the next time they suggest a genius idea like a database table with a first_name and last_name column.

Detecting Bots with Javascript for Better A/B Test Results

I am a big believer in not spending time creating features until you know customers actually need them.  This goes the same for OSS projects: there is no point in overly complicating things until “customers” tell you they need to be a little more complicated.  (Helpfully, here some customers are actually capable of helping themselves… well, OK, it is theoretically possible at any rate.)

Some months ago, one of my “customers” for A/Bingo (my OSS Rails A/B testing library) told me that it needed to exclude bots from the counts.  At the time, all of my A/B tests were behind signup screens, so essentially no bots were executing them.  I considered the matter, and thought “Well, since bots aren’t intelligent enough to skew A/B test results, they’ll be distributed evenly over all the items being tested, and since A/B tests measure for difference in conversion rates rather than measuring absolute conversion rates, that should come out in the wash.”  I told him that.  He was less than happy about that answer, so I gave him my stock answer for folks who disagree with me on OSS design directions: it is MIT licensed, so you can fork it and code the feature yourself.  If you are too busy to code it, that is fine, I am available for consulting.

This issue has come up a few times, but nobody was sufficiently motivated about it to pay my consulting fee (I love when the market gives me exactly what I want), so I put it out of my mind.  However, I’ve recently been doing a spate of run-of-site A/B tests with the conversion being a purchase, and here the bots really are killers.

For example, let’s say that in the status quo I get about 2k visits a day and 5 sales, which are not atypical numbers for summer.  To discriminate between that and a conversion rate 25% higher, I’d need about 56k visits, or a month of data, to hit the 95% confidence interval.  Great.  The only problem is that A/Bingo doesn’t record 2k visits a day.  It records closer to 8k visits a day, because my site gets slammed by bots quite frequently.  This decreases my measured conversion rate from .25% to .0625%.  (If these numbers sound low, keep in mind that we’re in the offseason for my market, and that my site ranks for all manner of longtail search terms due to the amount of content I put out.  Many of my visitors are not really prospects.)

Does This Matter?

I still think that, theoretically speaking, since bots aren’t intelligent enough to convert at different rates over the alternatives, the A/B testing confidence math works out pretty much identically.  Here’s the formula for the z statistic which I use for testing:

z = (cr_a - cr_b) / sqrt( cr_a(1 - cr_a)/n_a + cr_b(1 - cr_b)/n_b )

The cr stands for Conversion Rate and n stands for sample size, for the two alternatives used.  If we increase the sample sizes by some constant factor X of bot traffic (which never converts, so each conversion rate gets divided by X), we would expect the equation to turn into:

z = (cr_a/X - cr_b/X) / sqrt( (cr_a/X)(1 - cr_a/X)/(X*n_a) + (cr_b/X)(1 - cr_b/X)/(X*n_b) )

We can factor out 1/X from the numerator and bring it to the denominator (by inverting it).  Yay, grade school.

z = (cr_a - cr_b) / ( X * sqrt( (cr_a/X)(1 - cr_a/X)/(X*n_a) + (cr_b/X)(1 - cr_b/X)/(X*n_b) ) )

Now, by the magic of high school algebra, moving the X inside the square root as X^2:

z = (cr_a - cr_b) / sqrt( X^2 (cr_a/X)(1 - cr_a/X)/(X*n_a) + X^2 (cr_b/X)(1 - cr_b/X)/(X*n_b) )

If I screw this up the math team is *so* disowning me:

z = (cr_a - cr_b) / sqrt( cr_a(1 - cr_a/X)/n_a + cr_b(1 - cr_b/X)/n_b )

Now, if you look carefully at that, it is not the same equation as we started with.  How did it change?  Well, the complement of the conversion rate, (1 - cr/X), got closer to 1 than (1 - cr) was previously.  (You can verify this by taking the limit as X approaches infinity.)  Getting closer to 1 means the terms under the square root get bigger, which means the denominator as a whole gets modestly bigger, which means the z score gets modestly smaller, which could possibly hurt the calculation we’re making.

So, assuming I worked my algebra right here, the intuitive answer that I have been giving people for months is wrong: bots do bork statistical significance testing, by artificially depressing z scores and thus turning statistically significant results into null results at the margin.
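Assuming I haven’t botched the algebra, the effect is easy to check numerically with the traffic figures above (the variable names and the X = 4 bot multiplier here are mine; this is a sketch, not A/Bingo’s code):

```ruby
# Two-proportion z statistic, as in the formula above.
def z_score(cr_a, n_a, cr_b, n_b)
  (cr_a - cr_b) /
    Math.sqrt(cr_a * (1 - cr_a) / n_a + cr_b * (1 - cr_b) / n_b)
end

# Clean traffic: 0.25% vs. a 25%-higher 0.3125%, ~28k visitors per alternative.
z_clean = z_score(0.003125, 28_000, 0.0025, 28_000)   # ~1.396

# Bots quadruple traffic (X = 4): same conversions, rates diluted, samples inflated.
x = 4.0
z_bots = z_score(0.003125 / x, 28_000 * x, 0.0025 / x, 28_000 * x)

# z_bots comes out slightly below z_clean, as the algebra predicts.
```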

So what can we do about it?

The Naive Approach

You might think you can catch most bots with a simple User-Agent check.  I thought that, too.  As it turns out, that is catastrophically wrong, at least for the bot population that I deal with.  (Note that since keyword searches would suggest that my site is in the gambling industry, I get a lot of unwanted attention from scrapers.)  It barely got rid of half of the bots.

The More Robust Approach

One way we could try restricting bots is with a CAPTCHA, but it is a very bad idea to force all users to prove that they are human just so that you can A/B test them.  We need something that is totally automated and which is difficult for bots to do.

Happily, there is an answer for that: arbitrary Javascript execution.  While Googlebot (+) and a (very) few other cutting-edge bots can execute Javascript, doing it at web scale is very resource intensive, and it also requires substantially more skill from the bot-maker than scripting wget or your HTTP library of choice.

+ What, you didn’t know that Googlebot could execute Javascript?  You need to make more friends with technically inclined SEOs.  It does partial evaluation two ways: full evaluation (executing all of the Javascript on a page, just like a browser would) and evaluation by heuristics (grepping through the code and making guesses without actually executing it).  You can verify full evaluation by taking the method discussed in this blog post and tweaking it a little bit to use GETs rather than POSTs, then waiting for Googlebot to show up in your access logs for the forbidden URL.  (Seeing the heuristic approach is easier — put a URL in syntactically live but logically dead code in Javascript, and watch it get crawled.)

To maximize the number of bots we catch (hopefully leaving Googlebot, which almost always correctly reports its user agent, as the only bot that slips through), we’re going to require the agent to perform three tasks:

  1. Add two random numbers together.  (Easy if you have JS.)
  2. Execute an AJAX request via Prototype or jQuery.  (Loading those libraries is, hah, “fairly challenging” to do without actually evaluating them.)
  3. Execute a POST.  (Googlebot should not POST.  It will do all sorts of things for GETs, though, including guessing query parameters that will likely let it crawl more of your site.  A topic for another day.)

This is fairly little code.  Here is the Prototype version:

<script type="text/javascript">
  // Pick two random numbers and POST them along with their sum; the
  // server treats any consistent (a, b, a+b) triplet as proof of JS.
  var a = Math.floor(Math.random() * 11);
  var b = Math.floor(Math.random() * 11);
  new Ajax.Request('/some-url', {method: 'post', parameters: {a: a, b: b, c: a + b}});
</script>

and in jQuery:

<script type="text/javascript">
  var a = Math.floor(Math.random() * 11);
  var b = Math.floor(Math.random() * 11);
  jQuery.post('/some-url', {a: a, b: b, c: a + b});
</script>

Now, server side, we take the parameters a, b, and c, and we see if they form a valid triplet.  If so, we conclude they are human.  If not, we continue to assume that they’re probably a bot.
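Server side, the check is tiny.  Here is a sketch of the validation (A/Bingo’s actual implementation differs; the method name is mine, and the parameters arrive as strings from the AJAX request):

```ruby
# A real browser computed c = a + b, so any consistent triplet
# is treated as evidence of a human (or at least of Javascript).
def valid_triplet?(a, b, c)
  a.to_i + b.to_i == c.to_i
end

valid_triplet?("3", "7", "10")  # => true: mark the session as human
valid_triplet?("0", "0", "1")   # => false: keep assuming probable bot
```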

Note that I could have been a bit harsher on the maybe-bot and given them a problem which trusts them less: for example, calculate the MD5 of a value that I randomly picked and stuffed in the session, so that I could reject bots which hypothetically tried to replay previous answers, or bots hand-coded to “knock” on a=0, b=0, c=0 prior to accessing the rest of my site.  However, I’m really not that picky: this isn’t to keep a dedicated adversary out, it is to distinguish the overwhelming majority of bots from humans. (Besides, nobody gains from screwing up my A/B tests, so I don’t expect there to be dedicated adversaries. This isn’t a security feature.)

You might have noticed that I assume humans can run Javascript.  (My site breaks early and often without it.)  While the system was not specifically designed so that Richard Stallman and folks running NoScript can’t influence my future development directions, I am not overwrought with grief at that coincidence.

Tying It Together

So now we can detect who can and who cannot execute Javascript, but there is one more little detail: we learn about your ability to execute Javascript potentially after you’ve started an A/B test.  For example, it is quite possible (likely, in fact) that the first page you load has an A/B test in it somewhere, and that the AJAX call from that page which registers your humanness will arrive after we have already counted (or not counted) your participation in the A/B test.

This has a really simple fix.  A/Bingo already tracks which tests you’ve previously participated in, to avoid double-counting.  In “discriminate against bots” mode, it tracks your participation (and conversions) but does not add them to the totals immediately unless you’ve previously proven yourself to be a human.  When you’re first marked as a human, it takes a look at the tests you’ve previously participated in (prior to turning human), and scores your participation for them after the fact.  Your subsequent tests will be scored immediately, because you’re now known to be human.
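The deferred-scoring bookkeeping can be sketched like so (this is my illustration, not A/Bingo’s actual internals; class and method names are mine):

```ruby
# Track participations, but only add them to the totals once the
# visitor has proven they can execute Javascript.
class Visitor
  attr_reader :pending, :scored

  def initialize
    @human   = false
    @pending = []   # participations recorded before humanity is proven
    @scored  = []   # participations actually added to the test totals
  end

  def participate(test_name)
    if @human
      @scored << test_name    # known human: count immediately
    else
      @pending << test_name   # maybe-bot: remember, but do not count yet
    end
  end

  # Called when the (a, b, a+b) proof-of-Javascript arrives.
  def mark_human!
    @human = true
    @scored.concat(@pending)  # score earlier participations after the fact
    @pending.clear
  end
end
```

A visitor’s first-page participation is thus recorded but only tallied once the AJAX proof arrives; everything after that is tallied immediately.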

Folks who are interested in seeing the specifics of the ballet between the Javascript and server-side implementation can, of course, peruse the code at their leisure by git-ing it from the official site.  If you couldn’t care less about implementation details but want your A/B tests to be bot-proof ASAP, see the last entry in the FAQ for how to turn this on.

Other Applications

You could potentially use this in a variety of contexts:

1) With a little work, it is a no-interaction-required CAPTCHA for blog commenting and similar applications. Let all users, known-human and otherwise, immediately see their comments posted, but delay public posting of the comments until you have received the proof of Javascript execution from that user. (You’ll want to use slightly trickier Javascript, probably requiring state on your server as well.) Note that this will mean your site will be forever without the light of Richard Stallman’s comments.

2) Do user discrimination passively all the time. When your server hits high load, turn off “expensive” features for users who are not yet known to be human. This will stop performance issues caused by rogue bots gone wild, and also give you quite a bit of leeway at peak load, since bots are the majority of user agents. (I suppose you could block bots entirely during high load.)

3) Block bots from destructive actions, though you should be doing that anyway (by putting destructive actions behind a POST and authentication if there is any negative consequence to the destruction).

The Most Radical A/B Test I’ve Ever Done

About four years ago, I started offering Bingo Card Creator for purchase.  Today, I stopped offering it.

That isn’t true, strictly speaking.  The original version of Bingo Card Creator was a downloadable Java application.  It has gone through a series of revisions over the years, but is still there in all its Swing-y glory.  Last year, I released an online version of Bingo Card Creator, which is made through Rails and AJAX.

My personal feeling (backed by years of answering support emails) is that my customers do not understand the difference between downloadable applications and web applications, so I sold Bingo Card Creator without regard to the distinction.  Everyone, regardless of which they are using, goes to the same purchasing page, pays the same price, and is entitled to use either (or both) at their discretion.  It is also sold as a one-time purchase, which is highly unusual for web applications.  This is largely because I was afraid of rocking the boat last summer.

The last year has taught me quite a bit about the difference between web applications and downloadable applications.  To wit: don’t write desktop apps.  The support burden is worse, the conversion rates are lower, the time through the experimental loop is higher, and they retard experimentation in a million and one ways.

Roughly 78% of my sales come from customers who have an account on the online version of the software.  I have tried slicing the numbers a dozen ways (because tracking downloads to purchases is an inexact science in the extreme), and I can’t come up with any explanation other than “The downloadable version of the software is responsible for a bare fraction of your sales.”  I’d totally believe that, too: while the original version of the web application was rough and unpolished, after a year of work it now clocks the downloadable version in almost every respect.

I get literally ten support emails about the downloadable application for every one I get about the web application, and one of the first things I suggest to customers is “Try using the web version, it will magically fix that.”

  • I’m getting some funky Java runtime error.  Try using the web application.
  • I can’t install things on this computer because of the school’s policies.  Try using the web application.
  • How do I copy the files to my niece’s computer?  By the way it is a Mac and I use a Yahoo.  Try using the web application.

However, I still get thousands of downloads a month… and they’re almost all getting a second-best experience and probably costing me money.

Thus The Experiment

I just pushed live an A/B test which was complex, but not difficult.  Testers in group A get the same experience they got yesterday, testers in group B get a parallel version of my website in which the downloadable version never existed.  Essentially, I’m A/B testing dropping a profitable product which has a modest bit of traction and thousands of paying customers.

This is rather substantially more work than typical “Tweak the button” A/B tests: it means that I had to make significant sitewide changes in copy, buttons, calls to action, ordering flow, page architecture, support/FAQ pages, etc etc.  I gradually moved towards this for several months on the day job, refactoring things so that I could eventually make this change in a less painful fashion (i.e. without touching virtually the entire site).  Even with that groundwork laid, when I “flipped the switch”  just now it required changing twenty files.

Doing This Without Annoying Customers

I’m not too concerned about the economic impact of this change: the A/B test is mostly to show me whether it is modestly positive or extraordinarily positive.  What has kept me from doing it for the last six months is the worry that it would inconvenience customers who already use the downloadable version.  As a result, I took some precautions:

The downloadable version isn’t strictly speaking EOLed.  I’ll still happily support existing customers, and will keep it around in case folks want to download it again.  (I don’t plan on releasing any more versions of it, though.  In addition to being written in Java, a language I have no desire to use in a professional capacity anymore, the program is a huge mass of technical debt.  The features I’d most keenly like to add would require close to a whole rewrite of the most complex part of the program… and wouldn’t generate anywhere near an uptick in conversion large enough to make that a worthwhile use of my time, compared to improving the website, web version, or working on other products like Appointment Reminder.)

I extended A/Bingo (my A/B testing framework) to give a way to override the A/B test choices for individual users.  I then used this capability to intentionally exclude from the A/B test (i.e. show the original site and not count) folks who hit a variety of heuristics suggesting that they probably already used the downloadable version.  One obvious one is that they’re accessing the site from the downloadable version.  There is also a prominent link in the FAQ explaining where it went, and clicking a button there will show it.  I also have a URL I can send folks to via email to accomplish the same thing, which was built with customer support in mind.

I also scheduled this test to start during the dog days of summer.  Seasonally, my sales always massively crater during the summer, which makes it a great time to spring big changes (like, e.g., new web applications).  Most of my customers won’t be using the software again until August, and that gives me a couple of months to get any kinks out of the system prior to them being seen by the majority of my user base.

My Big, Audacious Goal For This Test

I get about three (web) signups for every two downloads currently, and signups convert about twice as well as downloads do.  (Checking my math, that would imply a 3:1 ratio of sales, which is roughly what I see.)  If I was able to convert substantially all downloads to signups, I would expect to see sales increase by about 25%.
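The 25% figure follows from a little arithmetic (a sketch; c is an arbitrary download-to-sale conversion rate I made up, and it cancels out):

```ruby
# Per 5 prospects: 3 sign up (converting at 2c), 2 download (converting at c).
c = 0.02                              # arbitrary; any positive value works
sales_now   = 3 * (2 * c) + 2 * c     # 8c, and note the 6c:2c = 3:1 sales split
sales_after = 5 * (2 * c)             # 10c, if every download became a signup
lift = sales_after / sales_now - 1    # => 0.25, i.e. sales up about 25%
```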

There are a couple of follow-on effects that would have:

  • I think offering two choices probably confuses customers and decreases the total conversion rate.  Eliminating one might help.
  • Consolidating offerings means that work to improve conversion rates automatically helps all prospects, rather than just 60%.

Magic Synergy Of Conversion Optimization And AdWords

Large systemic increases in conversion rates let me walk up AdWords bids.  For example, I use Conversion Optimizer.  Essentially, rather than bidding on a cost per click basis I tell Google how much I’m willing to pay for a signup or trial download.  I tell them 40 cents, with the intention of them actually getting the average at around 30 cents, which implies (given my conversion from trials/signups to purchase) that I pay somewhere around $12 to $15 for each $30 sale.  Working back from 30 cents through my landing page conversion rate, it turns out I pay about 6 cents per click.

Now, assuming my landing page conversion is relatively constant but my trial to sale conversion goes up by 25%, instead of paying $12 to $15 a sale I’d be paying $9.60 to $12 a sale.  I could just pocket the extra money, but rather than doing that, I’m probably going to tell Google “Alright, new deal: I’ll pay you up to 60 cents a trial”, actually end up paying about 40 cents, and end up paying about 8 cents per click.  The difference between 6 and 8 will convince Google to show my ads more often than those of some competitors, increasing the number of trials I get per month out of them.  (And, not coincidentally, my AdWords bill.  Darn, that is a bloody brilliant business model, where they extract rent every time I do hard work.  Oh well, I still get money, too.)
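A quick back-of-the-envelope check of those numbers (a sketch; the intermediate conversion rates are implied by the figures above, not measured directly):

```ruby
# Working back from what I actually pay Google.
cost_per_trial = 0.30                   # the average Google actually achieves
landing_conv   = 0.06 / cost_per_trial  # 6 cents/click implies ~20% click-to-trial
trial_to_sale  = cost_per_trial / 12.0  # $12/sale at the cheap end implies 2.5%

# A 25% lift in trial-to-sale conversion cuts the cost per sale proportionally.
cost_per_sale_now    = cost_per_trial / trial_to_sale           # $12.00
cost_per_sale_lifted = cost_per_trial / (trial_to_sale * 1.25)  # $9.60
```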

We’ll see if this works or not.  As always, I’ll be posting about it on my blog.  I’m highly interested in both the numerical results of the A/B test as well as whether this turns out being a win-win for my customers and myself or whether it will cause confusion at the margin.  I’m hoping not, but can’t allow myself to stay married to all old decisions just out of a desire to be consistent.

Unveiling My Second Product (Demo Included)

Earlier this week, I went to a small massage parlor which is located at the mall next to my house.  There are three attendants on duty at any given time.  I was a “walk-in” (no appointment) and ordinarily would not have been able to see someone quickly, but luckily for me two clients had failed to make their appointments.  That was rather unlucky for the firm, though: two clients not at their appointments means two massage therapists not seeing anyone, and that costs them $2 a minute in direct economic losses.

What that business needs is some way to reduce the number of no-shows and to get earlier warning when no-shows happen, so that they can rearrange the schedule, actively solicit walk-in customers, or invite one of their regular customers to have her weekly massage early.  That would minimize the lost revenue from missed appointments.  For example, they could call every client the day before their appointment.

But calling customers is an expensive proposition, when you think of it: three staff members times an average of 10 clients each per day means they’d need to make 30 phone calls a day.  That is thirty opportunities to hit the answering machine, thirty voicemails to leave, thirty “We’re sorry, our customer is not in the service area” messages to hear.  And, since there is no dedicated receptionist at the store, that is time that has to come from an expensive therapist — and when their hands are on a receiver, they aren’t working the knots out of someone’s shoulders for $1 a minute.

There are many, many businesses like my local massage parlor: massage therapists, hair salons, auto mechanics, private tutors, and large segments of the healthcare industry.  They all have an appointment problem…  and it is about to get a bit better.

Introducing Appointment Reminder

For the last several weeks, since quitting my day job, I’ve been hard at work on Appointment Reminder.  (You can tell that I haven’t lost the same panache for inspiring, creative names that brought you Bingo Card Creator.)  It is a web application that handles scheduling and automated appointment reminders via phone, SMS, email, and post cards.  The phone and SMS reminders are through the magic of Twilio, which lets you make and receive phone calls and SMSes using simple web technology.

My value proposition to my customers is simple:

  1. Schedule your appointments in the easy-to-use web interface.  That’s all you have to do.
  2. Prior to the appointment, we’ll automatically remind your customer of the date and time of their appointment.
  3. The customer will be asked whether they’re coming or not.  If not, we’ll notify you of that immediately, so that you can reschedule them and rescue the emptied slot.
  4. This makes you money.  Plus, it is another opportunity to touch your customers, hopefully improving your commercial relationship.

How This Is A Lot Like Bingo Cards

My assumption, which has been borne out a bit in talking to potential customers, is that the market for this sort of thing is overwhelmingly female.  The personal services industry is mostly female, and dedicated receptionists (whom I am not so much replacing as making more efficient in one facet of their jobs) are almost universally female.  That is one point in common with the market for Bingo Card Creator.

In addition, the competition is similar to the competition for educational bingo cards in 2006:  they’re structurally incapable of addressing huge segments of the market and I am going to go after those segments with a vengeance.  For example, if you look for appointment reminder services online, you’ll find that most of them go after the healthcare market — that is, after all, where the money is.  (My dentist got $770 for 15 minutes of his time and 30 minutes from the dental hygienist — he’s losing a car payment every hour he doesn’t have his hands in a mouth.)

This is mostly sold as enterprise software, with the long sales cycle, non-transparent pricing, and general cruftiness to match.  It almost has to be, because the software has to plug into patient records systems (the typical enterprise morass of dead languages, horrible interfaces, and software which would have fallen to pieces years ago if it hadn’t sold its soul to the devil).  Additionally, healthcare is a very regulated industry, and compliance with HIPAA and other regulatory requirements inevitably drives costs up.

From my research, the cheapest options available cost about $300 a month as a service (often involving a contract with an actual call center) or ~$1,000 as installable software and hardware.  Those do not strike me as viable options for a hair salon, piano teacher, or small massage parlor.  However, thanks to Twilio incurring the capital expenditures on my behalf, I can afford to offer a superior service for a fraction of that price.

And because my cost structure is absurdly better than my competitors’, I don’t need to have a sales force to close the deals.  Instead, I can use the skills I’ve built up over the last several years of selling B2C software, and consummate transactions online on the strength of passive sales techniques like a demo, free trial, and website.  My guess is that the low-friction nature of this is going to help me with the less enterprise-y segment of the market, as they’re least in the mood for “Give us your phone number, address, and financial particulars so we can have a salesman set up a meeting to talk to your office manager about how much this is going to cost you.”

Demo / Minimum Viable Product

Appointment Reminder is not actually ready yet.  Having been on something of a Lean Startup kick recently, I thought that getting the software ready prior to showing it around to customers would put the cart before the horse: why spend 2-3 months getting v1.0 of the software ready if it turns out that users are cool on the entire concept?  Instead, I took a bit of inspiration from Dropbox’s minimum viable product, which was just a video showing how awesome your routine tasks would be if the product actually worked.  (They had a working prototype at the time but not one which would 100% reliably keep people’s data, which is sort of a key consideration if you’re making a backup product.)

The way I figure it, since the demo of my software is what ultimately makes the sale, everything that happens after the demo is essentially irrelevant to getting someone’s credit card number.  So everything that happens after the demo is out of scope for the MVP: I can demonstrate the sizzle without actually cooking the steak.  The sizzle for Bingo Card Creator was cards coming off your printer.  The sizzle for Appointment Reminder is demonstrating that I can make a phone ring on command if you type a number into your computer.

I’ve been programming for more than a decade now and very, very rarely get the “kid in a candy store” feel these days from it, but the first time I made my phone ring with an API call, I got all kinds of giddy.  I’m figuring that my customers will likely be the same: this is new and magical territory for them.  And unlike a certain technology company which specializes in new and magical telephone equipment, this will credibly promise to make people money.

You can try the demo of Appointment Reminder yourself.  Get your cellphone out, you’ll need it.  The flow goes something like:

  1. Open the demo page.  (I may eventually capture email addresses here, but folks visiting in the first few weeks are mostly going to be my tech buddies, so I’ll hold off for now rather than collecting a lot of mailinator.com addresses.)
  2. Take a look at the simple calendar interface.
  3. Type in your phone number and hit Schedule Fake Appointment.
  4. Your phone rings and you get a combination sales pitch and product demo in the guise of informing you of your fake appointment.  At the end of the call, you’ll be given an option to confirm or cancel the appointment.
  5. As soon as you confirm or cancel the appointment, that is reflected on your computer screen.
  6. You’ll be asked for a conversion here.  At the moment, it just asks for your email address.  Once the site is live, I’ll be pushing for the sale right there.

In real life I’d be using a more immediate way to contact the customer than updating their web interface, since they won’t be on Appointment Reminder when their customer gets the phone call, but that didn’t make sense for the demo — I’ve already credibly demonstrated my ability to make the phone ring.  Now I just need to credibly demonstrate that I can get information from the phone calls to the computer.

Pricing

I have some tentative thoughts on pricing for the service.  This was mostly a marketing decision — I want to be able to say something similar to “Appointment Reminder will pay for itself the first time it prevents a no-show.”  There is also quite a bit of daylight between the value of an appointment among my various customer groups — for a low-end salon that might only be $10, for a massage therapist $50, and for a lawyer or dentist “quite a bit indeed.”

My plan breakdown is mostly to do price discrimination among those user groups:

  • Personal ($9 / month): This is for folks who either want to send reminders to themselves/family or folks who have a part-time business like piano tutoring on the side.  Candidly, I think the value of these customers is going to be minimal, but I wanted to have this plan available for marketing reasons.  (It gives me a shot at appealing to the web worker/productivity/etc blogging folks, for example.)
  • Professional ($29 / month): The bread and butter plan.  This is intentionally sized to be decent for a low-intensity full-time business, such as a hair salon or single massage therapist.  Of note, putting the ability to record custom reminders here rather than in the personal plan provides a strong incentive for folks to upgrade.
  • Small business ($79 / month): This is where I expect to get the majority of my revenues and profits from (notice how I recommend it?).  It should be sufficient to cover most businesses smaller than a busy dentistry practice.  Speaking of dentistry practices…
  • Enterprise ($669 / month): So here’s a trick I’ve learned in Japan: there are a million ways to tell people “no”.  One way to tell people “No, I don’t want your business if you’re in health care” is to make them check a box certifying they are not in health care at signup.  That increases friction and demonstrates contempt for potential customers.  Instead, I’ll say yes, I do want their business eventually (after I get the kinks out of my system and have hardened the security and legal representation enough to feel comfortable soliciting their business), but it won’t be today and it won’t be cheap.

Common among all plans: the first month will be free (capture credit card on day 1, bill on day 30 for month #2, etc etc), and I’ll have my usual 30 day money-back guarantee.  I’d like to offer discounts for multi-month signups, but I think that Paypal may not be too keen about that until I have some history with this business, so I’ll be avoiding it.  (Oh, trivia note: Paypal Website Payments Pro + Spreedly for subscription billing.)

At these price points, the cost of providing the service (Twilio calls) would cap out at about 30% if customers routinely rode their plans to the limit.  On experience and knowledge of the industry, I think that is highly unlikely, and expect to pay something much closer to 5 ~ 10%.  Obviously, I’m never going to characterize this as a 20x markup on Twilio services to my customers, as my customers don’t care beans about Twilio: they care about making sure their expensive professionals don’t idle for lack of work.

Early Reception From Potential Customers

Sadly, I’m not in a position to get this localized into Japanese at the moment (Twilio doesn’t quite have first-class Japanese support, and I have severe doubts about my ability to market effectively domestically).  This means that I can’t exactly walk over to the massage parlors around town and ask them to try it out.  However, I’ve been talking to friends from high school who work in service industries to verify that my assumptions about their problems are accurate, lurking on message boards (the number of posts deleted for excessive vituperation about missed appointments suggests to me that there may indeed be a market need for this service), and doing keyword research.  There appear to be healthy search volumes for the core keywords, although I wish this business had a built-in longtail search strategy like bingo cards did.

This summer I’m going back to America for about a month to visit family, do a bit of consulting, and have something of a vacation.  Over that time, I’m going to be taking the demo (or the product) on my laptop and showing it to as many service providers as I can stomach seeing.  Thankfully, I expect that they’ll indulge my request for an interview — I intend to pay them their normal hourly, so coffee and a discussion of their industry and opinions about the software will work out just as well as offering a haircut/massage/etc would.

Business Plan

I didn’t write a formal business plan last time and I have no intention of devolving now.  That said, I do intend to document and revisit my assumptions.  As usual, I’ll be doing most of that on my blog, so you guys are welcome along for the ride.  I’ll also continue my general transparency policy with regards to what works, what doesn’t, and what my statistics are looking like.  That probably won’t rate automatic reporting for a few months yet — no sense building things to track the sales I don’t have.

I’ve essentially got two notes for marketing and intend to be hitting both of them: SEO and AdWords.  AdWords is likely going to be a tough nut to crack in this market due to high spending by companies with very, very high average ticket prices, but most of my competitors do not strike me as extraordinarily web savvy, and I think I can out-think their outsourced SEO/SEM teams.

In terms of reasonably achievable goals, I’d like to have v1.0 of the service open and accepting money by the end of June.  I think that two hundred paying customers is a very achievable target for a year from now, although I’m still not sure yet how being full time affects my skill at marketing, so that might be understating things by a bit.  The last time I made a sales prediction for a new product was when I expressed the wild desire that Bingo Card Creator eventually sell a whole $200 per month.  I hope to be every bit as mistaken.

A Quick Request

I value your opinions tremendously.  If you have suggestions related to the business or forthcoming feature set, I’d love to hear them.  If you have any particular areas you’d like to hear about in my upcoming blog posts, I’d love to hear that, too.

I’m coming to the market several years behind most of my competitors and will be playing catchup for quite some time.  I would be indebted if you took a few minutes to blog about Appointment Reminder.

Interviewed by Andrew Warner On Entrepreneurship [Video]

The interview I mentioned earlier got rescheduled due to technical difficulties, but it is now up on Mixergy’s site.  You can see it here.

Topics include:

  • Why would teachers want to play bingo anyhow?
  • How did you pull this off while full-time employed?
  • What is it like being a Japanese salaryman?
  • What is the next product?  (Spoiler: Not telling you yet, come back in May.)
  • How did you get traction early at the start?
  • How do you make your processes more reliable to maximize the effectiveness of your time?

I’m pretty happy with how it came out, although given that it was about 2 in the morning when I recorded it due to time zone differences, sometimes my ability to speak in coherent sentences leaves a bit to be desired.  If you have any questions, feel free to comment here or there.

Peldi from Balsamiq Interviewed For An Hour

Peldi from Balsamiq, who is hugely inspiring to the rest of the uISV community and myself, was interviewed for over an hour earlier this week on Mixergy.  Go watch it.  Everything he says about customer service, building remarkable products, early marketing (his post on the subject contains some of the best advice I’ve ever read), and competition just knocks it out of the park.

For folks here who have been reading me for a while but do not know about Mixergy yet: Andrew Warner does interviews with successful Internet business folks.  Most of them are inspiring, and many have killer, actionable tips that you can use in your businesses.  (I particularly like the one with the Wufoo guys, Peldi’s, and this one by Hiten Shah of Kissmetrics and, earlier, CrazyEgg, which I’ve mentioned a time or three here.)

Andrew interviewed me earlier, too.  The interview and transcript will be up one of these days, after the editors have made me sound intelligible.  (It is amazing what you can do with computers!)

Dropbox-style Two-sided Sharing Incentives

Last weekend, among a whole schedule of other great presentations at the Startup Lessons Learned conference (you can watch the video here), the folks behind Dropbox gave a presentation (video) about how they went about growing their business.  Apparently search ads were too expensive for them (due to bidding up by other venture-funded firms in their space) and the long tail of search was not panning out, but their referral program worked out really well for them.  Really, really ridiculously well for them.

(For those who haven’t used it, Dropbox is absurdly well-implemented file storage, backup, and sharing in the clooooooooooooud.  They have saved my life twice when hard drives died and save me hours of time schlepping files from my Windows PC to my Ubuntu box and back again.  Go try them out if you haven’t already — I can’t imagine not having them anymore.  Anyhow: the business model is “We’ll give you 2 GB of space for free, or you pay us to get more than that.”)

The biggest single thing about their referral program is that it has a two-sided incentive for sharing: the person who signs up for Dropbox through your referral link gets a better deal than they would have gotten from the homepage, and you also get a free bonus yourself.  That is marvelous, marvelous psychology there: it gives both parties a benefit so the email seems less like spam, and social relations being what they are, the person receiving the gift feels a wee bit obligated to accept it.  (This same dynamic is used by many of the social gaming companies on Facebook, to degrees which almost make me feel icky.)

This works great for Dropbox because they have a product which they can easily make more useful in a granular fashion: just add more space and stir.  The cost of the marginal space is truly minuscule as a cost of customer acquisition: a few pennies a month if the user actually fills it, and most will not.  However, the passionate freebie-seeking techies in the audience will use their disproportionately sized online megaphones to scrape another few gigs out of their account.

The more I thought about it, the more impressed with this idea I was.  I had considered and rejected “Tell a friend” as a marketing scheme for BCC a few years back, on the theory that it just creates more spam and few of my customers would use it, but the double-sided incentive addresses both of those issues for me.  Plus I thought I could potentially implement it very quickly.  (I had a funny idea for a minimum viable tell-a-friend page: just ask for the friend’s email address and then have it ping me when someone submits anything.  I’d send the emails and credit both users manually.  That would have taken my development time from about six hours to one hour, but I decided to do it the “right way” on the theory that a few hours isn’t much of a risk to me anymore.)

So I decided to test out a version of this for Bingo Card Creator.  Historically, I have given free trial users 15 bingo cards for free.  (This neatly segments my markets between parents, who very rarely have 16 children, and teachers and professional users, who rarely play bingo with under a dozen players.)  I’m allowing them to invite friends: each successful invite gives both parties 3 extra cards, with a cap at 12 gained from inviting.  This theoretically will allow a large portion of my core customer base to get their program for free, but I think that paying is ridiculously more efficient for most of them, so it will only be the truly inveterate skinflints who sign up four of their closest friends so that they can get 27 cards for class.
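The quota arithmetic above, spelled out (the method name is mine):

```ruby
# 15 free cards, plus 3 per successful invite, capped at 12 bonus cards.
def card_quota(successful_invites)
  15 + [3 * successful_invites, 12].min
end

card_quota(0)   # => 15  (the plain free trial)
card_quota(4)   # => 27  (four friends signed up: enough for a classroom)
card_quota(10)  # => 27  (the cap keeps even the most inveterate skinflints honest)
```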

The cost of allowing users to print extra bingo cards is, of course, too low to measure.

This Feature Is Surprisingly Hard

I’m used to pushing changes which only require ten lines of code, but this feature was a monster:

  • Tell a friend page
  • Processing email addresses put into that form
  • Actually sending emails
  • Facebook sharing integration
  • Customized signup page
  • I-Can’t-Believe-They’re-Not-Affiliate URLs
  • Properly crediting people for signups
  • Anti-abuse measures (mostly making it so that folks can’t use it to spam)
  • Minimum viable stats tracking (no charts yet)

All in all, it took a solid day.

I’m really pleased with a couple of implementation choices:

Getting the user’s first name:

My gut feeling (yeah yeah, A/B test incoming) is that users will be overwhelmingly more likely to respond to an invitation from Jane than from jsmith@example.com or from “a friend.”  So I ask customers to provide it if they haven’t already.  It is totally optional, but I’m thinking they’re overwhelmingly going to comply.

Highlighting the offer on the signup page:

I hit the social proof fairly hard on the signup page: mentioning again that Bob or whomever sent the invitation, and that both Bob and the user will benefit from accepting the offer.  This page could stand to be a lot prettier, and I could probably throw a testimonial in here somewhere…  In this example, our generous inviting user’s name is Bingo.

Facebook integration:

Recently, having spent far too much of my time playing Facebook games (market research, I swear!) and scouting out the ecosystem more, I’ve noticed something.  One, a quarter of the female members of my family aged 30+ are currently shearing sheep, planting pumpkins, or throwing pigs at each other.  Two, my friends seem to comment on things they share… a lot.  Whoa.  This whole Facebook thing might actually have legs.

It turns out that getting folks to share links on Facebook is child’s play: one line of code that you can copy/paste.  With a bit more work, you can customize the text Facebook will pull out of the page.  I customized the text to include a strong call to action with added social proof, naturally.  Facebook sharing: not just for blog posts.
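The one-liner in question is essentially just a link to Facebook’s sharer endpoint.  A sketch of a helper along those lines (the helper name is mine; `u` and `t` are the URL and title parameters Facebook accepted at the time):

```ruby
require "cgi"

# Builds a Facebook share link for a given page.  Passing a title lets you
# suggest the text, though Facebook may also pull it from the page itself.
def facebook_share_url(page_url, title = nil)
  url = "http://www.facebook.com/sharer.php?u=#{CGI.escape(page_url)}"
  url << "&t=#{CGI.escape(title)}" if title
  url
end

facebook_share_url("http://www.bingocardcreator.com/",
                   "Get 3 free bingo cards for your class!")
```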


One feature that I particularly like about the Facebook option is that it only requires two mouse clicks from users (one to open, one to confirm — assuming they’re already cookied on FB), and it doesn’t require them to understand or recall email addresses.  My users have enough problems remembering and managing their own email addresses — I don’t want to include a “look up Anne’s email address” step in the workflow.

Finding folks when they’re ready:

Here’s an idea ripped straight off of the better Facebook games: give folks an opportunity to share stuff right when they hit the wall.  (Well, most of the Facebook games artificially construct the wall such that you have to share to get around it…  but I’m not that tricky.)  For example, if someone wants to print 22 cards and only has 15 quota, that would be a great time to remind them of the incentive.
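A minimal sketch of what “remind them at the wall” might look like (the method name and copy are illustrative, not the production code):

```ruby
# When a print request exceeds the card quota, pitch the invite incentive
# instead of showing a bare refusal.
def print_request_message(requested, quota)
  if requested <= quota
    "Printing #{requested} cards."
  else
    shortfall = requested - quota
    "You need #{shortfall} more cards than your current limit of #{quota}.  " \
    "Invite a friend and you'll both get 3 extra cards, free!"
  end
end

print_request_message(12, 15)  # => "Printing 12 cards."
print_request_message(22, 15)  # pitches the invite offer instead
```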

Metrics Tracking

At the moment I put in very, very basic stats tracking:

  • Who ever accessed the invite page.
  • Who sent invites via email, and how many.
  • How many folks signed up as a result of invites.  (No source tracking, but GA should show me that, easily — referrer is Facebook, etc.)
  • Daily counts of all of the above.
  • As you could probably have guessed, I turned this on via an A/B test and will be watching to see if it hurts conversion to purchases.  (This is just a first cut, since it is possible that shaving 10% off sales from inviters would be worth getting the invitees, if invitees turn out to convert well.)
  • I’ve laid the groundwork for tracking the viral coefficient, although I strongly, strongly suspect it will be far below 1.  (I am not promoting this very aggressively, at all.)
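For reference, the viral coefficient K is just (invites sent per user) multiplied by (signups per invite); the numbers below are invented purely for illustration:

```ruby
# K > 1 means self-sustaining viral growth; K < 1 means invites are a
# nice bonus on top of your other acquisition channels.
def viral_coefficient(users, invites_sent, signups_from_invites)
  (invites_sent.to_f / users) * (signups_from_invites.to_f / invites_sent)
end

# 1,000 users send 200 invites, which produce 30 signups:
viral_coefficient(1000, 200, 30)  # roughly 0.03 -- far below 1
```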

Future Directions

In addition to the obvious (testing to see if this actually works), I have a few ideas for how to improve this in the future.

One obvious thing which I will probably not do is to ask folks for their webmail login details, grab their contact lists, and assist them in selecting folks to receive emails.  That would be stupidly effective, but it teaches bad Internet practices (do not give your Gmail details to random websites!) and frankly I don’t want it to be that easy to send invites.  We’ve all got that one aunt who has not figured out netiquette for Farmville and sends 14 lost kittens a day: I do not want to be her enabler.

I’ll probably also work on placement of the offer to invite, copy on the invite page, invite email, and invitation signup page (including graphical design), and will do some much more sophisticated metrics on this if early results look promising.

Will It Work?

Your guess is as good as mine.  In favor of it working: most of my customers and many of my trial users are thrilled with Bingo Card Creator.  Many of them have figured out how to share it with friends despite my not giving them any good way to do so.  (That is not a trivial thing for elementary school English teachers — one bragged to me that she figured out how to make a link to the website on her desktop and then bring it to her sister on a floppy.  If they’re getting over barriers to sharing that high, you know there must be something going on.)  There is also the natural penny-pinching nature of teachers operating in my favor — the fact that the program is not free is far and away my #1 user complaint — and the fact that they tend to travel in packs.

In favor of the idea not working out so well: these are not very plugged-in people compared to Dropbox’s early-adopter userbase, the actual mechanics of sharing still require non-trivial technical expertise (understanding email addresses and knowing those of your friends, for the option I’m giving highest billing to), and there are non-trivial business risks if it either becomes too popular or if folks feel that the invitation emails are an imposition.

Speaking of which: I capped the number of invites I’ll send out per user at 5 (hard-capped at the moment), capped the number of invitations any individual will receive at 1, and have capped the system at a total of 500 a day until I have some idea of how many is safe to send.

Bug of the Year Award

It is early, but I think this already won it: a poorly considered after_save callback on my user model caused users’ mailing list settings at MailChimp to be updated whenever their user record was updated.  That was previously desirable, since there was nothing in the user model which could be updated without touching either their email address or their mailing list settings, and all updates were at the user’s personal request.  However, when I put the users’ card limit in there, then updated that column for 50,000 users to set it to the default value, the callback fired and suddenly I had about 30,000 Delayed Job tasks all waiting to ping MailChimp.  I was ignorant of this until — thank God for checklists — I was testing the deploy and found out that I could not print bingo cards.  I assumed I had botched the Delayed Job worker processes again, but no, they were up… and right after I confirmed that, I got the email saying “Delayed Job has spiked to 30,000 jobs in the queue!”

As soon as I realized what had happened I hit the Big Red Button on Delayed Job, but not before a few thousand of the jobs had been processed.  For users who had actually confirmed the signup to the mailing list before, I don’t think anything bad happened.  For those who had second thoughts before completing the double opt-in, they were all hit with another email from MailChimp on my behalf, seemingly out of the blue.  I’ve now got an inbox of “Who are you and why are you spamming me?” to deal with.  *sigh*

On the plate for tomorrow: figure out how I could have seen this one coming.  My testing and staging environments simply ignore API calls to MailChimp for the obvious reason — I wonder if I should have them throw exceptions instead unless they’re explicitly expected behavior.
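A sketch of the exception-throwing stub idea (the class and method names are mine, and the real MailChimp client obviously has a much bigger surface):

```ruby
# In staging/test, stand in for the MailChimp client with an object that
# blows up unless a test has explicitly said "yes, I mean to hit MailChimp."
class MailchimpStagingStub
  class UnexpectedApiCall < StandardError; end

  def initialize
    @expected = false
  end

  # Tests that genuinely exercise the mailing-list flow opt in explicitly.
  def expect_api_calls!
    @expected = true
  end

  def subscribe(email)
    unless @expected
      raise UnexpectedApiCall, "tried to subscribe #{email} without opting in"
    end
    :ok  # a no-op in place of the real API call
  end
end
```

Had something like this been in place, the deploy checklist run would have failed loudly at the first unexpected subscribe, instead of silently queueing 30,000 pings.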

Data Driven Software Design Presentation (plus bonus interview)

Last week I went down to Osaka to give a presentation to the Design Matters group at the Apple Store.  I originally prepared a very geeky software-centric dive into the magic of using statistics to improve your software, but I was informed that the audience wouldn’t be as geeky as I had expected, so with great help from Andreas and company I retooled the presentation into something less technical and more interesting on the same topic.  I don’t believe it was videotaped, but you can see my presentation and notes on Data-driven Software Design below:

Data-Driven Software Design

(Incidentally, that Slideshare widget is great SEO, now isn’t it?  I’m leaving their links attached out of sheer amusement.)

After the presentation, I met with some folks from MessaLiberty, one of the most impressive companies I’ve seen in Japan.  They do lots of WordPress/website consulting and are coming out with a recommendation engine product one of these days — all with a team of about seven young engineers working sane hours.  Ah, there is hope for the future yet.

Anyhow, they asked if they could interview me for their video blog.  You can see the interview in English and, in the near future (after they get done editing it), in Japanese.  Topics include a brief overview of the above presentation, when you should start A/B testing versus when to redirect your efforts elsewhere, and my advice for getting a job in Japan (spoiler: learn Japanese).

Building Highly Reliable Websites For Small Companies

Downtime severely annoys customers.  Downtime annoys sole proprietors even more, because it has a funny way of invariably striking at the worst possible time.  Apache has no respect for date night.  So if you’re a small company without a dedicated ops team, you might well be worried about whether you can reasonably promise customers that you’ll avoid inconveniencing them while still maintaining some semblance of sanity in your own life.

Happily, you can, if you’re savvy about it.  I’ve supported thousands of customers and hundreds of thousands of trial users for four years without frequent outages, despite not being particularly skilled at server administration or having a huge budget of money or time.

Setting Expectations

Let’s get this out of the way: are you a small company dependent on technology?  You will have downtime.  You will wear a communication device twenty-four hours a day for the next several years, and respond with alacrity when it goes off.  The rest of this blog post is about minimizing that downtime and having that communication device do as little damage to your relationships and sanity as possible.

Specifically, you will want to:

  • Anticipate failure ahead of time
  • Minimize the incidence of failure
  • Be notified of failures in a timely manner
  • Quickly recover from failure
  • Learn from failures to prevent recurrence

Many of these tips are specific to my personal experiences as an entrepreneur with a small business.  If you work in a highly regulated industry, have a dedicated ops team, or are Google, you probably should not be reading my blog to solve your technical challenges.

Identify Risks To Your Service

The key to building reliable systems is first to know where the risks are.  Don’t be suckered into thinking that downtime is generally caused by unpredictable black swan events: that is an easy mistake to make when reading stories about reliability from Google et al.  This is partly because they have large teams of supergeniuses wielding nearly infinite budgets to build reliable systems, and partly because when they do have downtime, they typically phrase the report in such a way that it sounds like a black swan rather than a systemic failure to follow routine, easily understood policies seven times in a row, capped off by the one black swan that finally let the system go down.  (We’ll be returning to the policy theme in a moment.)

No, your risks are quite predictable, and you can jot them on a piece of notebook paper right now.  I’d strongly suggest actually physically doing this, as it helps inform your thinking about what is likely to break and what you’ll need to do to mitigate the risk of that.

Not sure what your risks are?  I’ve worked for the last several years in Nagoya, the Town that Toyota Built, and even though I was never in the automotive industry my professional mentors were heavily influenced by it.  You know what causes 99% of problems with cars?  Moving parts.

It is astronomically more likely for something which moves to fail than something which doesn’t: it is subject to friction, wear, foreign particles, and a thousand other sources of failure.  By comparison, all the chassis of the car has to do is not decompose into its constituent atoms, and since it hasn’t done that until now it is a good bet that today will not be the day it picks to do so.

Software systems are also, overwhelmingly, killed by their moving parts.

Hard drives, to be very literal, are one of the few things that actually move in your server, and statistically speaking they’ve got the worst failure rate of any device in the box.  Serious engineers treat hard drives as a component that by grace of God has not died yet.  That is why we put them in RAID arrays, which abstract the life and death of particular hard drives away from users of the system.  I’m rationally ignorant of how RAID actually works: if you want to know, read a book; if you want to do something productive with your day, pay your VPS provider or managed hosting company to deal with this for you.  Trust me, you have better things to do than actually touch hard drives.

Side note: This is increasingly becoming true of just about everything [PDF link] in computer systems: scales are such that statistically speaking something is broken right now, so we build our systems on the assumption that they’re broken to a degree unpredictable at run-time, and still squeeze some work out of them.  “The Cloud” has not quite brought web-scale computing principles to the smallest software companies yet, but I think it is highly likely we’ll adopt bits and pieces of them eventually.  After all, essentially all of us use RAIDs these days, many without knowing it, while they were a Serious Tool for Serious Businesses only a fairly short while ago.

Less literally speaking, your system is most vulnerable where it sees dynamism, complexity, and change.

It is at its most vulnerable when you are working on it and shortly after, because computer systems largely do not rot and once they’ve achieved a steady state tend to stay in it until something exceptional happens.  You are your own worst enemy, and you’ll take steps to mitigate the threat you pose, as described later.

Many web applications these days have easy dividing lines between static and dynamic requests.  The dynamic requests generally represent a small fraction of the overall total but will cause almost all of the failures.  If you have, for example, Nginx proxying to Mongrel, you can be quite confident that Mongrel will fail much, much more often than Nginx will.  (In point of fact, Nginx fails so seldom you can almost get away with ignoring the possibility of it happening, since something else will almost surely kill either your service or you personally prior to Nginx dying on you.  Carry life insurance and look both ways before you cross the street, but if you have too many things to do and not enough time to do them in, worrying about Nginx failing is something you can probably safely kick down the road.)

Your database is also a high-probability culprit for failing, partially for practical reasons (it is a very sophisticated bit of engineering which by its very nature is highly dynamic) and partially for philosophical ones.  For historical reasons, most databases are made/configured around the assumption that it is better to fail loudly and completely rather than fudge the ACID guarantees silently.  This is a perfectly reasonable engineering tradeoff for a bank’s transactional systems which might not be optimal for the sheep-throwing statistics your app may be tracking.  (The degree of this impedance mismatch is why some folks are very passionate about the whole NoSQL thing when they really just want to drop ACID.)

Then there are a whole host of systems outside of your direct control which can nonetheless bring your system down.  Your hosting provider’s network, for example, can fail at any moment, and typically a failure there is essentially a 100% loss of service for the typical web application.  Their upstream provider could similarly fail.  Any API your application depends on could fail in a hundred ways at any time.

Basically, keep writing on that piece of notebook paper until you run out of obvious sources of failure.

Here’s the abbreviated list of things that could go wrong at my business.

  • Operator error
  • Operator error
  • Operator error
  • Hardware failures on the server
  • Network failures at Slicehost
  • Mongrel fails
  • MySQL fails
  • Memcached fails
  • Delayed Job fails
  • Scheduled cron tasks fail
  • External APIs (e.g. Mixpanel) fail
  • Embedded Javascript (e.g. Google Analytics) fail

All of these have actually happened at one time or another, but most did not cause downtime for my customers.

Mitigating Failures Before They Happen

After you have some idea of what is likely to go wrong, you can start taking action to mitigate the risks.

One conceptually easy step is decoupling: make it so that a failure of a particular component can’t bring down the entire system.  This isn’t always possible in a cost-effective fashion: for example, in a heavily dynamic web application, it is highly likely that the database failing means you’re going down hard.  That is OK.  Ideally, your business is not running the power generators at a hospital: you don’t have to eliminate all of the downtime, you just have to minimize it, so go after the low-hanging fruit before addressing “what happens if the database dies”.  (Answer: nothing… if you spent a few months of Very Expensive Engineer Time working out a replication/failover strategy.  That is overkill when a) your customers are the sort that will tolerate a bit of downtime once every blue moon and b) you’re much, much more likely to bring the server down because you didn’t spend ten minutes writing a deployment checklist.)

One example of decoupling: never call an external API from within an HTTP request/response cycle.  That essentially makes your system 100% dependent on the external API being constantly available.  Their downtime is now your downtime.  Their capacity problems are now your capacity problems.  Their operator error is now your operator error.

Instead, do all communication with external APIs asynchronously.  There are many common patterns for this.  My website calls out to Mixpanel for statistics tracking, but the end-user doesn’t care about that, so I just queue up a Delayed Job to do the API call asynchronously; regardless of whether it eventually succeeds or fails, my user never notices.  This means that Mixpanel’s (quite infrequent) downtimes have not caused my users any loss of service.
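Delayed Job implements this with a persistent jobs table and worker processes, but the pattern itself is tiny.  A toy in-process version, purely to illustrate fire-and-forget (the class name is mine):

```ruby
# Work is pushed onto a queue and performed on a background thread; a
# failing job is swallowed rather than breaking the user's request.
class AsyncNotifier
  def initialize
    @queue = Queue.new
    @worker = Thread.new do
      while (job = @queue.pop)       # nil is the shutdown sentinel
        begin
          job.call
        rescue StandardError
          # Log and move on: Mixpanel being down must never break a page view.
        end
      end
    end
  end

  def enqueue(&job)
    @queue << job
  end

  def shutdown
    @queue << nil
    @worker.join
  end
end

events = []
notifier = AsyncNotifier.new
notifier.enqueue { events << :pageview_tracked }
notifier.enqueue { raise "Mixpanel is down" }   # invisible to the user
notifier.enqueue { events << :still_working }
notifier.shutdown
events  # => [:pageview_tracked, :still_working]
```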

If your users actually need to see the results of the API call, you can schedule the call asynchronously and then do AJAX-y magic to poll the server, asking whether it has completed yet.  If it doesn’t complete in a reasonable amount of time, you can either tell the user so in a nice, customized error message, or you can fall back to something which you can accomplish locally.  In many applications, instantaneous response to changes in the underlying data is just “nice to have” rather than a genuine requirement — RSS readers, for example, usually won’t kill anybody if they are a few minutes out of date.  If updating an RSS feed fails for whatever reason, you can probably get by with showing the user your most recent cache of it — quite possibly without ever telling the user about the failure at all.  (Engineers are often excessively protective of users.  Personally, in most applications that don’t involve lives or money, I would rather tell the user a white lie than show them a red error message.  This is particularly true when they can’t really do anything to address the error message other than “Try again later [and pray a third party who you don't even know exists has figured out why they are throwing 501 status codes and addressed it].”)

Similarly, you can decouple bits of your web infrastructure from other bits.  In Rails applications, Mongrel can (and will) fail independently of Nginx.  By default, this will result in Nginx showing a forbidding black and white page with a scary error message on it.  That is a terrible user experience and you can alleviate it in seconds: create a nice-looking page using nothing but static assets, and have Nginx serve it using the error_page directive.  Somewhat contrary to what many engineers might assume, users are often largely mollified by anything showing up on their screens, and a well-written error page is sometimes almost as good as a functioning system.
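
The error_page setup is a couple of lines of Nginx config.  A sketch, with ports and paths as assumptions:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8000;  # your Mongrel (or cluster)
    }

    # If the backend is down, serve a friendly static page instead of
    # Nginx's default black-and-white error screen.
    error_page 500 502 503 504 /maintenance.html;
    location = /maintenance.html {
        root /var/www/static;  # the page and all its assets live here
    }
}
```

The maintenance page must reference only static assets served by Nginx itself; if it pulls anything from the app server, it fails exactly when you need it.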

I know the failwhale is a running joke, but that is just because we see so much of it and are accustomed to our computers mostly working.  More typical users have computers eat their documents, freeze, and break all the time for no discernible reason at all, and if you do your job right they may never see your system fail twice, so their first and only encounter with your error page might not actually cost you too much customer goodwill.

Automated recovery is another smart mitigation step to take.  I use a process manager called god to watch over my Rails programs, and when a Mongrel starts consuming too much memory or failing to respond to “Are you alive?” messages, god forcibly restarts it.  This sounds almost crazy to the Big Freaking Enterprise engineer in me, but practically speaking it eliminates almost all common Rails problems (e.g. poor memory management caused by overly enthusiastic creation of objects and not garbage collecting them efficiently) before they cause a problem for my customers or myself.  The god daemon is, similarly, restarted daily to avoid it having memory leaks.  Yep, it smells strongly of duct tape, but the duct tape works.
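
For the curious, a god watch file is plain Ruby.  This sketch follows god’s documented condition DSL; the app name, port, and thresholds are assumptions, not my actual config:

```ruby
# Sketch of a god config (god configs are plain Ruby) -- adjust to taste.
God.watch do |w|
  w.name     = "myapp-mongrel-8000"
  w.interval = 30.seconds
  w.start    = "mongrel_rails start -c /var/www/myapp -p 8000 -d"
  w.stop     = "mongrel_rails stop -c /var/www/myapp"

  w.restart_if do |restart|
    # "consuming too much memory"
    restart.condition(:memory_usage) do |c|
      c.above = 150.megabytes
      c.times = [3, 5]   # 3 of the last 5 checks
    end
    # failing to respond to "Are you alive?"
    restart.condition(:http_response_code) do |c|
      c.host        = "localhost"
      c.port        = 8000
      c.path        = "/"
      c.code_is_not = 200
      c.timeout     = 10.seconds
      c.times       = [3, 5]
    end
  end
end
```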

Minimizing operator error is critically important, because you are the least reliable component of your system.  Because you rely on software to do most of the actual work, when you touch the system you’re almost by definition performing something novel that isn’t automated.  Novel operations are moving parts and vastly more likely to fail than known-good operations that your system crunches millions of times per day.  Additionally, even if what you want to do is absolutely flawlessly planned out, you’ll often not execute flawlessly on the plan.  This was one of the root causes of my worst downtime ever.

Happily, the steps to minimize operator error are well understood.  Unhappily, they require swallowing a bit of your ego and actually following them.  They’re well researched, reproducible, and will save you time in the long run: get over yourself.  I had to, too.

If you have to do it more than once, it should be automated or made into a checklist.  This includes things like:

  • server setup
  • server upgrades
  • upgrading code on the staging server
  • upgrading code on the production server
  • any maintenance task

Checklists are very simple: just a textual description of the procedure, why you’d want to perform it, the exact steps (i.e. down to what you type into the console), and the exact steps for verifying that the procedure was carried out correctly.  This last bit is non-optional: subtle failures in maintenance tasks are a frequent cause of downtime, sometimes surfacing weeks or months later.
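
As a concrete (and entirely hypothetical) example, a staging-deploy checklist might read:

```
CHECKLIST: Upgrade code on the staging server
WHY: Verify changes against production-like data before deploying for real.
STEPS:
  1. ssh deploy@staging.example.com
  2. cap staging deploy
VERIFY:
  1. Load the staging front page; confirm the new version number in the footer.
  2. Tail the staging log for five minutes; confirm no new exceptions appear.
```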

This is one reason we spend so much time on root cause analysis, to demonstrate to skeptical engineers that checklists are like flossing.  My dentist tells me “If you think flossing is a nuisance, that’s fine: just floss the ones you intend to keep.”  If you think checklists are a nuisance, that is fine: you can feel free to skip checklists for systems where catastrophic failure is no big deal.

I personally keep checklists in text files, because I only have one person to worry about, but in a multi-user organization wikis are fantastic for them.  This goes doubly true for wikis which keep version history, because frequently as the system matures and as you respond to issues the checklist will need modification, and rather than tracking Bob down and asking him what this command is supposed to do, a well-written changelog will tell you “That command is there to prevent you from hosing the DB like we did last August.”

Certain checklists are executed very infrequently.  Of particular note is one checklist that absolutely everyone should have: how to restore a machine (or machines) from the bare metal to a working copy of the production system.  Ideally, you should be capable of doing this in 15 minutes or less, because if you ever have to do it for real it means that disaster has struck and your site is now down.  Take 15 minutes out of your busy schedule every quarter and actually run that checklist to make sure it still works.  You will frequently find “Oh, effity, we use GraphicsMagick these days rather than ImageMagick but nobody wrote that on here.  Hah, silly me.”  In an actual emergency, that discovery would have you scrambling to correct it while the site was still down.  “Scrambling” sounds a whole lot like moving parts, right?  Right: you’ll be introducing more errors at the worst possible time to have them, in the middle of an emergency.  Get emergency recovery down to a routine, so that when you actually have an emergency (and you will, eventually), dealing with it is a matter of routine.

Automation is your friend.  Some organizations get checklist happy and make checklists for procedures which essentially can’t fail and which require no judgement.  Those shouldn’t be checklists — they should be shell scripts.  That way, they save your engineers time and you can be confident that the latest script in version control (it is in version control, riiiiiight?) is well-tested (it is tested, riiiiight?) and will actually work.
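
A hypothetical sketch of what such a script might look like: each checklist line becomes one step call, and the script aborts loudly on the first failure instead of plowing ahead the way a distracted human might.  The placeholder commands are assumptions; substitute your real ones.

```shell
#!/bin/sh
# Run one checklist step; bail out noisily if it fails.
step() {
  desc="$1"; shift
  echo "==> $desc"
  "$@" || { echo "FAILED: $desc" >&2; exit 1; }
}

# The staging-upgrade checklist as commands (true is a placeholder):
step "Update working copy"  true   # e.g. svn up or git pull
step "Run migrations"       true   # e.g. rake db:migrate
step "Restart app servers"  true   # e.g. god restart myapp-mongrels
step "Verify site responds" true   # e.g. curl -fsS http://localhost/ >/dev/null
```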

Of particular note in the Rails world: deprec and Capistrano are wonderful tools which automate server setup (very well suited to deployment to a VPS like Slicehost) and application deployment.  They are absolute lifesavers and, although I’ve bashed my head against both a few times (typically with integration issues on Windows), they save you from weeks and weeks of script writing.  I also sleep much more easily at night knowing that I set up a staging environment in ~8 minutes using my Capistrano script this month and, if everything else went wrong, I could have a server reimaged and loading my database backup almost as soon as I got the phone call.

Be Notified Of Failures In A Timely Manner

Many failures can be solved or mitigated fairly quickly once you get to a computer, which means that time to recovery is dominated by the interval between the failure arising and you becoming aware of it.  There are easy, reproducible ways to bring that interval down to “a few minutes or less.”

Low-hanging fruit: The absolute easiest possible solution is to point Pingdom or Mon.itor.us at your home page and have them contact you if it stops responding.  They’re fairly simple: if the server doesn’t respond with HTTP 200 when they try to access it (every 15 minutes or so), you get an email or SMS.  This is the simplest thing that could possibly work, but there are circumstances where it won’t catch failures.  (For example, applications of non-trivial complexity often have parts which can fail without taking the front page down.)

I recommend creating an internal status page which automatically checks all the things you think are crucial, risky, and tractable to resolution if you were to know about them.  (If an external API provider goes down and you already know your response is going to be “I wait until it comes back up”, then no sense disturbing your sleep about it, right?)  For example, mine will fail to return properly if Nginx, Mongrel, the Delayed::Job workers, memcached, or Redis is having a bad day.  You can then have your external monitoring poll that page and, if they don’t get the HTTP 200 all clear, send you an email.
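
A minimal sketch of such a status check in plain Ruby (the hostnames, ports, and set of dependencies are assumptions; in a Rails app this logic would back a controller action):

```ruby
require "socket"
require "timeout"

# True if something is listening on host:port within a short timeout.
def tcp_alive?(host, port, seconds = 2)
  Timeout.timeout(seconds) { TCPSocket.new(host, port).close; true }
rescue StandardError, Timeout::Error
  false
end

# Run every probe; return [http_status, failed_check_names].  External
# monitoring just looks for the 200 and emails you the body otherwise.
def status_page(checks)
  failures = checks.reject { |_name, probe| probe.call rescue false }
  [failures.empty? ? 200 : 500, failures.keys]
end

# Wiring -- hosts/ports are assumptions; add Mongrel, Delayed::Job, etc.:
CHECKS = {
  "memcached" => lambda { tcp_alive?("127.0.0.1", 11211) },
  "redis"     => lambda { tcp_alive?("127.0.0.1", 6379) },
}
```

Point your external monitoring at the page this backs instead of the front page, and a dead Redis now alerts you even while the front page still renders.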

For folks who are feeling adventurous (or excessively stingy), you can rig your own server monitoring terminating with a phone call or SMS with Twilio and about 30 minutes of work.  If you do this, you can escape the time or notification limitations which the notification services use to segment their customers into “hobbyists” and “enterprises who have an awful lot of money to spend to make sure nothing breaks.”  Personally I don’t think it is worth it but, hey, it is an option.  (Note that this introduces another moving part into your system which, if it fails, you will probably find out about only when your main system is down… a couple hours after the fact.  This is a compelling reason to not be a cheapskate.)
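
A sketch of the DIY route, assuming Twilio’s 2010-04-01 REST endpoint; the credentials, phone numbers, and TwiML URL are placeholders you supply:

```ruby
require "net/http"
require "uri"

# Build (but do not send) an authenticated request asking Twilio to place a
# call which plays the TwiML at twiml_url -- e.g. "Your site is down."
def twilio_call_request(account_sid, auth_token, from, to, twiml_url)
  uri = URI("https://api.twilio.com/2010-04-01/Accounts/#{account_sid}/Calls.json")
  req = Net::HTTP::Post.new(uri.path)
  req.basic_auth(account_sid, auth_token)
  req.set_form_data("From" => from, "To" => to, "Url" => twiml_url)
  [uri, req]
end

# When a check actually fails, send it:
#   uri, req = twilio_call_request(SID, TOKEN, "+15550100", "+15550199",
#                                  "http://example.com/alarm.twiml")
#   Net::HTTP.start(uri.host, uri.port, :use_ssl => true) { |h| h.request(req) }
```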

I have a very simple solution for going from notification mails to instant awareness of the problem: my cell phone, which is a cheapo Japanese model, can do custom ringtones for individual callers or mail senders.  In the event I get an email from my notification service, it plays Ride of the Valkyries.  As I recall, I’ve heard it twice, once while sound asleep and once on date night.  (The interesting question of whether I would have begged off of date night may not ever be answered, since I was able to successfully reboot before my girlfriend noticed.)

In addition to the phone-in method of server monitoring, you can also use the phone-out method.  I have been using Scout for the last couple of weeks, and it is wonderful.  Basically, a cron job running locally reports a variety of statistics to their server every few minutes, and if the statistics are anomalous or the report fails to happen as scheduled, you get detailed warnings of it.  My sole problem with Scout is that it is one chatty little robot: by default, it sends me emails about things like not-even-close-to-critical demand spikes (you went to 2% CPU utilization for a minute?  Poor baby! My business got linked to from Reddit and requests spiked 1,000% without any performance degradation: great, why are you telling me?).  After a few weeks of tuning I’ve mostly shut it up about non-critical notifications.  (One other gripe with Scout is that they reserve SMS notifications for their priciest plan.  It is market segmentation, I know, but in an age of Twilio it is almost petty.  Still, I feel quite satisfied on their $20 a month option.)

Quickly Recover From Failure

Ideally, you’re either recovering automatically or you’ve been given timely notice of a failure you anticipated and now all you have to do is open your checklist.  Neither of the above?  Well then, this is why you earn the big bucks.  Godspeed.

Learn From Failures To Prevent Recurrence

We’re big fans locally of Five Whys, which has lately achieved a bit of prominence in US startups due to Eric Ries and the rest of the Lean Startups crowd.  Boiled down to its essence, Five Whys says that no failure ever has one cause.  There might be a single surface-level immediate cause, but the failure is also a symptom of multiple process failures, because you had things in place to prevent that failure from happening and they did not trigger or were not effective.

I’m ruthless about doing the corrective action — root cause analysis — in my own business.  You can see an example here.  I’m more proud of that failure than a lot of my successes, because I learned from it.

Basically, you keep peeling layers of the failure onion until you’re satisfied that you’ve gone deep enough: five layers is a guideline, not a rule.  You then invest proportionate resources into making sure that each of the failures does not happen again.  This could mean updating procedures/checklists, adding features to your autorecovery code or diagnostics, beefing up employee training, etc etc etc.
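
An entirely hypothetical chain, to make the shape concrete:

```
Symptom: Site down for 20 minutes on Tuesday.
1. Why? The Mongrels ran out of memory and died.
2. Why? A new report feature loads an entire table into RAM.
3. Why? Code review didn't catch the unbounded query.
4. Why? Our review checklist has no item about query memory use.
5. Why? Nobody owns keeping that checklist current.

Proportionate fixes: patch the query (1), add a checklist item (3–4),
and assign the checklist an owner (5).
```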

A note for those with employees: Five Whys will frequently — very, very frequently — implicate human or cultural issues in your organization.  Just trust me on this:  you’re not alone, and you can persevere through the difficulties.  Resolving critical defects was ironically one of the least contentious processes we had at my ex-day job — even in a Japanese megacorp we had internalized that it was too important to coat with the usual amount of corporate horsepuckey.  (It helps that my Japanese megacorp is in Nagoya and that the development practices of a certain large automobile manufacturer are practically state religion here.)  Check the egos, chuck the org chart, and make the fixes you need to make to uphold your responsibilities to your customers.

Quick Start For Rails on Windows Seven

Today I killed a few hours getting my Rails environment working on my brand new shiny 64 bit Windows Seven laptop.  These instructions should also work with Windows Vista.  I’m assuming you’re a fairly experienced Rails developer who just ended up in dependency purgatory like I did for the last few hours.

1.  Grab the MySQL developer version for your architecture (32 bit or 64 bit as appropriate) here.

2.  Grab Ruby here.  I used the 1.8.6 RC2 installer for my 64 bit architecture.

3.  Add C:\Ruby\bin to your path.  You can do this on Windows by opening the Start Menu, right clicking My Computer, clicking Properties, clicking Advanced / System Settings, and then adding it to the end of the PATH variable on the lower of the two dialogs.  Apologies for inexact setting names, my computer is Japanese so I’m working from memory.

4.  Verify that your path includes C:\Ruby\bin by opening a new command line and executing “path”.

5.  Good to go?  OK, execute:

gem install --no-rdoc --no-ri rails
gem install mysql

You’ll get all manner of errors on that MySQL installation. That is OK.

6. Here’s the magic: copy libmySQL.dll from here to C:\Ruby\bin.  If you do not do this, you will get ugly errors on Rails startup about not being able to load mysql_api.so.

You should now be able to successfully work with Rails as you have been previously, even from your Windows machine, and you will amaze your Mac-wielding friends.