I had to screen resumes from a leading MBA school. I’m lazy, and there were hundreds of CVs. So after procrastinating until this morning, I decided on 2 principles:

  1. I will not spend more than 45 minutes on this. (That’s the duration of my train ride to office.)
  2. I will not read a single CV. (I would write a program.)

The CVs were in a single PDF file. I saved it as text (it shrunk from 66MB to 1.6MB without the photos). Then I wrote a Perl program to filter CVs by keywords. We were looking for people with an interest and/or experience in IT consulting, so I picked “technology”, “consulting”, “SAP”, “IBM”, “Accenture”, “Deloitte”, etc.
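The original was a Perl one-liner-style filter; here is a minimal Python sketch of the same idea. The keyword list comes from the post, but the toy CV strings are made up for illustration, and I assume the CVs have already been split out of the big text file into separate strings:

```python
# Keep only CVs whose text mentions at least one target keyword
# (case-insensitive substring match, as a simple first pass).
KEYWORDS = ["technology", "consulting", "sap", "ibm", "accenture", "deloitte"]

def has_keyword(cv_text):
    text = cv_text.lower()
    return any(kw in text for kw in KEYWORDS)

# Toy examples standing in for the extracted CV texts.
cvs = [
    "Worked at IBM on enterprise SAP rollouts.",
    "Marketing lead for a consumer goods brand.",
]
shortlist = [cv for cv in cvs if has_keyword(cv)]
print(len(shortlist))  # 1 -- only the first CV survives the filter
```

Anyone whose CV never mentions a keyword simply never makes it into `shortlist`.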

Anyone without these keywords would fall out of the list. This eliminated 75% of the crowd. But since I didn’t want to read the rest, I used my favourite text-analysis technique: concordance. I extracted 3 words on either side of each keyword, and just read those. It was easy to see who’d “worked with suppliers like IBM” as opposed to who’d worked at IBM.
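The concordance step can be sketched the same way. This is a hedged Python illustration (the original was Perl): for each keyword hit, pull out the three words on either side so a human can skim just those snippets. The input sentence is invented for the example:

```python
import re

KEYWORDS = {"ibm", "sap", "consulting"}

def concordance(text, window=3):
    """Return a snippet of `window` words on either side of each keyword hit."""
    words = text.split()
    snippets = []
    for i, w in enumerate(words):
        # Strip punctuation so "IBM," still matches the keyword "ibm".
        if re.sub(r"\W", "", w).lower() in KEYWORDS:
            lo = max(0, i - window)
            snippets.append(" ".join(words[lo:i + window + 1]))
    return snippets

print(concordance("Managed procurement, worked with suppliers like IBM and Oracle daily"))
# ['with suppliers like IBM and Oracle daily']
```

The snippet alone is enough to tell a supplier relationship from actual employment, which is exactly the distinction the full CV would have buried.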

That’s it! I managed to cut the list down to 10%. Better yet, I also had a preference ranking. People with multiple keywords ranked higher than those with fewer keywords. And all this took little more than my train ride to office.
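The preference ranking falls out almost for free: count keyword hits per CV and sort. Again a Python sketch with made-up CV strings, not the author's actual Perl:

```python
KEYWORDS = ["technology", "consulting", "sap", "ibm"]

def score(cv_text):
    # Number of distinct keywords the CV mentions.
    text = cv_text.lower()
    return sum(1 for kw in KEYWORDS if kw in text)

cvs = ["SAP consulting at IBM", "IBM supplier management", "Retail sales"]
ranked = sorted(cvs, key=score, reverse=True)
print(ranked)
# ['SAP consulting at IBM', 'IBM supplier management', 'Retail sales']
```

A CV hitting three keywords sorts above one hitting a single keyword, which sorts above one hitting none.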

I can see this going to the next level. It’s easy to write a customised rejection letter, depending on which keywords are missing for each person.
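That next level is a few more lines: list the keywords each CV is missing and drop them into a letter template. The template wording and CV string below are purely illustrative:

```python
KEYWORDS = ["technology", "consulting", "sap", "ibm"]
# Hypothetical letter template -- not from the original post.
TEMPLATE = ("Thank you for applying. We were looking for experience in "
            "{gaps}, which we could not find on your CV.")

def rejection_letter(cv_text):
    text = cv_text.lower()
    missing = [kw for kw in KEYWORDS if kw not in text]
    return TEMPLATE.format(gaps=", ".join(missing))

print(rejection_letter("Brand manager, FMCG sector"))
```

Each rejected candidate gets a letter naming exactly the keywords their CV lacked.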

Now, if it’s this easy to filter resumes, I can see every organisation doing it in a few years. Which means you need to write resumes for machines as well, not just for humans! For example, on my next CV, I’ll make sure I include the words “Boston Consulting Group” as well as “BCG” – just in case the software searches for only one of those keywords. And I’ll make doubly sure I avoid spelling mistakes!