Coding

Error logging with Google Analytics

A quick note: I blogged earlier about Javascript error logging, saying that you can automatically wrap every function in your code in a try{} catch{} block, and log the error message in the catch{} block.

I used to post the error message to a Perl script. But now I use Google’s event tracking.

// Serialise the error object's properties into one compact string
var s = [];
for (var i in err) { s.push(i + '=' + err[i]); }
// Keep the event label short: trim to 500 characters
s = s.join(' ').substr(0, 500);
pageTracker._trackEvent("Error", function_name, s);

The good part is that it makes error monitoring a whole lot easier. Within a day of implementing this, I managed to get a couple of errors fixed that had been pending for months.

Short URLs

With all the discussion around URL shorteners, Diggbar, blocking it, and the rev=canonical proposal, I decided to implement a URL shortening service on this blog with the least effort possible. This probably won’t impact you just yet, but when tools become more popular and sophisticated, it would hopefully eliminate the need for tinyurl, bit.ly, etc.

Since the blog runs on WordPress, every post has an ID. The short URL for any post will simply be http://www.s-anand.net/the_ID. For example, http://s-anand.net/17 is a link to a post on Ubuntu on a Dell Latitude D420. At 21 characters, it’s roughly the same size as most URL shorteners could make it.

The code is easy: just one line added to index.php:

<link rev="canonical" href="http://s-anand.net/<?php the_ID(); ?>">

… and one line in my .htaccess:

RewriteRule ^([0-9]+)$ blog/?p=$1 [L,R=301,QSA]

Hopefully someone will come up with a WordPress plugin some time soon that does this. Until then, this should work for you.

Automating PowerPoint with Python

Writing a program to draw or change slides is sometimes easier than doing it manually. To change all fonts on a presentation to Arial, for example, you’d write this Visual Basic macro:

Sub Arial()
    For Each Slide In ActivePresentation.Slides
        For Each Shape In Slide.Shapes
            Shape.TextFrame.TextRange.Font.Name = "Arial"
        Next
    Next
End Sub

If you didn’t like Visual Basic, though, you could write the same thing in Python:

import win32com.client, sys
Application = win32com.client.Dispatch("PowerPoint.Application")
Application.Visible = True
Presentation = Application.Presentations.Open(sys.argv[1])
for Slide in Presentation.Slides:
    for Shape in Slide.Shapes:
        Shape.TextFrame.TextRange.Font.Name = "Arial"
Presentation.Save()
Application.Quit()

Save this as arial.py and type “arial.py some.ppt” to convert some.ppt into Arial.

Screenshot of Python controlling PowerPoint

Let’s break that down a bit. import win32com.client lets you interact with Windows using COM. You need ActivePython to do this. Now you can launch PowerPoint with

Application = win32com.client.Dispatch("PowerPoint.Application")

The Application object you get here is the same Application object you’d use in Visual Basic. That’s pretty powerful. What that means is, to a good extent, you can copy and paste Visual Basic code into Python and expect it to work with minor tweaks for language syntax.

So let’s try to do something with this. First, let’s open PowerPoint and add a blank slide.

# Open PowerPoint
Application = win32com.client.Dispatch("PowerPoint.Application")
# Create new presentation
Presentation = Application.Presentations.Add()
# Add a blank slide
Slide = Presentation.Slides.Add(1, 12)

That 12 is the code for a blank slide. In Visual Basic, you’d instead say:

Slide = Presentation.Slides.Add(1, ppLayoutBlank)

To do this in Python, run Python/Lib/site-packages/win32com/client/makepy.py and pick “Microsoft Office 12.0 Object Library” and “Microsoft PowerPoint 12.0 Object Library”. (If you have a version of Office other than 12.0, pick your version.)

This creates two Python files. I rename these files as MSO.py and MSPPT.py and do this:

import MSO, MSPPT
g = globals()
for c in dir(MSO.constants):    g[c] = getattr(MSO.constants, c)
for c in dir(MSPPT.constants):  g[c] = getattr(MSPPT.constants, c)

This makes constants like ppLayoutBlank, msoShapeRectangle, etc. available. So now I can create a blank slide and add a rectangle in Python, just like in Visual Basic:

Slide = Presentation.Slides.Add(1, ppLayoutBlank)
Slide.Shapes.AddShape(msoShapeRectangle, 100, 100, 200, 200)

Incidentally, the dimensions are in points (1/72″). Since the default presentation is 10″ x 7.5″ the size of each page is 720 x 540.
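That conversion is worth a one-line helper. A hypothetical convenience function (the only fact it encodes is the 72 points-per-inch factor; the name is mine, not part of the PowerPoint API):

```python
POINTS_PER_INCH = 72

def inches_to_points(inches):
    """Convert a length in inches to PowerPoint's point units."""
    return inches * POINTS_PER_INCH

# A default 10" x 7.5" slide:
print(inches_to_points(10), inches_to_points(7.5))  # → 720 540.0
```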

Let’s do something that you’d have trouble doing manually in PowerPoint: a Treemap. The Guardian’s data store kindly makes available the top 50 banks by assets, which we’ll use for this example. Our target output is a simple Treemap visualisation.

Treemap

We’ll start by creating a blank slide. The code is as before.

import win32com.client, MSO, MSPPT
g = globals()
for c in dir(MSO.constants):    g[c] = getattr(MSO.constants, c)
for c in dir(MSPPT.constants):  g[c] = getattr(MSPPT.constants, c)

Application = win32com.client.Dispatch("PowerPoint.Application")
Application.Visible = True
Presentation = Application.Presentations.Add()
Slide = Presentation.Slides.Add(1, ppLayoutBlank)

Now let’s import data from The Guardian. The spreadsheet is available at http://spreadsheets.google.com/pub?key=phNtm3LmDZEOoyu8eDzdSXw and we can get just the banks and assets as a CSV file by adding &output=csv&range=B2:C51 (via OUseful.Info).

import urllib2, csv
url = 'http://spreadsheets.google.com/pub?key=phNtm3LmDZEOoyu8eDzdSXw&output=csv&range=B2:C51'
# Open the URL using a CSV reader
reader = csv.reader(urllib2.urlopen(url))
# Convert the CSV into a list of (asset-size, bank-name) tuples
data = list((int(s.replace(',','')), b.decode('utf8')) for b, s in reader)
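The comma-stripping step deserves a quick sanity check. Here’s the same transform on a couple of made-up rows (the bank names and figures below are invented, not The Guardian’s data):

```python
# Each row is (bank-name, asset-size-with-commas), as in the spreadsheet
rows = [('Bank A', '3,500,950'), ('Bank B', '2,974,232')]

# Strip the thousands separators and flip to (asset-size, bank-name)
data = [(int(s.replace(',', '')), b) for b, s in rows]
print(data)  # → [(3500950, 'Bank A'), (2974232, 'Bank B')]
```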

I created a simple Treemap class based on the squarified algorithm — you can play with the source code. This Treemap class can be fed the data in the format we have, and a draw function. The draw function takes (x, y, width, height, data_item) as parameters, where data_item is a row in the data list that we pass to it.

def draw(x, y, w, h, n):
    # Draw the box
    shape = Slide.Shapes.AddShape(msoShapeRectangle, x, y, w, h)
    # Add text: bank name (asset size in millions)
    shape.TextFrame.TextRange.Text = n[1] + ' (' + str(int((n[0] + 500) / 1000)) + 'M)'
    # Reduce left and right margins
    shape.TextFrame.MarginLeft = shape.TextFrame.MarginRight = 0
    # Use 12pt font
    shape.TextFrame.TextRange.Font.Size = 12

from Treemap import Treemap
# 720pt x 540pt is the size of the slide.
Treemap(720, 540, data, draw)
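If you’d rather not pull in the squarified source, the calling convention is easy to mimic with a much cruder slice-and-dice split. This is a hypothetical stand-in, not the class the post uses: it recursively halves the list by weight, so the boxes are correctly sized but less square than the squarified ones.

```python
def treemap(w, h, data, draw, x=0, y=0):
    """Slice-and-dice treemap. data is a list of (size, label) tuples,
    ideally sorted descending; draw(x, y, w, h, item) renders one box."""
    total = float(sum(size for size, _ in data))
    if len(data) == 1:
        draw(x, y, w, h, data[0])
        return
    # Greedily move items into the left half until it holds ~half the weight
    acc, i = 0, 0
    while i < len(data) - 1 and acc + data[i][0] <= total / 2:
        acc += data[i][0]
        i += 1
    left, right = data[:i or 1], data[i or 1:]
    frac = sum(size for size, _ in left) / total
    if w >= h:  # split along the wider direction to keep boxes squarish
        treemap(w * frac, h, left, draw, x, y)
        treemap(w * (1 - frac), h, right, draw, x + w * frac, y)
    else:
        treemap(w, h * frac, left, draw, x, y)
        treemap(w, h * (1 - frac), right, draw, x, y + h * frac)

# Collect rectangles instead of drawing, just to sanity-check the layout
rects = []
treemap(720, 540, [(4, 'a'), (3, 'b'), (2, 'c'), (1, 'd')],
        lambda x, y, w, h, n: rects.append((n[1], round(w * h))))
print(rects)
```

Passing the draw function from above gives a full-slide treemap; the squarified version changes only how rectangles are chosen, not the interface.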

Try running the source code. You should have a single slide in PowerPoint like this.

Plain Treemap

The beauty of using PowerPoint as the output format is that converting this into a cushioned Treemap with gradients like below (or changing colours, for that matter), is a simple interactive process.

Treemap in PowerPoint

WordPress themes on Live Writer

One of the reasons I moved to WordPress was the ability to write posts offline, for which I use Windows Live Writer most of the time. The beauty of this is that I can preview the post exactly as it will appear on my site. Nothing else I know of is as WYSIWYG, and it’s very useful to be able to type knowing exactly where each word will be.

The only hitch is: if you write your own WordPress theme, Live Writer probably won’t be able to detect your theme — unless you’re an expert theme writer.

I hunted on Google to see how to get my theme to work with Live Writer. I didn’t find any tutorials. So after a bit of hit-and-miss, I’m sharing a quick primer of what worked for me.

Open any post on your blog (using your new theme) and save that as view.html in your theme folder. Now replace the page’s title with {post-title} and the page’s content with {post-body}. For example:

<html>
  <head>
    <link rel="stylesheet" href="style.css" type="text/css">
    <title>{post-title}</title>
  </head>
  <body>
    <h1>{post-title}</h1>
    <div class="entry">{post-body}</div>
  </body>
</html>

This is the file Live Writer will be using as its theme. This page will be displayed exactly as it is by Live Writer, with {post-title} and {post-body} replaced with what you type. You can put anything you want into this page — but at the very least, make sure you include your CSS files.

To let Live Writer know that view.html is what it should display, copy WordPress’ /wp-includes/wlw-manifest.xml to your theme folder and add the following lines just before </manifest>.

<views>
  <view type="WebLayout" src="view.html"/>
</views>

Live Writer searches for wlwmanifest.xml in the <link rel="wlwmanifest"> tag of your home page. Since WordPress already links to its default wlwmanifest.xml, we need to remove that link and add our own. So add the following code to your functions.php:

function my_wlwmanifest_link() {
  echo '<link rel="wlwmanifest" type="application/wlwmanifest+xml" href="' .
    get_bloginfo('template_url') . '/wlw-manifest.xml">';
}
remove_action('wp_head', 'wlwmanifest_link');
add_action('wp_head', 'my_wlwmanifest_link');

That’s it. Now if you add your blog to Live Writer, it will automatically detect the theme.

Client side scraping for contacts

By curious coincidence, just a day after my post on client side scraping, I had a chance to demo this to a client. They were making a contacts database. Now, there are two big problems with managing contacts.

  1. Getting complete information
  2. Keeping it up to date

Now, people are happy to fill out information about themselves in great detail. If you look at the public profiles on LinkedIn, you’ll find enough and more details about most people.

Normally, when getting contact details about someone, I search for their name on Google with a “site:linkedin.com” and look at that information.

Could this be automated?

I spent a couple of hours and came up with a primitive contacts scraper. Click on the link, type in a name, and you should get the LinkedIn profile for that person. (Caveat: It’s very primitive. It works only for specific URL public profiles. Try ‘Peter Peverelli’ as an example.)

It uses two technologies: the Google AJAX Search API and YQL. The search() function searches for a phrase…

google.load("search", "1");
google.setOnLoadCallback(function () {
    gs = new google.search.WebSearch();
    $('#getinfo').show();
});
 
function search(phrase, fn) {
    gs.setSearchCompleteCallback(gs,
        function() { fn(this.results); });
    gs.execute(phrase);
}

… and the linkedin() function takes a LinkedIn URL and extracts the relevant information from it, using XPath.

function scrape(url, xpath, fn) {
    $.getJSON('http://query.yahooapis.com/v1/public/yql?callback=?', {
        q: 'select * from html where(url="' +
            url + '" and xpath="' + xpath + '")',
        format: 'json'
    }, fn);
}
 
function linkedin(url, fn) {
    scrape(url, "//li[@class][h3]", fn);
};

So if you wanted to find Peter Peverelli, it searches on Google for “Peter Peverelli site:linkedin.com” and picks the first result.

From this result, it displays all the <LI> tags which have a class and a <H3> element inside them (that’s what the //li[@class][h3] XPath does).

The real value of this is in bulk usage. When there’s a big list of contacts, you don’t need to scan each of them for updates. They can be automatically updated — even if all you know is the person’s name, and perhaps where they worked at some point in time.

Client side scraping

“Scraping” is extracting content from a website. It’s often used to build something on top of the existing content. For example, I’ve built a site that tracks movies on the IMDb 250 by scraping content.

There are libraries that simplify scraping in most languages.

But all of these are on the server side. That is, the program scrapes from your machine. Can you write a web page where the viewer’s machine does the scraping?

Let’s take an example. I want to display Amazon’s bestsellers that cost less than $10. I could write a program that scrapes the site and get that information. But since the list updates hourly, I’ll have to run it every hour.

That may not be so bad. But consider Twitter. I want to display the latest iPhone tweets from http://search.twitter.com/search.atom?q=iPhone, but the results change so fast that your server can’t keep up.

Nor do you want it to. Ideally, your scraper should just be Javascript on your web page. Any time someone visits, their machine does the scraping. The bandwidth is theirs, and you avoid the popularity tax.

This is quite easily done using Yahoo Query Language. YQL converts the web into a database. All web pages are in a table called html, which has 2 fields: url and xpath. You can get IBM’s home page using:

select * from html where url="http://www.ibm.com"

Try it at Yahoo’s developer console. The whole page is loaded into the query.results element. This can be retrieved using JSONP. Assuming you have jQuery, try the following on Firebug. You should see the contents of IBM’s site on your page.

$.getJSON(
  'http://query.yahooapis.com/v1/public/yql?callback=?',
  {
    q: 'select * from html where url="http://www.ibm.com"',
    format: 'json'
  },
  function(data) {
    console.log(data.query.results)
  }
);

That’s it! Now, it’s pretty easy to scrape, especially with XPath. To get the links on IBM’s page, just change the query to

select * from html where url="http://www.ibm.com" and xpath="//a"

Or to get all external links from IBM’s site:

select * from html where url="http://www.ibm.com" and xpath="//a[not(contains(@href,'ibm.com'))][contains(@href,'http')]"

Now you can display this on your own site, using jQuery.

 

This leads to interesting possibilities, such as Map-Reduce in the browser. Here’s one example. Each movie on the IMDb (e.g. The Dark Knight) comes with a list of recommendations (like this). I want to build a repository of recommendations based on the IMDb Top 250. So here’s the algorithm. First, I’ll get the IMDb Top 250 using:

select * from html where url="http://www.imdb.com/chart/top" and xpath="//tr//tr//tr//td[3]//a"

Then I’ll get a random movie’s recommendations like this:

select * from html where url="http://www.imdb.com/title/tt0468569/recommendations" and xpath="//td/font//a[contains(@href,'/title/')]"

Then I’ll send off the results to my aggregator.

Check out the full code at http://250.s-anand.net/build-reco.js.

 

In fact, if you visited my IMDb Top 250 tracker, you already ran this code. You didn’t know it, but you just shared a bit of your bandwidth and computation power with me. (Thank you.)

And, if you think a little further, here’s another way of monetising content: by borrowing a bit of the user’s computation power for complex tasks. There already are startups built around this concept.

Infyblogs dashboard

I just finished Stephen Few‘s book on Information Dashboard Design. It talks about what’s wrong with the dashboards most Business Intelligence vendors ship (Business Objects, Oracle, Informatica, Cognos, Hyperion, etc.), and brings Tuftian principles of chart design to dashboards.

So I took a shot at designing a dashboard based on those principles, and made this dashboard for InfyBLOGS.

Infyblog dashboard

You can try for yourself. Go to http://www.s-anand.net/reco/
Note: This only works within the Infosys intranet.

  1. Right click on the "Infyblog Dashboard" link and click "Add to Favourites…" (Non-IE users — drag and drop it to your links bar)
  2. If you get a security alert, say "Yes" to continue
  3. Return to InfyBLOGS, make sure you’re logged in (that’s important) and click on the "Infyblog Dashboard" bookmark
  4. You’ll see a dashboard for your account, with comments and statistics

The rest of this article discusses design principles and the technology behind the implementation. (It’s long. Skim by reading just the bold headlines.)

Dashboards are minimalist

I’ll use the design of the dashboard to highlight some of the concepts in the book.

I designed the dashboard first on Powerpoint, keeping these principles in mind.

  1. Fits in one screen. No scrolling. Otherwise, it isn’t a dashboard.
  2. Shows only what I need to see. Ideally, from this dashboard, I should receive all the information I need to act on, and no more.
  3. Maximises the data-ink ratio. Don’t draw a single pixel that’s not required.

The first was easy. I just constrained myself to one page of PowerPoint, though if you had a lot of toolbars, the viewing area of your browser may be less than mine.

The second is a matter of picking what you want to see. For me, these are the things I look for when I log into InfyBLOGS:

  1. Any new comments?
  2. Any new posts from my friends?
  3. What’s new? What’s hot?

Then I dig deeper, occasionally, into popular entries, popular referrers, how fast the blogs are growing, etc. So I’ve put in what I think are the useful things.

The third is reflected in the way some of this information is shown. Let me explain.

Keep the charts bare

Consider the graphs on the right. They look like this.

Notice the wiggly line to the right. It’s called a sparkline, a chart type introduced by Edward Tufte. Sparklines are great for showing trends compactly. Forget the axes. Forget the axis labels. Forget the gridlines. The text on the left ("visitors per day") tells you what it is. The number (10475) is the current value. And the line is the trend. Clearly the number of visitors has exploded recently, from a relatively slow and flat start. The labels and axes aren’t going to tell you much more.

Boldly highlight what’s important

The most important thing here, the title, is on top. It’s pure black, in large font, positioned right on top, and has a line segregating it from the rest of the page.

The sections are highlighted by a bigger font, different colour, and a line, but the effect is a bit more muted.

The numbers on the right are prominent only by virtue of size and position. If anything, the colour de-emphasizes them. This is to make sure that they don’t overwhelm your attention. (They would if they were in red, for instance.)

The number 10475 is carefully set to occupy exactly two line spaces. That alignment is very important. The small lines are at a font size of 11px, and line spacing is 1.5. So at a font size of 2 x 11px x 1.5 = 33px, the size of the large number covers exactly two rows.
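That alignment arithmetic generalises to any base font size; a quick hypothetical check:

```python
# Two small lines of 11px text at 1.5 line spacing occupy 33px,
# so a 33px number lines up exactly with two rows beside it.
small_font_px = 11
line_spacing = 1.5
big_font_px = 2 * small_font_px * line_spacing
print(big_font_px)  # → 33.0
```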

The labels, such as "visitors" or "sites" are in blue, bold. Nothing too out of the way, but visible enough that they stand out.

The "View more…" links just use colour to stand out. They’re pretty unimportant.

The bulk of the text is actually made of links. Unlike traditional links, they’re not underlined and they’re not blue; it would just add to the noise if everything were underlined. But on mouseover, they turn blue and underlined, clearly indicating that they’re links.

I’ve used four mechanisms to highlight relative importance: position, size, colour and lines. There are many more: font, styling, boxes, underlining, indentation, etc.

The purpose of good design is to draw attention to what’s important, to direct the flow of the eye in a way that builds a good narrative. Don’t be shy of using every tool at your disposal in doing that. While on the topic, the Non-Designer’s Design Book is a fantastic and readable book on design for engineers.

Always use grids to display

I can’t say this any better than Khoi Vinh of subtraction.com and Mark Boulton, in their presentation Grids are Good. Grids are pervasive in every form of good design. It’s a fantastic starting point as well. Just read the slideshow and follow it blindly. You can’t go wrong.

This dashboard uses a 12-column grid. The page is divided into two vertically. The left half has mostly text and the right half has statistics and help. There are 3 blocks within each, and they’re all the same size (alignment is critical in good design). Two of the blocks on the left are subdivided into halves, while the bottom right "Links and help" section is subdivided into three. Well, it’s easier to show it than explain it:

Picture of grid

Copy shamelessly

The design for this dashboard was copied in part from WordPress 2.7’s new dashboard, part from the dashboards in Stephen Few’s book, and part from the winners of the 2008 Excel dashboard competition.

Most of these designs are minimalist. There are no extra graphics, jazzy logos, etc. that detract from the informational content. This is an informational dashboard, not a painting.

Good design is everywhere. View this presentation on How to be a Creative Sponge to get a sense of the many sources you can draw inspiration from. You’re much, much better off copying good designs than inventing bad ones.

You can hack the dashboard yourself

I’ve uploaded the source for the dashboard at http://infyblog-dashboard.googlecode.com/

Please feel free to browse through it, but don’t stop there! Go ahead and tweak it to suit what you think should be on an ideal dashboard. I’ll give access to the project to anyone who asks (leave a comment, mail me or call me at +44 7957 440 260).

Please hack at it. Besides, it’s great fun learning jQuery and working on a distributed open source project.

Now for some notes on how this thing works.

Bookmarklets bring Greasemonkey to any browser

This is implemented as a bookmarklet (a bookmark written in Javascript). It just loads the file http://www.s-anand.net/infyblog-dashboard.js which does all the grunt work. This is the code for the bookmarklet.

// Create a new script element
var s = document.createElement('script');

// Set the source of the script element to my bookmarklet script
s.setAttribute('src', 'http://www.s-anand.net/infyblog-dashboard.js');

// Add the script element to the head.
// This does the equivalent of:
// <head><script src="..."></script></head>
document.getElementsByTagName('head')[0].appendChild(s);

// Return void. Not sure why, but this is required
void(s);

This form of the bookmarklet is perhaps its most powerful use. It lets you inject any script into any site. If you want to change a site’s appearance, content, anything, just write it as a bookmarklet. (Think of it as the IE and Safari equivalent of Firefox’s Greasemonkey.)

So if you wanted to load jQuery into any page:

  1. Change the URL in the setAttribute line above to the jQuery URL and add it as a bookmark. (Or use Karl’s jQuery bookmarklet. It’s better.)
  2. Go to any page, say the IMDB Top 250 for instance
  3. Click on the bookmarklet. Now your page is jQueryified. (If you used Karl’s bookmarklet, it’ll tell you so.)
  4. On the address bar, type "javascript:alert($('table:eq(10) tr').length)" and press enter.
  5. You should see 252, the number of rows in the main table of the IMDB Top 250


jQuery is a powerful Javascript library

Firstly, you don’t want to program Javascript without a library. You just don’t. The state of browser incompatibilities and DOM usability is just too pathetic to contemplate. Stop learning DOM methods. Pick any library instead.

Going by popularity, you would do well to pick jQuery. Here’s a graph of searches on Google for the popular Javascript libraries, and jQuery leads the pack.

Google Trends showing jQuery as the dominant library (legend: jquery, prototype, dojo, ext, yui)

I picked it in early 2007, and my reasons were that it:

  1. Is small. At that time, it was 19KB and was the smallest of the libraries. (Today it’s 18KB and has more features.)
  2. Makes your code compact. It lets you chain functions, and overloads the $() function to work with DOM elements, HTML strings, arrays, objects, anything.
  3. Doesn’t pollute the namespace. Just 2 global variables: jQuery and $. And you can make it not use $ if you like.
  4. Is very intuitive. You can learn it in an hour, and the documentation is extremely helpful.
  5. Is fully functional. Apart from DOM manipulation, it covers CSS, animations and AJAX, which is all I want from a library.

These days, I have some additional reasons:

  1. It’s extensible. The plugin ecosystem is rich.
  2. It’s hosted at Google, and is really fast to load. Most often, it’s already cached in your users’ browser.
  3. John Resig is a genius. After env.js and processing.js, I trust him to advance jQuery better than other libraries.

So anyway, the first thing I do in the dashboard script is to load jQuery, using the same code outlined in the bookmarklet section above. The only addition is:

script.onload = script.onreadystatechange = function() {
    if (!this.readyState||this.readyState=='loaded'||this.readyState=='complete'){
        callback();
    }
};

This tells the browser to run the callback function once the script has loaded. My main dashboard function runs once jQuery is loaded.

InfyBLOGS has most statistics the dashboard needs

Thanks to posts from Utkarsh and Simon, I knew where to find most of the information for the dashboard:

The only things I couldn’t find were:

  • Friends posts. I’d have liked to show the titles of recent posts by friends. Yeah, I know: blogs/user_name/friends has it, but thanks to user styles, it’s nearly impossible to parse in a standard way across users. I’d really like an RSS feed for friends.
  • Interests posts. It would be cool to show recent posts by users who share your interests.
  • Communities posts. Again, I’d have liked to show the recent posts in communities, rather than just the names of communities.

Using IFRAMEs, we can load these statistics onto any page

Let’s say we want the latest 10 comments. These are available on the comments page. To load data from another page, we’d normally use XMLHTTPRequest, get the data, and parse it — perhaps using regular expressions.

$.get('/tools/recent_comments.bml', function(data) {
    // Do some regular expression parsing with data
});

But from a readability and maintainability perspective, regular expressions suck. A cleaner way is to use jQuery itself to parse the HTML.

$.get('/tools/recent_comments.bml', function(data) {
    var doc = $(data);
    // Now process the document
});

This works very well for simple pages, but sadly, for our statistics, this throws a stack overflow error (at least on Firefox; I didn’t have the guts to try it on IE.)

So on to a third approach. Why bother using Javascript to parse HTML when we’re running the whole application in a browser, the most perfect HTML parser? Using IFRAMEs, we can load the whole page within the same page and let the browser do the HTML parsing.

$('<iframe src="/tools/recent_comments.bml"></iframe>')
    .appendTo('body')
    .load(function() {
        var doc = $(this).contents();
        // Now process the document
    });

This way, you can read any page from Javascript within the same domain. Since both the bookmarklet and statistics are on the InfyBLOGS domain, we’re fine.

jQuery parses these statistics rather well

Now the doc variable contains the entire comments page, and we can start processing it. For example, the comments are in the rows of the second table on the page. (The first table is the navigation on the left.) So

doc.find('table:eq(1) tr')

gets the rows in the second table (table:eq(0) is the first table). Now, we can go through each row, extract the user, when the comment was made, links to the entry and to the comment.

doc.find('table:eq(1) tr').slice(0,8).map(function() {
    var e = $(this),
        user = e.find('td:eq(0) b').text(),
        when = e.find('td:eq(0)').text().replace(user, ''),
        post = e.find('td:eq(1) a:contains(Entry Link)').attr('href'),
        cmnt = e.find('td:eq(1) a:contains(Comment Link)').attr('href');
    // return HTML constructed using the above information
});

Google Charts API displays dynamic charts

Another cool thing is that we can use Google Charts to display charts without having to create an image beforehand. For instance, the URL:

http://chart.apis.google.com/chart?cht=lc&chs=300x200&chd=t:5,10,20,40,80

shows an image with a line chart through the values (5,10,20,40,80).

Google chart example

The dashboard uses sparklines to plot the trends in InfyBLOG statistics. These are supported by Google Charts. We can extract the data from the usage page and create a new image using Google Charts that contains the sparkline.

If you want to play around with Google Charts without mucking around with the URL structure, I have a relatively easy to use chart constructor.
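Constructing these URLs is easy to script. Here’s a hypothetical helper (the function name is mine; cht=ls is the Chart API’s sparkline type and chd=t: its text encoding, which expects values in the 0–100 range):

```python
def sparkline_url(values, width=100, height=30):
    """Build a Google Chart API URL for a sparkline of the given values."""
    data = ','.join(str(v) for v in values)
    return ('http://chart.apis.google.com/chart'
            '?cht=ls&chs=%dx%d&chd=t:%s' % (width, height, data))

print(sparkline_url([5, 10, 20, 40, 80]))
```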

Always use CSS frameworks

Libraries aren’t restricted to Javascript. CSS frameworks exist, and as with Javascript libraries, these are a no-brainer. It’s not worth hand-coding CSS from scratch. Just build on top of one of these.

The state of CSS frameworks isn’t as advanced as in Javascript. Yahoo has its UI grids, Blueprint‘s pretty popular and I found Tripoli to be nice, but the one I’m using here is 960.gs. It lets you create a 12-column and a 16-column grid on a 960-pixel wide frame, which is good enough for my purposes.


At nearly 2,500 words, this is the longest post I’ve ever written. It took a day to design and code the dashboard, but longer to write this post. Like Paul Buchheit says, it’s easier to communicate with code. I’ve also violated my principle of Less is More. My apologies. But to re-iterate:

Please hack it

The code is at http://infyblog-dashboard.googlecode.com/. You can join the project and make changes. Leave a comment, mail me or call me.

To Python from Perl

I’ve recently switched to Python, after having programmed in Perl for many years. I’m sacrificing all my knowledge of Perl’s libraries and language quirks. The reason I moved despite that is somewhat trivial, actually: Python doesn’t require a closing brace.

Consider this Javascript (or very nearly C or Java) code:

var s=0;
for (var i=0; i<10; i++) {
    for (var j=0; j<10; j++) {
        s = s + i * j
    }
}

That’s 6 lines, with two lines just containing the closing brace. Or consider Perl.

my $s = 0;
foreach my $i (1 .. 10) {
    foreach my $j (1 .. 10) {
        $s = $s + $i * $j
    }
}

Again, 6 lines with 2 for the braces. The $ before the variables also drops readability just a little bit. Here’s Python:

s = 0
for i in xrange(1, 11):
    for j in xrange(1, 11):
        s = s + i * j

On the margin, I like writing shorter programs, and it annoys me to no end to have about 20 lines in a 100-line program devoted to standalone closing braces.

What I find is that once you really know one language, the rest are pretty straightforward. OK, that’s not true. Let me qualify. Knowing one language well out of C, C++, Java, Javascript, PHP, Perl, Python and Ruby means that you can program in any of the others pretty quickly — perhaps with a day’s reading and a reference manual handy. It does NOT mean that you can pick up and start coding in Lisp, Haskell, Erlang or OCaml.

Occasionally, availability constrains which programming language I use. If I’m writing a web app, and my hosting provider does not offer Ruby or Python, that rules them out. If I don’t have a C or Java compiler on my machine, that rules them out. But quite often, these can be overcome. Installing a compiler is trivial and switching hosting providers is not too big a deal either.

Most often, it’s the libraries that determine the language I pick for a task. Perl’s regular expression library is why I’ve been using it for many years. Ruby’s Hpricot and Python’s BeautifulSoup make them ideal for scraping, much more than any regular expression setup I could use with Perl. The Python Imaging Library is great with graphics, though for animated GIFs, I need to go to the GIF89 library in Java. And I can’t do these easily with other languages. Though each of these languages boasts of vast libraries (and it’s true), there are still enough things that you want done on a regular basis for which some libraries are a lot easier to use than others.

So these days, I just find the library that suits my purpose, and pick the language based on that. Working with Flickr API or Facebook API? Go with their default PHP APIs. Working on AppEngine? Python. These days, I pick Python by default, unless I need something quick and dirty, or if it’s heavy text processing. (Python’s regular expression syntax isn’t as elegant as Perl’s or Javascript’s, mainly because it isn’t built into the language.)
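To make that concrete, here’s the same substitution in both; the Perl one-liner appears as a comment for comparison:

```python
import re

# Perl builds substitution into the language: $s =~ s/(\d+)/<$1>/g;
# Python routes it through the re module instead.
s = 'order 66 and order 99'
s = re.sub(r'(\d+)', r'<\1>', s)
print(s)  # → order <66> and order <99>
```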

To get a better hang of Python (and out of sheer bloody-mindedness), I’m working through the problems in Project Euler in Python. For those who don’t know about Project Euler,

Project Euler is a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve. Although mathematics will help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems.

Each problem has been designed according to a "one-minute rule", which means that although it may take several hours to design a successful algorithm with more difficult problems, an efficient implementation will allow a solution to be obtained on a modestly powered computer in less than one minute.

It’s a great way of learning a new programming language, and my knowledge of Python is pretty rudimentary. At the very least, going through this exercise has taught me the basics of generators.

I’ve solved around 40 problems so far. Here are my solutions to Project Euler. I’m also measuring the time it takes to run each solution. My target is no more than 10 seconds per problem, rather than one minute, for a very practical reason: the solutions page itself is generated by a Python script that runs each solution, and I can’t stand waiting longer than that for the page to refresh each time I solve a problem.
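Generators are exactly what these problems call for. As an illustration (a sketch in the same spirit, not one of the solutions from that page), here is Problem 2, the sum of the even-valued Fibonacci terms not exceeding four million, with the timing check:

```python
import itertools
import time

def fib():
    """Generator yielding the Fibonacci sequence 1, 2, 3, 5, 8, ..."""
    a, b = 1, 2
    while True:
        yield a
        a, b = b, a + b

start = time.time()
# Sum the even-valued terms that do not exceed four million
answer = sum(n for n in itertools.takewhile(lambda n: n <= 4000000, fib())
             if n % 2 == 0)
elapsed = time.time() - start

print(answer)        # 4613732
assert elapsed < 10  # comfortably within the 10-second target
```

The generator produces terms lazily, so `itertools.takewhile` stops the sequence exactly where the problem says to, with no precomputed list.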

Bound methods in Javascript

The popular way to create a class in Javascript is to define a function and add methods to its prototype. For example, let’s create a class Node that has a method hide().

var Node = function(id) {
    this.element = document.getElementById(id);
};
Node.prototype.hide = function() {
    this.element.style.display = "none";
};

If you had a header, say <h1 id="header">Heading</h1>, then this piece of code will hide the element.

var node = new Node("header");
node.hide();

If I wanted to hide the element a second later, I am tempted to use:

var node = new Node("header");
setTimeout(node.hide, 1000);

… except that it won’t work. setTimeout has no idea that the function node.hide has anything to do with the object node. It just runs the function. When node.hide() is called by setTimeout, the this object isn’t set to node, it’s set to window. node.hide() ends up trying to hide window, not node.

The standard way around this is:

var node = new Node("header");
setTimeout(function() { node.hide(); }, 1000);

I’ve been using this for a while, but it gets tiring. It’s so easy to forget to do this. Worse, it doesn’t work very well inside a loop:

var ids = ["a", "b", "c"];
for (var i = 0; i < ids.length; i++) {
    var node = new Node(ids[i]);
    setTimeout(function() { node.hide(); }, 1000);
}

This actually hides node "c" thrice, and doesn’t touch nodes "a" and "b". All three callbacks share the same node variable, which points at the last node created by the time the timeouts fire. You’ve got to remember to wrap the body of every loop that creates a closure in an extra function.

var ids = ["a", "b", "c"];
for (var i = 0; i < ids.length; i++) {
    (function() {
        var node = new Node(ids[i]);
        setTimeout(function() { node.hide(); }, 1000);
    })();
}

Now, compare that with this:

var ids = ["a", "b", "c"];
for (var i = 0; i < ids.length; i++) {
    setTimeout((new Node(ids[i])).hide, 1000);
}

Wouldn’t something this compact be nice?

To do this, the method node.hide must be bound to the object node. That is, node.hide must know that it belongs to node. And when we call another_node.hide, it must know that it belongs to another_node. This, incidentally, is the way most other languages behave. For example, in Python, try the following:

>>> class Node:
...     def hide(self):
...             pass
...
>>> node = Node()
>>> node
<__main__.Node instance at 0x00BA32D8>
>>> node.hide
<bound method Node.hide of <__main__.Node instance at 0x00BA32D8>>

The method hide is bound to the object node.

To do this in Javascript, instead of adding the methods to the prototype, you need to do two things:

  1. Add the methods to the object in the constructor
  2. Don’t use this. Set another variable called that to this, and use that instead.
var Node = function(id) {
    var that = this;
    that.element = document.getElementById(id);
    that.hide = function() {
        that.element.style.display = "none";
    }
};

Now node.hide is bound to node. The following code will work.

var node = new Node("header");
setTimeout(node.hide, 1000);

I’ve taken to using this pattern almost exclusively these days, rather than prototype-based methods. It saves a lot of trouble, and I find it makes the code more compact and easier to read.
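For completeness: newer Javascript engines that support ES5 offer Function.prototype.bind, which lets you keep the prototype-based methods and bind at the call site instead. A DOM-free sketch (the hidden flag here stands in for hiding the element):

```javascript
var Node = function(id) {
    this.id = id;
    this.hidden = false;   // stands in for this.element.style.display
};
Node.prototype.hide = function() {
    this.hidden = true;
};

var node = new Node("header");

// Unbound, setTimeout(node.hide, 1000) would run hide with the wrong `this`.
// bind() returns a new function with `this` permanently fixed to node:
var hide = node.hide.bind(node);
hide();  // works even when called standalone
// setTimeout(node.hide.bind(node), 1000) works the same way
```

In older browsers you would need a shim for bind, which is why the closure pattern above is the safer bet.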

Downloading online songs

You know those songs on Raaga, MusicIndiaOnline, etc? The ones you can listen to but can’t download?

Well, you can download them.

It’s always been possible to download these files. After all, that’s how you get to listen to them in the first place. What stopped you is security by obscurity. You didn’t know the location where the song was stored, but if you did, you could download them.

So how do you figure out the URL to download the file from?

Let’s take a look at MusicIndiaOnline first. When you click on a song, it pops up in a new window. You can’t figure out the address of that window because the address bar is hidden, but you can get it by hovering over the link before clicking, or by right-clicking and copying the link location. It’s always something like this:

http://www.musicindiaonline.com/p/x/FAfgqz0HzS.As1NMvHdW/

It always has the same structure: http://www.musicindiaonline.com/p/x/something/. Let’s call that something the song’s key.

Now, what this page does is play a SMIL file. A SMIL file is a text file with a list of songs to play. This file is always stored at

http://www.musicindiaonline.com/g/r/FAfgqz0HzS.As1NMvHdW/play2.smil

(Notice that the key remains the same.) If you type this into your browser, you’ll get a text file that you can open in Notepad. You’ll find that it has a bunch of lines, two of which are interesting. There’s one that reads:

<meta name="base" content="http://205.134.247.2/QdZmq-LL/">

The content="…" part gives you the first part of the song’s address. Then there’s a line that reads:

<audio src="ra/film/…"/>

The src="…" part gives you the rest of the address. Putting the two together, you have the full URL at which to download the song.

Except, they’re a bit smart about it. These songs are meant to be played on RealPlayer, and not downloaded. So if you try to access that URL in your browser, you get a 404 Page not found error. But if you type the same URL into RealPlayer, you can hear the song play.

To actually download the song, you need to fool the site into thinking that your browser is RealPlayer. So first, you need to get a good browser like Firefox. Then download the User Agent Switcher add-on. Change your user agent to "RMA/1.0 (compatible; RealMedia)" and try the same song: you should be able to download it.

Let me summarise:

  1. Right-click on the song you want to play, and copy the URL
  2. Extract the key. In the URL http://www.musicindiaonline.com/p/x/FAfgqz0HzS.As1NMvHdW/ the key is FAfgqz0HzS.As1NMvHdW
  3. Open http://www.musicindiaonline.com/g/r/<your_key>/play2.smil in your browser and view the downloaded file in Notepad
  4. Switch the user agent on your browser to "RMA/1.0 (compatible; RealMedia)"
  5. Put the content="…" and audio src="…" pieces together to form the URL
  6. Type the URL and save the file
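The whole sequence is easy to script. Here is a Python sketch of the steps above (the URL scheme is as described earlier; the function names are my own, and this is untested against the live site):

```python
import re
import urllib.request

SMIL_URL = "http://www.musicindiaonline.com/g/r/%s/play2.smil"
REAL_UA = "RMA/1.0 (compatible; RealMedia)"

def song_url(smil):
    """Combine the content="..." and audio src="..." fragments of a SMIL playlist."""
    base = re.search(r'name="base"\s+content="([^"]+)"', smil).group(1)
    src = re.search(r'<audio\s+src="([^"]+)"', smil).group(1)
    return base + src

def download(key, filename):
    # Steps 1-3: fetch the SMIL playlist for this key (any user agent can see this)
    smil = urllib.request.urlopen(SMIL_URL % key).read().decode("ascii", "ignore")
    # Steps 4-6: pretend to be RealPlayer and save the song
    req = urllib.request.Request(song_url(smil), headers={"User-Agent": REAL_UA})
    with open(filename, "wb") as f:
        f.write(urllib.request.urlopen(req).read())
```

The only trick is the User-Agent header on the second request, which is what the User Agent Switcher add-on does in the manual version.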

I’ve automated this in my music search engine. So if you go to the Hindi, Tamil or any other online music page and click on the blue ball next to the song, you’ll see a menu with a download option. The download opens a Java program that does these steps for you and saves the song in your PC.

So now, you’re probably thinking:

  1. How did he figure this out?
  2. What about other sites?
  3. How does that Java program work?
  4. How do I listen to these on my iPod?

How did I figure this out?

Fiddler.

I believe a good measure of a tool’s power is its ability to be the one-word answer to a relatively broad question. For example, "Where can I find out more about something?" "Google it." "How do I improve my pictures?" "Photoshop it."

Fiddler’s like that. "How do I find out what I’m downloading?" "Use Fiddler".

It’s a proxy you can install on your machine. It automatically configures your browsers when you run it. Thereafter, it tells you about all the HTTP requests that are being sent by your machine. So if you patiently walk through the logs, you’ll find all the URLs that MusicIndiaOnline or any other site uses, as well as the other headers (like User-Agent) that are needed to make it work.

What about other sites?

I’ll list a couple here.

Smashits:

  1. Right-click on the song you want to play, and copy the URL
  2. View the source and hunt for the URL fragment "player/ra/ondemand/launch_player.cfm?something". The key is something.
  3. Open http://www.smashits.com/player/ra/ondemand/playlist.asp?6;giftsToIndia1.rm;1,something in your browser and view it in Notepad
  4. Switch the user agent on your browser to "RMA/1.0 (compatible; RealMedia)"
  5. Type in what’s inside the src="…" in your browser and save the file

Oosai:

  1. Right-click on the song you want to play, and copy the URL
  2. View the source and hunt for the URL fragment onclick="setUsrlist(something)". The key is something.
  3. Open http://www.oosai.com/oosai_plyr/playlist.cfm?sng_id=something in your browser and view it in Notepad
  4. Switch the user agent on your browser to "RMA/1.0 (compatible; RealMedia)"
  5. Type in the URL that you see in Notepad and save the file.

Try figuring out the others yourself.

How does the Java program work?

It does most of these steps automatically. The applet itself is fairly straightforward, and you can view it here. It takes two parameters: db, which indicates the server from which to download (M for MusicIndiaOnline, S for Smashits, etc.) and num, which is the key.

But in order for an applet to be able to download files from any site, and to be able to save this on your PC, it needs to be a signed applet. Since Sun offers a tool to sign JAR files, this wasn’t much of an issue.

There is one pending problem with Windows Vista, however. Signed applets can’t save files anywhere on Vista. They are saved in a special folder. This is being fixed, but until then, if you use Internet Explorer on Vista, you probably will be unable to find your saved files.

How do I listen to these on my iPod?

The saved files are in RealMedia format. Your best bet is to convert them to MP3 files using MediaCoder. Then transfer them using iTunes.