I have an automated (and lazy) way of finding interesting sites. This is what I do every day.

  1. I get the tags of every URL I blog about. (They’re available at a URL that ends with the MD5 hex digest of the URL.)
  2. I pick the most popular tags (at least 50 links must have the tag) and use them as my “preferred tags”.
  3. I scan the list of most popular sites and get each site’s tags.
  4. For each preferred tag a site has, I give it points equal to the number of times I’ve blogged that tag.
  5. I pick the top 5 sites based on my points, and read them.
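The steps above can be sketched roughly like this. (The function names and the dict shapes for the tag data are my own assumptions; the MD5 keying follows step 1.)

```python
import hashlib
from collections import Counter

def tag_lookup_key(url):
    # Step 1 (assumed): tags for a URL are keyed by its MD5 hex digest
    return hashlib.md5(url.encode("utf-8")).hexdigest()

def preferred_tags(blogged_url_tags, min_links=50):
    # blogged_url_tags: {url: [tag, ...]} for every URL I've blogged about
    counts = Counter(tag for tags in blogged_url_tags.values() for tag in tags)
    # Step 2: keep only tags that at least `min_links` links share
    return {tag: n for tag, n in counts.items() if n >= min_links}

def score_sites(popular_site_tags, preferred):
    # Steps 3-4: each matching preferred tag earns points equal to
    # how often that tag appears among my blogged links
    scores = {site: sum(preferred.get(tag, 0) for tag in tags)
              for site, tags in popular_site_tags.items()}
    # Step 5: return the top 5 sites by score
    return sorted(scores, key=scores.get, reverse=True)[:5]
```

A lower `min_links` threshold broadens the preferred-tag set; 50 is just the cutoff from step 2.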

There are two problems with this. Firstly, I will only find sites similar to those I have already blogged about, not discover anything genuinely new. That’s fine to start with; I can search for new ones manually. The bigger problem is that this is restricted to a single source. There are two ways I can extend it (lazily).

  1. By finding new sources of popular URLs (which requires a site with a list of popular URLs updated daily, which I will find interesting)
  2. By finding new sites that tag URLs (which ideally requires an API to get the tags for a given URL)

There are lots of sources for popular URLs. And though many sites, notably Technorati, tag URLs, none that I know of have APIs.