I have an automated (and lazy) way of finding interesting sites. This is what I do every day.
- I get the del.icio.us tags of every URL I blog about. (They’re available at http://del.icio.us/rss/url/ followed by the MD5 hex digest of the URL.)
- I pick the most popular tags (at least 50 links must have the tag) and use them as my “preferred tags”.
- I scan the most popular sites on del.icio.us and get each site’s tags.
- If a site has any of my preferred tags, I give it points (the number of points equals the number of times I’ve blogged that tag).
- I pick the top 5 sites based on my points, and read them.
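The daily loop above can be sketched roughly like this. (The function names, the data shapes, and the toy scoring are my own; del.icio.us only provides the RSS feed URL format, and actually fetching and parsing the feeds is left out.)

```python
import hashlib

def tag_feed_url(url):
    """del.icio.us exposes the tags for a URL as an RSS feed keyed by
    the MD5 hex digest of the URL itself."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "http://del.icio.us/rss/url/" + digest

def top_sites(preferred, sites, n=5):
    """Score each candidate site by summing the weight of every
    preferred tag it carries, then return the n highest scorers.

    preferred: {tag: number of times I've blogged that tag}
    sites:     {site_url: set of tags del.icio.us shows for it}
    """
    scores = {
        site: sum(preferred.get(tag, 0) for tag in tags)
        for site, tags in sites.items()
    }
    # Rank sites by total score, highest first, and keep the top n.
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

For example, with `preferred = {"python": 3, "visualization": 2}`, a site tagged with both scores 5 and a site tagged only “python” scores 3, so the former ranks first.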
There are two problems with this. First, I will only find sites similar to those I have already blogged about, not discover anything new. That’s fine to start with; I can hunt for new topics manually. The bigger problem is that the whole thing is restricted to del.icio.us. There are two ways I can (lazily) extend it.
- By finding new sources of popular URLs (which requires a site with a list of popular URLs updated daily, which I will find interesting)
- By finding new sites that tag URLs (which ideally requires an API to get the tags for a given URL)
There are lots of sources for popular URLs. And though many sites, notably Technorati, tag URLs, none that I know of have an API for it.