
ABOUT ME
aliases: Anand, Bal, Bhalla, Stud, Prof.
Vidya Mandir. IITM. IBM. IIMB. LBS.
Lehman. BCG. Infy Consulting. Gramener. Straive.
More about me.
CONTACT ME
whatsapp: +91 9741 552 552
phone: +65 8646 2570
e-mail: [email protected]
social: LinkedIn | GitHub | YouTube
WORKING WITH ME
To invite me to speak, please see my talks page.
For advice, see time management, career or AI advice. Else mail me.
To work with me on projects, please send a pull request.
GET UPDATES
RSS Feed. Visit “Categories” at the bottom for category-specific feeds.
Email Newsletter via Google Groups.
RECENT POSTS
Human as an Interface
People often email me questions they could have answered with ChatGPT. I just copy-paste the question, copy-paste the answer. This isn’t new. From 1998-2005, I used to do this with Google searches. Even people who have Google Maps on their phone ask me for directions. I pull out my Google Maps and tell them. They don’t even get the sarcasm. Effectively, I’m the Human-as-an-Interface (HAAI everyone!) But I learnt today that this has historical precedent. Doormen, lift operators, the waiter who recites the menu, the secretary we used to dictate to, … ...
Software Naming Has Power
Software naming has power. I first became aware of this when a friend commented on how much he enjoyed starting Windows 3. “Win,” he said. “I just love typing that!” I felt this again recently with just.
just lint
Can you feel it?
just build
Actually, I just like to say “Just…”
just test
Kick-starting a PyConf Panelist Interview
I was a panelist at the PyConf Hyderabad AI in SDLC - Panel Discussion. After that, one of the volunteers asked for a video interview. “How was the panel discussion?” he asked. Ever since I started using AI actively, my brain doesn’t work without it. So, instead of an eloquent answer, I said, “Good.” He tried again. “Um… how did you feel about it?” he asked. I searched for my feelings. Again, fairly empty in the absence of AI. “Good,” I said again. ...
IIM Bangalore PGP Interview Panel
Yesterday, I was part of an IIM Bangalore interview panel at Hyderabad, along with Professor Subhabrata Das and Debajyoti. Panels typically comprise two faculty and an alumnus, and handle eight interviews in the morning and eight in the evening, though in our case we had nine each. As we arrived, we were given a USB drive with each candidate’s resume, statement of purpose, and other documents they had submitted, which included employment contracts, declarations, letters of recommendation, etc., depending on the candidate. Each interview was approximately 20 minutes. Luckily, Dr Das set a timer for 18, so we didn’t run too far over. ...
Blog embeddings map
I created an embedding map of my blog posts. Each point is a blog post. Similar posts are closer to each other. They’re colored by category. I’ve been blogging since 1999 and over time, my posts have evolved.
1999-2005: mostly links. I started by link-blogging.
2005-2007: mostly quizzes, how I do things, Excel tips, etc.
2008-2014: mostly coding, how I do things and business realities
2015-2019: mostly nothing
2019-2023: mostly LinkedIn with some data and how I do things
2024-2026: mostly LLMs
… and this transition is entirely visible in the embedding space. ...
AI Palmistry
I shared a photo of my right hand with popular AI agents and asked for a detailed palmistry reading. Apply all the principles of palmistry and read my hand. Be exhaustive and cross-check against the different schools of palmistry. Tell me what they consistently agree on and what they are differing on. I was more interested in how much they agree with each other than with reality. So I shared all three readings and asked Claude: ...
Hardening my Dev Container Setup
I run AI coding agents inside a Docker container for safety. The setup is:
dev.dockerfile: builds the image
dev.sh: launches the container with the right mounts and env vars
dev.test.sh: verifies everything works
I wrote them semi-manually and they had bugs. I had GitHub Copilot + GPT-5.4 High update the tests and actually run the commands to verify the setup. Here’s what I learned from the process. 1. Make it easier to review. The first run took a long time. I pressed Ctrl+C and told Copilot to “add colored output, timing, and a live status line”. Then I re-ran. Instead of a bunch of ERROR: lines, I now got color-coded output with timing and a live status line showing what’s running. ...
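The scripts themselves aren’t shown above. As a rough illustration, here is a minimal sketch of what a dev.sh-style launcher could look like; the image name, mount point, and env vars are assumptions, not the actual setup:

```shell
#!/usr/bin/env bash
# Sketch of a dev.sh-style launcher. Hypothetical: the real image name,
# mounts, and env vars are not shown in the post.
set -euo pipefail

IMAGE="dev"            # image built from dev.dockerfile (assumed tag)
WORKDIR="/workspace"   # where the project lives inside the container

# Assemble the docker invocation: mount the current directory, set the
# working directory, and pass through credentials the agent may need.
cmd=(docker run --rm -it
  --volume "$PWD:$WORKDIR"
  --workdir "$WORKDIR"
  --env "GITHUB_TOKEN=${GITHUB_TOKEN:-}"
  "$IMAGE" bash)

# Print the command by default; set RUN=1 to actually launch the container.
if [ "${RUN:-0}" = "1" ]; then
  "${cmd[@]}"
else
  printf '%s\n' "${cmd[*]}"
fi
```

Keeping the command in an array like this makes it easy for a test script (such as a dev.test.sh) to inspect or dry-run the exact invocation.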
Cracking online exams with coding agents
An effective way to solve online exams is to point a coding agent at them. I use this on my Tools in Data Science course in two ways:
As a test case of my code. If my agent can solve it, good: I set the question correctly.
As a test of student ability. If it can’t, good: it’s a tough question (provided I didn’t make a mistake).
For PyConf Hyderabad, my colleague built a Crack the Prompt challenge. Crack it and you get… I don’t know… goodies? A job interview? Leaderboard bragging rights? ...
The Future of Work with AI
I often research how the world will change with AI by asking AI. Today’s session was informative. I asked Claude, roughly: Economics changes human behavior. As intelligence cost falls to zero, here are some changes in my behavior [I listed these]. Others will have experienced behavioral changes too. Search online and synthesize behavioral changes. It said this:
🟡 People spend time on problem framing & evaluation. AI can execute the middle. (I’m OK at this. Need to do more framing + evaluation.)
🟢 People don’t plan, they just build. (I’m prototyping a lot.)
🟢 People build personal data & context. (I’m mining my digital exhaust.)
🔴 People queue work for agents, delegating into the future. (I’m not. I need to do far more of this.)
🟢 People shift from searching to asking for answers. (I do this a lot, e.g. this post.)
🟡 People are AI-delegating junior jobs and developing senior-level taste early. (Need to do more.)
🟡 People treat unresolved emotions as prompts. (Need to do more.)
Rough legend: 🟢 = Stuff I know. 🟡 = I kind-of know. 🔴 = New learning. ...
Recording screencasts
Since WebM compresses videos very efficiently, I’ve started using videos more often. For example, in Prototyping the prototypes and in Using game-playing agents to teach. I use a fish script to compress screencasts like this:
# Increase quality with lower crf= (55 is default, 45 is better/larger)
# and higher fps= (5 is default, 10 is better/larger).
screencastcompress --crf 45 --fps 10 a.webm b.webm
...
To record the screencasts, I prefer slightly automated approaches for ease and quality. ...
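The fish script itself isn’t shown. A plausible sketch of what such a wrapper might build, assuming ffmpeg with the libvpx-vp9 encoder (the function name and structure here are hypothetical, written in bash rather than fish):

```shell
#!/usr/bin/env bash
# Sketch of a screencastcompress-style wrapper. Hypothetical: the actual
# fish script and its ffmpeg flags are not shown in the post.
set -euo pipefail

# Build the ffmpeg command for a given quality (crf), frame rate (fps),
# source, and destination. With libvpx-vp9, "-b:v 0" enables
# constant-quality mode, so -crf alone controls the quality/size trade-off.
build_cmd() {
  local crf="$1" fps="$2" src="$3" dst="$4"
  printf 'ffmpeg -i %s -c:v libvpx-vp9 -crf %s -b:v 0 -r %s %s\n' \
    "$src" "$crf" "$fps" "$dst"
}

# Mirror the invocation from the post; pipe the output to sh to run it.
cmd=$(build_cmd 45 10 a.webm b.webm)
echo "$cmd"
```

Lower crf and higher fps both grow the file, which matches the trade-off the comment in the original script describes.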