MGR via ElevenLabs

I was watching Vaa Vaathiyar, which has a short clip of MGR speaking. It’s either AI-generated or mimicked, and it wasn’t bad. I used ffmpeg to record the audio from the film, transcribed it via Gemini 3 Pro on AI Studio with the prompt: Transcribe this into Tamil … which gave me: ராமு… என்ன செய்திருக்கிறாய் நீ… வாத்தியார் கேட்கிறேன் சொல் நிமிர்ந்து பார்க்க கூட தைரியம் இல்லையா… ஓடாதே… நில்… Translation: Ramu… What have you done… Vaathiyar (MGR) is asking, tell me. Don’t you have the courage to stand up and look at me… Don’t run… stop… ...
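The post doesn’t show the actual ffmpeg invocation. Here’s a minimal sketch of pulling a clip’s audio track, with hypothetical filenames and timestamps (the original command may differ):

```python
import os
import shutil
import subprocess

# Hypothetical filenames and timestamps -- adjust to your own clip.
args = [
    "ffmpeg",
    "-ss", "00:12:30",   # seek to where the clip starts
    "-t", "20",          # capture 20 seconds
    "-i", "film.mp4",    # source video
    "-vn",               # drop the video stream
    "-q:a", "2",         # high-quality VBR MP3
    "clip.mp3",
]
# Run only if ffmpeg and the source file are actually present.
if shutil.which("ffmpeg") and os.path.exists("film.mp4"):
    subprocess.run(args, check=True)
else:
    print("would run:", " ".join(args))
```

Putting `-ss` before `-i` makes the seek fast (keyframe-based) rather than decoding everything up to that point.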

Testing Pólya heuristics on AI Math

Terence Tao said, “We haven’t done many experiments … large-scale studies where we take a thousand problems and just test them.” So I told Claude: You know my style. Suggest some innovative experiments I could run. The first suggestion was cool! The Pólya Audit. Pólya’s How to Solve It lists 20 heuristics (work backwards, induction, analogy, etc.). Mathematicians treat these as wisdom. Nobody has ever measured which ones actually work, and on what problem types. ...

Hack of the Day on Times of India

Last Friday, 20 Mar 2026, this “Hack of the Day” was published by The Times of India. My agents generated it entirely automatically. Here’s how that happened. On 12 Feb 2026, I met Rohit Saran, Managing Editor at The Times of India. “Our biggest challenge is the starting challenge. What story to do?” he said. “We waste a lot of time and we starve stories because of this.” What if AI could help with that? We talked for nearly two hours - and left asking: “Should we do just a daily visual newspaper?” ...

Read Tamil on TV with Gemini

I’ve been reading books using AI. Today, I used Gemini while watching a TV show. (Not to watch TV - just while watching TV.) There’s this scene in Iru Dhuruvam Season 2 with a sheet of paper with Tamil text on it. The script was small and I couldn’t read it clearly. (I’m pretty slow at reading Tamil anyway.) So I took a screenshot (Linux is great that way - you can take screenshots from any video player) and asked Gemini: ...

Sonnet 4.6 vs MiniMax M2.7

Based on several (i.e. two) recommendations, I subscribed to MiniMax. At $10/month, you get 1,500 requests every 5 hours and 15,000 every week. That’s a LOT! Using the same prompt I had Claude Code generate two data stories: The first paragraph, by Claude Sonnet 4.6 The first paragraph, by MiniMax M2.7 Here’s my comparison of the two. It’s partly based on Claude Opus 4.6’s comparison but I felt the same way. ...

Coding agents ARE the new software

Increasingly, I use coding agents instead of writing software. For example, I built a Blog UMAP. Then, I built Calvin UMAP. And more. But instead of building re-usable software, I just ran Claude with prior context. Increasingly, I use coding agents to run software. For example, I use Codex to classify my expense receipts. It writes re-usable code, but I run it using Codex, and it updates the code with new/edge cases. ...

The Nov 2025 Vibe Coding Ghost Revolution

I kept hearing that with the Nov 2025 release of Opus 4.5 and GPT 5.2 Codex, ex-coders were sprinting back to coding. On a sample of ~1,700 developers on GitHub, exactly ten fit the “dormant returner” profile. Here are a couple of examples: But they’re the exception. I could find only TEN out of 1,700 developers who returned. I also found a few who exited: To be fair, the vibe coding revolution is real, but maybe we are (I am) mis-interpreting it. ...
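The post doesn’t say how the “dormant returner” profile was defined. Here’s one plausible way to operationalize it over a user’s monthly commit counts - the thresholds (12 quiet months, 5 commits) are my own assumptions, not the study’s:

```python
def is_dormant_returner(monthly, quiet_months=12, min_active=5):
    """Flag a 'dormant returner': solid activity, then a long gap of
    zero-commit months, then activity again. Thresholds are illustrative."""
    gap_start = None   # index where the current run of zero months began
    seen_before = 0    # commits accumulated before the gap
    for i, commits in enumerate(monthly):
        if commits == 0:
            if gap_start is None:
                gap_start = i
        else:
            if (gap_start is not None
                    and i - gap_start >= quiet_months   # long enough gap
                    and seen_before >= min_active       # active before it
                    and sum(monthly[i:]) >= min_active):  # active after it
                return True
            gap_start = None
            seen_before += commits
    return False

print(is_dormant_returner([5, 3] + [0] * 12 + [4, 2]))  # → True
```

The same scan also separates the “exiters” the post mentions: they have the activity and the gap, but nothing after it.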

Live Vibe Coding using Others' Ideas

I spoke today on Design in the Age of Infinite Generativity at the Chennai Design Festival. You can read about the talk in the link above. This post is about my preparation. Tue 10 Mar 2026. Damn! Palani’s asked for the topic. Claude, what should I talk about!? Fri 20 Mar 2026. ChatGPT, tell me who the other speakers are. Fri 20 Mar 2026. Oh, I’ll just pull a bunch of links, use browser tabs as slides, create some slide dividers, and I’m ready! Sat 21 Mar 2026 1:00 pm. I’m NOT ready! The story doesn’t flow. It’s rubbish. Sat 21 Mar 2026 3:00 pm. Let me drop some of the boring ones. I just have 15 minutes. Sat 22 Mar 2026 3:30 pm. Oh, maybe I should listen to what the others are saying, just… you know… … and that proved the best decision ever, because Senthil of Payir showed a re-usable fabric calendar that converts into a bag. It was a fantastic idea, so I got curious. ...

Calvin UMAP

Similar to the embedding map of my blog posts, I created an embedding map of Calvin & Hobbes. It uses the same process as before. Video

How I use AI to teach

I’ve been using AI in my Tools in Data Science course for over two years - to teach AI, and using AI to teach. I told GitHub Copilot (prompt) to go through my transcripts, blog posts, code, and things I learned since 2024 to list all my experiments in AI education, rating each on importance and novelty. Here is the full list of my experiments.

1. Teach using exams and prompts, not content

⭐ Use exams to teach. The typical student is busy. They want grades, not learning. They’ll write the exams, but not read the content. So, I moved the course material into the questions. If they can answer the question, great. Skip the content.

Use AI to generate the content. I used to write content. Then I linked to the best content online - it’s better than mine. Now, AI drafts comics, interactive explainers, and simulators. My job is to pick good topics and generate in good formats.

Give them prompts directly. Skip the content! I generated it with prompts anyway. Give students the prompts directly. They can use better AI models, revise the prompts, and learn how to learn with AI.

⭐ Add an “Ask AI” button. Make it easy for students to use ChatGPT. Stop pretending that real-world problem solving is closed-book and solo.

⭐ Make test cases teach, not just grade. Automate the testing (with code or AI). Good test cases show students the kinds of mistakes they may make - teaching them, not just grading them. That’s great for teachers to analyze, too.

Test first, then teach from the mistakes. Let them solve problems first. Then teach them, focusing on what failed. AI does the work; humans handle what AI can’t. This lets us teach really useful skills based on real mistakes.

2. Make cheating pointless through design, not detection ...
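A test case that teaches can diagnose the likely mistake instead of just failing. A minimal sketch - the task and the hint messages are hypothetical, not from the course:

```python
# Sketch: a grader that recognizes common wrong answers and explains them.
def grade_mean(student_fn):
    data = [1, 2, 3, 4]
    got = student_fn(data)
    if got == 2.5:
        return "pass"
    if got == 2:
        return "fail: looks like integer division -- use sum(x) / len(x)"
    if got == 10:
        return "fail: you returned the sum, not the mean"
    return f"fail: expected 2.5, got {got}"

print(grade_mean(lambda x: sum(x) // len(x)))
# → fail: looks like integer division -- use sum(x) / len(x)
```

The same diagnostic labels, aggregated across students, tell the teacher which misconception to address in class.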

Local context repositories for AI

When people ask me for connections, I share my LinkedIn data and ask them to pick. This week, three people asked for AI ideas. I shared my local content with AI coding agents and asked them to pick.

STEP 1: Give access to content. I use a Dockerfile and script to isolate coding agents. To give access, I run:

dev.sh -v /home/sanand/code/blog/:/home/sanand/code/blog/:ro \
  -v /home/sanand/code/til:/home/sanand/code/til:ro \
  -v /home/sanand/Dropbox/notes/transcripts:/home/sanand/Dropbox/notes/transcripts:ro

This gives read-only access to my blog, things I learned, and transcripts - and I can add more. (My transcripts are private; the rest are public.) ...
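dev.sh itself isn’t shown in the post. A sketch of the kind of docker run command such a wrapper might assemble - the image name is hypothetical; the `:ro` suffix is what makes each mount read-only:

```python
# Sketch of docker run flags a wrapper like dev.sh might build.
# Paths mount to the same location inside the container; True = read-only.
mounts = {
    "/home/sanand/code/blog": True,
    "/home/sanand/code/til": True,
}
cmd = ["docker", "run", "--rm", "-it"]
for path, ro in mounts.items():
    cmd += ["-v", f"{path}:{path}" + (":ro" if ro else "")]
cmd.append("dev-image")  # hypothetical image name
print(" ".join(cmd))
```

Read-only mounts let the agent read your notes and code while guaranteeing it can’t modify or delete them.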

AI in SDLC at PyConf

I was on a panel on AI in SDLC at PyConf. Here’s a summary of my advice:

Process
Make AI your entire SDLC loop. Record client calls, feed them to a coding agent to directly build & deploy the solution. Record your prompts, run post-mortems, and distill them into SKILLS.md files for reuse.

Prompting
Ask AI to make output more reviewable. Don’t waste time reviewing unclear output. Prefer directional feedback (feeling, emotion, intent) over implementational detail. Also give AI freedom to do things its way. Learn from that - you’ll be surprised.

Learning ...

Interactive Explainers

Given how easy it is to create interactive explainers with LLMs, we should totally do more of these! For example, I read about “Adversarial Validation” in my Kaggle Notebooks exploration. It’s the first time I heard of it and I couldn’t understand it. So, I asked Gemini to create an interactive explainer: Create an interactive animated explainer to teach what adversarial validation is. Provide sample code only at the end. Keep the bulk of the explainer focused on explaining the concept in simple language. ELI15 ...

Human as an Interface

People often email me questions they could have answered with ChatGPT. I just copy-paste the question, copy-paste the answer. This isn’t new. From 1998-2005, I used to do this with Google searches. Even people who have Google Maps on their phone ask me for directions. I pull out my Google Maps and tell them. They don’t even get the sarcasm. Effectively, I’m the Human-as-an-Interface (HAAI everyone!) But I learnt today that this has historical precedent. Doormen, lift operators, the waiter who recites the menu, the secretary we used to dictate to, … ...

Kick-starting a PyConf Panelist Interview

I was a panelist at the PyConf Hyderabad AI in SDLC - Panel Discussion. After that, one of the volunteers asked for a video interview. “How was the panel discussion?” he asked. Ever since I started using AI actively, my brain doesn’t work without it. So, instead of an eloquent answer, I said, “Good.” He tried again. “Um… how did you feel about it?” he asked. I searched for my feelings. Again, fairly empty in the absence of AI. “Good,” I said again. ...

Blog embeddings map

I created an embedding map of my blog posts. Each point is a blog post. Similar posts are closer to each other. They’re colored by category. I’ve been blogging since 1999 and over time, my posts have evolved.

1999-2005: mostly links. I started by link-blogging
2005-2007: mostly quizzes, how I do things, Excel tips, etc.
2008-2014: mostly coding, how I do things and business realities
2015-2019: mostly nothing
2019-2023: mostly LinkedIn with some data and how I do things
2024-2026: mostly LLMs

… and this transition is entirely visible in the embedding space. ...
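The post doesn’t include its pipeline (likely a real embedding model plus UMAP). The idea, though, is simple: embed each post as a vector, then project to 2D. A dependency-light sketch where random vectors stand in for real embeddings and a numpy PCA stands in for UMAP:

```python
import numpy as np

# Stand-in data: a real pipeline would embed each post's text
# with an embedding model instead of sampling random vectors.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 384))   # 100 posts, 384-dim vectors

# PCA via SVD: project onto the two directions of greatest variance.
# (UMAP would preserve local neighborhoods better; PCA needs no extra deps.)
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T               # (100, 2) map coordinates

print(coords.shape)
```

Each row of `coords` is then one point on the scatter plot, colored by the post’s category.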

AI Palmistry

I shared a photo of my right hand with popular AI agents and asked for a detailed palmistry reading. Apply all the principles of palmistry and read my hand. Be exhaustive and cross-check against the different schools of palmistry. Tell me what they consistently agree on and where they differ. I was more interested in how much they agree with each other than with reality. So I shared all three readings and asked Claude: ...

Hardening my Dev Container Setup

I run AI coding agents inside a Docker container for safety. The setup is:

dev.dockerfile: builds the image
dev.sh: launches the container with the right mounts and env vars
dev.test.sh: verifies everything works

I wrote them semi-manually and they had bugs. I had GitHub Copilot + GPT-5.4 High update the tests and actually run the commands to verify the setup. Here’s what I learned from the process.

1. Make it easier to review. The first run took long. I pressed Ctrl+C, told Copilot to “add colored output, timing, and a live status line”. Then I re-ran. Instead of a bunch of ERROR: lines, I now got color-coded output with timing + a live status line showing what’s running. ...

Cracking online exams with coding agents

An effective way to solve online exams is to point a coding agent at them. I use that on my Tools in Data Science course in two ways: As a test case of my code. If my agent can solve it, good: I set the question correctly. As a test of student ability. If it can’t, good: it’s a tough question (provided I didn’t make a mistake). For PyConf Hyderabad, my colleague built a Crack the Prompt challenge. Crack it and you get… I don’t know… goodies? A job interview? Leaderboard bragging rights? ...

The Future of Work with AI

I often research how the world will change with AI by asking AI. Today’s session was informative. I asked Claude, roughly: Economics changes human behavior. As intelligence cost falls to zero, here are some changes in my behavior [I listed these]. Others will have experienced behavioral changes too. Search online and synthesize behavioral changes. It said this. 🟡 People spend time on problem framing & evaluation. AI can execute the middle. (I’m OK at this. Need to do more framing + evaluation.) 🟢 People don’t plan, they just build. (I’m prototyping a lot.) 🟢 People build personal data & context. (I’m mining my digital exhaust.) 🔴 People queue work for agents, delegating into the future. (I’m not. I need to do far more of this.) 🟢 People shift from searching to asking for answers. (I do this a lot, e.g. this post.) 🟡 People are AI-delegating junior jobs and developing senior-level taste early. (Need to do more.) 🟡 People treat unresolved emotions as prompts. (Need to do more.) Rough legend: 🟢 = Stuff I know. 🟡 = I kind-of know. 🔴 = New learning. ...