Live Vibe Coding using Others' Ideas

I spoke today on Design in the Age of Infinite Generativity at the Chennai Design Festival. You can read about the talk in the link above. This post is about my preparation.

Tue 10 Mar 2026. Damn! Palani’s asked for the topic. Claude, what should I talk about!?
Fri 20 Mar 2026. ChatGPT, tell me who the other speakers are.
Fri 20 Mar 2026. Oh, I’ll just pull a bunch of links, use browser tabs as slides, create some slide dividers, and I’m ready!
Sat 21 Mar 2026, 1:00 pm. I’m NOT ready! The story doesn’t flow. It’s rubbish.
Sat 21 Mar 2026, 3:00 pm. Let me drop some of the boring ones. I only have 15 minutes.
Sun 22 Mar 2026, 3:30 pm. Oh, maybe I should listen to what the others are saying, just… you know…

… and that proved the best decision ever, because Senthil of Payir showed a re-usable fabric calendar that converts into a bag. It was a fantastic idea, so I got curious. ...

Calvin UMAP

Similar to the embedding map of my blog posts, I created an embedding map of Calvin & Hobbes. It uses the same process as before. Video

How I use AI to teach

I’ve been using AI in my Tools in Data Science course for over two years - to teach AI, and using AI to teach. I told GitHub Copilot (prompt) to go through my transcripts, blog posts, code, and things I learned since 2024 to list every one of my experiments in AI education, rating each on importance and novelty. Here is the full list of my experiments.

1. Teach using exams and prompts, not content

⭐ Use exams to teach. The typical student is busy. They want grades, not learning. They’ll write the exams, but not read the content. So, I moved the course material into the questions. If they can answer the question, great. Skip the content.

Use AI to generate the content. I used to write content. Then I linked to the best content online - it’s better than mine. Now, AI drafts comics, interactive explainers, and simulators. My job is to pick good topics and generate in good formats.

Give them prompts directly. Skip the content! I generated it with prompts anyway. Give students the prompts directly. They can use better AI models, revise the prompts, and learn how to learn with AI.

⭐ Add an “Ask AI” button. Make it easy for students to use ChatGPT. Stop pretending that real-world problem solving is closed-book and solo.

⭐ Make test cases teach, not just grade. Automate the testing (with code or AI). Good test cases show students the kinds of mistakes they may make - teaching them, not just grading them. That’s great for teachers to analyze, too.

Test first, then teach from the mistakes. Let them solve problems first. Then teach them, focusing on what failed. AI does the work; humans handle what AI can’t. This lets us teach really useful skills based on real mistakes.

2. Make cheating pointless through design, not detection ...
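The “test cases that teach” idea can be sketched in a few lines. This is my illustration, not the course’s actual grader: `normalize_scores` is a hypothetical student exercise, and each failing case returns a lesson naming the likely misconception instead of a bare FAIL.

```python
def check(normalize):
    """Run teaching test cases: each failure explains the likely misconception."""
    cases = [
        # (input, expected output, lesson shown on failure)
        ([10, 20, 40], [0.25, 0.5, 1.0],
         "Scores should be scaled by the maximum, so the top score becomes 1.0."),
        ([5], [1.0],
         "A single score is its own maximum - it should normalize to 1.0."),
        ([0, 0], [0.0, 0.0],
         "All-zero scores divide by zero. Guard against max(scores) == 0."),
    ]
    lessons = []
    for scores, expected, lesson in cases:
        try:
            got = normalize(scores)
        except ZeroDivisionError:
            got = None  # the naive solution crashes here
        if got != expected:
            lessons.append(lesson)
    return lessons
```

A teacher can also aggregate the returned lessons across a class to see which misconception to address first.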

Local context repositories for AI

When people ask me for connections, I share my LinkedIn data and ask them to pick. This week, three people asked for AI ideas. I shared my local content with AI coding agents and asked them to pick.

STEP 1: Give access to content. I use a Dockerfile and script to isolate coding agents. To give access, I run:

dev.sh -v /home/sanand/code/blog/:/home/sanand/code/blog/:ro \
  -v /home/sanand/code/til:/home/sanand/code/til:ro \
  -v /home/sanand/Dropbox/notes/transcripts:/home/sanand/Dropbox/notes/transcripts:ro

This gives read-only access to my blog, things I learned, and transcripts, and I can add more. (My transcripts are private; the rest are public.) ...
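The mount pattern above is mechanical enough to generate. A minimal sketch, assuming a dev.sh-style wrapper around `docker run` (the image name `dev` and the paths here are illustrative, not my actual setup):

```python
def mount_args(paths):
    """Build docker `-v src:dst:ro` flags that mirror each path read-only."""
    args = []
    for src in paths:
        args += ["-v", f"{src}:{src}:ro"]  # same path inside, read-only
    return args

# Assemble a docker run command giving the agent read-only local context.
cmd = ["docker", "run", "--rm", "-it",
       *mount_args(["/home/sanand/code/blog", "/home/sanand/code/til"]),
       "dev"]
```

Mirroring the host path inside the container (`src:src`) keeps any absolute paths in notes and configs valid for the agent.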

AI in SDLC at PyConf

I was at a panel on AI in SDLC at PyConf. Here’s a summary of my advice:

Process. Make AI your entire SDLC loop. Record client calls, feed them to a coding agent to directly build & deploy the solution. Record your prompts, run post-mortems, and distill them into SKILLS.md files for reuse.

Prompting. Ask AI to make output more reviewable. Don’t waste time reviewing unclear output. Prefer directional feedback (feeling, emotion, intent) over implementational detail. Also give AI freedom to do things its way. Learn from that - you’ll be surprised.

Learning ...

Interactive Explainers

Given how easy it is to create interactive explainers with LLMs, we should totally do more of these! For example, I read about “Adversarial Validation” in my Kaggle Notebooks exploration. It was the first time I’d heard of it and I couldn’t understand it. So, I asked Gemini to create an interactive explainer:

Create an interactive animated explainer to teach what adversarial validation is. Provide sample code only at the end. Keep the bulk of the explainer focused on explaining the concept in simple language. ELI15 ...

Human as an Interface

People often email me questions they could have answered with ChatGPT. I just copy-paste the question, copy-paste the answer. This isn’t new. From 1998-2005, I used to do this with Google searches. Even people who have Google Maps on their phone ask me for directions. I pull out my Google Maps and tell them. They don’t even get the sarcasm. Effectively, I’m the Human-as-an-Interface (HAAI everyone!) But I learnt today that this has historical precedent. Doormen, lift operators, the waiter who recites the menu, the secretary we used to dictate to, … ...

Kick-starting a PyConf Panelist Interview

I was a panelist at the PyConf Hyderabad AI in SDLC - Panel Discussion. After that, one of the volunteers asked for a video interview. “How was the panel discussion?” he asked. Ever since I started using AI actively, my brain doesn’t work without it. So, instead of an eloquent answer, I said, “Good.” He tried again. “Um… how did you feel about it?” he asked. I searched for my feelings. Again, fairly empty in the absence of AI. “Good,” I said again. ...

Blog embeddings map

I created an embedding map of my blog posts. Each point is a blog post. Similar posts are closer to each other. They’re colored by category. I’ve been blogging since 1999 and over time, my posts have evolved.

1999-2005: mostly links. I started by link-blogging.
2005-2007: mostly quizzes, how I do things, Excel tips, etc.
2008-2014: mostly coding, how I do things, and business realities.
2015-2019: mostly nothing.
2019-2023: mostly LinkedIn, with some data and how I do things.
2024-2026: mostly LLMs.

… and this transition is entirely visible in the embedding space. ...
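The pipeline behind such a map is short. A minimal sketch of the shape of the process - the real map would use text embeddings of each post and a non-linear projector like UMAP; here random vectors and a plain PCA projection stand in, so nothing below reflects the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 384))  # stand-in: one vector per blog post

# Project to 2D: center the vectors, then take the top-2 principal components.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
xy = centered @ vt[:2].T  # (200, 2) map coordinates, one point per post
```

Plotting `xy` colored by post category (and year) is what makes the era-by-era drift visible.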

AI Palmistry

I shared a photo of my right hand with popular AI agents and asked for a detailed palmistry reading:

Apply all the principles of palmistry and read my hand. Be exhaustive and cross-check against the different schools of palmistry. Tell me what they consistently agree on and where they differ.

I was more interested in how much they agree with each other than with reality. So I shared all three readings and asked Claude: ...

Hardening my Dev Container Setup

I run AI coding agents inside a Docker container for safety. The setup is:

dev.dockerfile: builds the image
dev.sh: launches the container with the right mounts and env vars
dev.test.sh: verifies everything works

I wrote them semi-manually and they had bugs. I had GitHub Copilot + GPT-5.4 High update the tests and actually run the commands to verify the setup. Here’s what I learned from the process.

1. Make it easier to review. The first run took a long time. I pressed Ctrl+C and told Copilot to “add colored output, timing, and a live status line”. Then I re-ran. Instead of a bunch of ERROR: lines, I now got color-coded output with timing and a live status line showing what’s running. ...
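The “colored output + timing” pattern is worth stealing for any test script. The actual dev.test.sh is a shell script; this is just an illustrative Python sketch of the same idea - each check prints a color-coded PASS/FAIL line with how long it took:

```python
import time

GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"  # ANSI colors

def run_checks(checks):
    """Run (name, fn) checks; print a colored PASS/FAIL line with timing.

    Returns the number of failures, suitable for an exit code.
    """
    failures = 0
    for name, fn in checks:
        start = time.perf_counter()
        try:
            ok = bool(fn())
        except Exception:
            ok = False  # a crashing check counts as a failure
        ms = (time.perf_counter() - start) * 1000
        color, label = (GREEN, "PASS") if ok else (RED, "FAIL")
        print(f"{color}{label}{RESET} {name} ({ms:.0f} ms)")
        failures += not ok
    return failures
```

Seeing which check is slow, live, is what made the long first run reviewable instead of opaque.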

Cracking online exams with coding agents

An effective way to solve online exams is to point a coding agent at them. I use that on my Tools in Data Science course in two ways:

As a test case for my code. If my agent can solve it, good: I set the question correctly.
As a test of student ability. If it can’t, good: it’s a tough question (provided I didn’t make a mistake).

For PyConf Hyderabad, my colleague built a Crack the Prompt challenge. Crack it and you get… I don’t know… goodies? A job interview? Leaderboard bragging rights? ...

The Future of Work with AI

I often research how the world will change with AI by asking AI. Today’s session was informative. I asked Claude, roughly:

Economics changes human behavior. As intelligence cost falls to zero, here are some changes in my behavior [I listed these]. Others will have experienced behavioral changes too. Search online and synthesize behavioral changes.

It said this:

🟡 People spend time on problem framing & evaluation. AI can execute the middle. (I’m OK at this. Need to do more framing + evaluation.)
🟢 People don’t plan, they just build. (I’m prototyping a lot.)
🟢 People build personal data & context. (I’m mining my digital exhaust.)
🔴 People queue work for agents, delegating into the future. (I’m not. I need to do far more of this.)
🟢 People shift from searching to asking for answers. (I do this a lot, e.g. this post.)
🟡 People delegate junior jobs to AI and develop senior-level taste early. (Need to do more.)
🟡 People treat unresolved emotions as prompts. (Need to do more.)

Rough legend: 🟢 = stuff I know. 🟡 = I kind-of know. 🔴 = new learning. ...

LLM Comic Styles

I maintain an LLM art style gallery - prompts to style any image I generate. Since I generate several comics, I added a comic category page that includes styles like: To generate these, I asked Claude: Here are some examples of image styles I've explored. <image-styles> "2D Animation": "2D flat animation style, clean vector lines, cel-shaded coloring, cartoon proportions" "3D Animation": "Modern 3D animation render, smooth surfaces, dramatic lighting, Octane render quality, cinematic depth" ... </image-styles> In the same vein, I'd like to explore **comic** styles. Create 30 popular comic / cartoon styles, aiming for diverse aesthetics and cultural influences. Name it concisely (1-2 words) based on the source, but the description should not reference the source directly (to avoid copyright issues). Focus on the visual characteristics that define each style. Pick those KEY visual elements that will subliminally evoke the style without explicitly naming it. … followed by: ...
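Applying a gallery entry is just prompt composition. A minimal sketch, assuming styles are kept as a name-to-descriptor mapping (the descriptor string is from the gallery above; the `styled_prompt` helper is my illustration, not how the gallery actually works):

```python
# Name -> visual descriptor, as in the gallery's <image-styles> block.
STYLES = {
    "2D Animation": "2D flat animation style, clean vector lines, "
                    "cel-shaded coloring, cartoon proportions",
}

def styled_prompt(subject, style):
    """Append a named style descriptor to an image-generation prompt."""
    return f"{subject}. Style: {STYLES[style]}"
```

Keeping subject and style separate is what lets the same comic be re-rendered across all 30 styles.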

Prototyping the prototypes

I added a narrative story to my LLM Pricing chart. That makes it easier for me and others to tell the story of AI’s evolution over the last three years. Video

It was vibe-coded over two iterations. In the first version, I prompted it to:

Add a scrollytelling narrative. So, when users first visit the page, they see roughly the same thing as now (but prettier). As they scroll down, the page should smoothly move to the earliest month, and then animate month by month on scroll, explaining the key events and insights in terms of model quality and pricing. Use the data story skill to do this effectively, narrating like Malcolm Gladwell, with the visual style of The New York Times, using the education progression as a framework for measuring intelligence (read prompts.md for context). Store the narrative text in a separate JSON file and read from it. This should control the entire narrative, including what month to jump to next, what models to highlight, what insights to share, and so on. ...
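A guess at what that narrative JSON could look like: each step records which month to jump to, which models to highlight, and the insight to narrate, and the scroll position indexes into the steps. All field names and values here are illustrative, not the file the agent actually produced:

```python
import json

# Hypothetical narrative steps: the scroll handler walks through these.
narrative = [
    {"month": "2023-03", "highlight": ["model-a"],
     "insight": "Frontier quality arrives at premium prices."},
    {"month": "2024-07", "highlight": ["model-b-mini"],
     "insight": "Near-frontier quality at a fraction of the cost."},
]

def step_for(scroll_index, steps):
    """Clamp the scroll position to a valid narrative step."""
    return steps[min(max(scroll_index, 0), len(steps) - 1)]

# In practice this lives in a separate JSON file the page fetches.
text = json.dumps(narrative)
```

Keeping the narrative in data rather than code is what made the second vibe-coding iteration cheap: edit the JSON, not the page.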

Directional feedback for AI

People worry that AI atrophies skills. Also that junior jobs, hence learning opportunities, are shrinking. Can AI fill the gap, i.e. help build skills? One approach is: Do it without AI. Then have AI critique it and learn from it. (Several variations work, e.g. have the AI do it independently and compare. Have multiple AIs do it and compare. Have AI do it and you critique - but this is hard.) ...

Using game-playing agents to teach

After an early morning beach walk with a classmate, I realized I hadn’t taken my house keys. My daughter would be sleeping, so I wandered with my phone. This is when I get ideas - often a dangerous time for my students. In this case, the idea was a rambling conversation with Claude that roughly begins with: As part of my Tools in Data Science course, I plan to create a Cloudflare worker which allows students to play a game using an API. The aim is to help them learn how to build or use AI coding agents to interact with APIs to solve problems. ...

Leaked key sociology

It’s impressive how easy it is to find leaked API keys in public repositories. I asked Codex to run trufflehog on ~5,000 student GitHub accounts and (so far, after a few hours, at 15% coverage), it found quite a few.

Some are intended to be public, like Google Custom Search Engine keys.

const GOOGLE_API_KEY = "AIza...";
const GOOGLE_CX = "211a...";

Some are Gemini API keys.

api_key1 = "AIza..."

But what’s really impressive is, when I ran: ...
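At its core, this kind of scan is pattern matching. A toy version of what trufflehog does - the Google-style key shape (`AIza` plus 35 URL-safe characters) is publicly documented, but real scanners add entropy checks and live verification that this sketch omits:

```python
import re

# Key-shaped patterns to look for. Real scanners ship hundreds of these.
KEY_PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_-]{35}"),
}

def scan(text):
    """Return (kind, matched_string) pairs for every key-shaped string found."""
    hits = []
    for kind, pattern in KEY_PATTERNS.items():
        hits += [(kind, m) for m in pattern.findall(text)]
    return hits
```

Run over every file in every cloned repo, this is already enough to surface most of the accidental commits.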

Gemini CLI harness is not good enough

I’ve long felt that while the Gemini 3 Pro model is fairly good, the Gemini CLI harness isn’t. I saw an example of this today.

Me: Tell me the GitHub IDs of all students in this directory.

Gemini CLI: SearchText 'github' within ./
Found 100 matches (limited)
Sending this message (14606686 tokens) might exceed the remaining context window limit (1037604 tokens).

Me: Only send the (small) required snippets of data. Write code as required. ...
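What “only send the required snippets” amounts to in code: extract candidate GitHub IDs locally with a regex and send the model just those, not 14M tokens of files. A hedged sketch - the username pattern below is a simplification of GitHub’s actual naming rules:

```python
import re
from pathlib import Path

# Matches the username segment of a github.com URL (simplified).
GITHUB_ID = re.compile(r"github\.com/([A-Za-z0-9-]+)")

def github_ids(root):
    """Scan text files under root and return the unique GitHub usernames."""
    ids = set()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than crash
        ids |= set(GITHUB_ID.findall(text))
    return sorted(ids)
```

The whole directory collapses into a short sorted list - exactly the kind of local pre-processing a good harness should reach for on its own.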

The Nano Banana Paradox

STEP 1: I asked Nano Banana 2 (via Gemini Pro) to: Imagine and draw a photo that looks ultra realistic but on a closer look, is physically impossible, and can only exist because images are a 2D projection that we extrapolate into three dimensions. Avoid known / popular illusions or images of this kind, like Escher’s work, and create something truly original. Think and draw CAREFULLY! … six times, followed by “Suggest a name for this”. ...