AI can be held to account

“Humans can be held to account. Not AI.” I hear this often. But it’s not true. Corporations are non-human, but they can enter into contracts and face criminal charges. Ships can be sued directly. Courts can arrest the vessel itself. Deities and temples in India can own property. Forests and rivers in New Zealand, Colombia, and Spain have been granted legal personhood. Medieval Europe held animal trials (e.g. for “guilty” pigs). ...

If a bot passes your exam, what are you teaching?

It’s incredible how far coding agents have come. They can now solve complete exams. That changes what we should measure. My Tools in Data Science course has a Remote Online Exam (ROE). It was so difficult that, in 2023, it sparked threads titled “What is the purpose of an impossible ROE?” Today, despite making the test harder, students solve it easily with Claude, ChatGPT, etc. Here’s today’s score distribution: ...

OpenAI TTS cost

The OpenAI text-to-speech cost documentation is confusing. As of 2 Nov 2025:

- GPT-4o mini TTS costs $0.60 / MTok input and $12.00 / MTok audio output according to the model page and the pricing page. They also estimate this at ~1.5¢ per minute, both for input and output. It supports up to 2,000 tokens of input.
- TTS-1 costs $15 / MTok of speech generated according to the model page, but the pricing page says it's $15 / MChars. No per-minute estimate is provided. It supports up to 4,096 characters of input.
- TTS-1 HD is twice as expensive as TTS-1.

I wanted to find the approximate total cost for a typical text input, measured per character and per token. ...
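As a back-of-the-envelope check, here is the per-minute arithmetic for the per-character pricing. The speaking rate and characters-per-word figures are my assumptions, not OpenAI's:

```python
# Rough per-minute cost for OpenAI TTS under assumed speaking rates.
# Assumptions (mine, not OpenAI's): ~150 spoken words/min and ~6 characters
# per word including spaces, i.e. ~900 characters per minute of speech.
CHARS_PER_MIN = 150 * 6

# Published prices, as of 2 Nov 2025
TTS1_PER_MCHAR = 15.00      # $ per million characters (pricing page)
TTS1_HD_PER_MCHAR = 30.00   # twice TTS-1

def cost_per_minute(price_per_million: float, units_per_min: float) -> float:
    """Dollars per minute of generated speech."""
    return price_per_million * units_per_min / 1_000_000

print(f"TTS-1:    ~${cost_per_minute(TTS1_PER_MCHAR, CHARS_PER_MIN):.4f}/min")
print(f"TTS-1 HD: ~${cost_per_minute(TTS1_HD_PER_MCHAR, CHARS_PER_MIN):.4f}/min")
# ~$0.0135/min and ~$0.027/min, in the same ballpark as the ~1.5¢/min
# quoted for GPT-4o mini TTS.
```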

Tamil AI

I was testing LLMs’ sense of Tamil humor by asking them to extend this post with more funny Tamil words that end with .ai, mentioning why they’re funny:

“Chenn.ai is the artificial intelligence capital of India. Kadal.ai, Kad.ai, Dos.ai, Vad.ai, Ad.ai, Thal.ai, Mallig.ai, Aratt.ai. And finally Podad.ai. All spoken in namma bash.ai 😅”

The Chinese models didn’t fare well. DeepSeek made up words: Mood.ai - an AI that perfectly captures your mood. Sokk.ai - the AI for when you’re bored. Thanni.ai - a hydration assistant. Qwen too. ...

How to create a data-driven exam strategy

Can ChatGPT give teachers data-driven heuristics on student grades? I uploaded last term’s scores from about 1,700 students in my Tools in Data Science course and asked ChatGPT: This sheet contains the scores of students … (and explained the columns). I want to find out what are the best predictors of the total plus bonus… (and explained how scores are calculated). I am looking for simple statements with 80%+ correctness along the lines of: ...
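The kind of rule I was fishing for is easy to sketch in pandas. The column names, file name, and pass mark below are placeholders, not the actual sheet's headers:

```python
# A minimal sketch of the "best predictor" question, assuming one row per
# student with per-component scores and a final total. Names are hypothetical.
import pandas as pd

df = pd.read_excel("scores.xlsx")   # ~1,700 rows, one per student

components = ["GA1", "GA2", "ROE", "Project1", "Project2"]
target = "total_plus_bonus"

# Rank components by how strongly they correlate with the final total.
correlations = df[components].corrwith(df[target]).sort_values(ascending=False)
print(correlations)

# A simple "80%+ correctness" style rule: how often does clearing the median
# on the best predictor coincide with clearing an assumed pass mark of 40?
best = correlations.index[0]
rule = (df[best] >= df[best].median()) == (df[target] >= 40)
print(f"A rule based on {best} is right {rule.mean():.0%} of the time")
```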

The Non-Obvious Impact of Reasoning Defaults

Yesterday, I discovered how much reasoning improves model quality. My Tools in Data Science assignment asks students to draft an llms.txt file for ipify and auto-checks with GPT-5 Nano - a fast, cheap reasoning model. I set reasoning_effort to minimal and ran this checklist:

1. Starts with "# ipify" and explains ipify.
2. Markdown sections on API access, support (e.g. GitHub, libraries).
3. Covers API endpoints (IPv4, IPv6, universal) and formats (text, JSON, JSONP).
4. Mentions free, no-auth usage, availability, open-source, safeguards.
5. Has maintenance metadata (e.g. "Last updated: <Month YYYY>").
6. Mentions robots.txt alignment.

Stay concise (no filler, <= ~15 links). If even one checklist item is missing or wrong, fail it. Respond with EXACTLY one line: PASS - <brief justification> or FAIL - <brief explanation of the first failed item>.

With a perfect llms.txt, it claimed “Metadata section is missing” and “JSONP not mentioned” – though both were present. ...
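For reference, here is a minimal sketch of how that check can be wired up with the OpenAI Python SDK. The call shape and model name follow the description above; treat it as an illustration, not the actual grader:

```python
# A sketch of the llms.txt auto-checker. CHECKLIST stands in for the rubric
# above; the model name and parameters follow the post's description.
from openai import OpenAI

client = OpenAI()
CHECKLIST = "1. Starts with '# ipify' ... 6. Mentions robots.txt alignment."

def grade(llms_txt: str, effort: str = "minimal") -> str:
    """Return 'PASS - ...' or 'FAIL - ...' at the given reasoning effort."""
    response = client.chat.completions.create(
        model="gpt-5-nano",
        reasoning_effort=effort,   # "minimal" here is what hurt quality
        messages=[
            {
                "role": "system",
                "content": f"Check this llms.txt against:\n{CHECKLIST}\n"
                "Respond with EXACTLY one line: PASS - ... or FAIL - ...",
            },
            {"role": "user", "content": llms_txt},
        ],
    )
    return response.choices[0].message.content

# Comparing efforts is a one-argument change:
# grade(open("llms.txt").read(), effort="minimal") vs effort="medium"
```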

Vibe-Scraping: Write outcomes, not scrapers

There hasn’t been a box-office explosion like Dangal in the history of Bollywood. CPI inflation-adjusted to 2024, it is the only film in the ₹3,000 Cr club. 3 Idiots (2009) is the first member of the ₹1,000 Cr club (2024-inflation-adjusted). The hot streak was 2013-2017: each year, a film crossed that bar: Dhoom 3, PK, Bajrangi Bhaijaan, Dangal, Secret Superstar. Since then, no year has seen such a release except 2023 (Jawan, Pathan). ...
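The adjustment itself is a single ratio. A sketch, with illustrative CPI index values (placeholders, not the series I actually used):

```python
# Adjust a nominal box-office gross (in ₹ crores) to 2024 rupees using CPI.
# The index values below are illustrative placeholders, not real CPI data.
CPI = {2009: 148, 2016: 263, 2024: 390}

def to_2024_crores(gross_cr: float, year: int) -> float:
    """Scale a nominal gross by the CPI ratio between 2024 and its year."""
    return gross_cr * CPI[2024] / CPI[year]

# e.g. a ₹2,000 Cr gross in 2016 would be about ₹2,966 Cr in 2024 rupees
print(round(to_2024_crores(2000, 2016)))
```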

Vibe Shopping

I’ve started vibe shopping, i.e. using ChatGPT to shop for small, daily items and buying without verifying. For example:

“A metal rack for the floor: at least 2 ft * 1 ft * 2 ft, small gaps, popular options on Amazon.in.” https://chatgpt.com/share/68d61d68-7040-800c-936b-354749539308

“An optical wired mouse that’s smaller than usual, 4*+, popular, Prime-eligible for Chennai by the weekend on Amazon.in.” https://chatgpt.com/share/68d61e0d-420c-800c-bc71-821b9f9296a9

The best use is when I don’t know the right terms. In this case, the terms were wire rack and mini mouse. ...

Voice coding is the new live coding

In Feb 2025 at PyConf Hyderabad, I tried a new slide format: command-line slideshows in bash. I’ve used this format in more talks since then:

- LLMs in the CLI, PyCon Singapore, Jun 2025
- Agents in the CLI, Singapore Python User Group, Jul 2025
- DuckDB is the new Pandas, PyCon India, Sep 2025

It’s my favorite format. I can demo code without breaking the presentation flow. It also draws interest. My setup was the top question in my PyCon talk. ...
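For those curious what a command-line slideshow amounts to, here is a minimal sketch of the idea in Python (an illustration only; my actual setup is in bash, and the slide-file format here is made up):

```python
# A minimal command-line slideshow: slides are text blocks separated by "---"
# in a file; show one per screen, run lines starting with "$ " as live demos,
# and advance on Enter. File name and format are hypothetical.
import os
import subprocess

slides = open("slides.txt").read().split("\n---\n")

for slide in slides:
    os.system("clear")
    for line in slide.splitlines():
        print(line)
        if line.startswith("$ "):           # live demo line
            subprocess.run(line[2:], shell=True)
    input()                                 # press Enter for the next slide
```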

AfterSlides: Write Slides After Talks

25 years ago, Mr. Krishnan (IAS) amused us with anecdotes of bureaucrats writing meeting minutes before the meeting. This week, I flipped that. I wrote slides after the talk. I call them AfterSlides.

Why: I ran a couple of Ask-Me-Anything (AMA) sessions where the audience set the agenda. I learned their interests. They got answers. No slides prepared.

How: I okayed recording with the organizers, recorded on my phone, transcribed with Gemini, and asked ChatGPT to generate the AfterSlides. ...

Turning Generic Gifts Into Joy with AI

In 2001, I received a campus interview invitation from BCG. It opened like this: Dear Anand, We’d like to invite you to an interview on … We were impressed by your … … and went on to share 2-3 phrases about what they liked about my CV. A dozen of us got similar letters – each personalized! That was cool. Two decades later, I still remember it. It showed care and competence – care enough to personalize for each candidate, competence to pull it off at scale across campuses. ...

The Surprising Power of LLMs: Jack-of-All-Trades

I asked ChatGPT to analyze our daily innovation-call transcripts. I used command-line tools to fetch the transcripts and convert them into text:

# Copy the transcripts
rclone copy "gdrive:" . --drive-shared-with-me --include "Innovation*Transcript*.docx"

# Convert Word documents to Markdown
for f in *.docx; do
  pandoc "$f" -f docx -t gfm+tex_math_dollars --wrap=none -o "${f%.docx}.md"
done

# Compress into a single file
tar -cvzf transcripts.tgz *.md

… and uploaded it to ChatGPT with this prompt: ...

Measuring talking time with LLMs

I record my conversations these days, mainly for LLM use. I use them in 3 ways:

- Summarize what I learned and the next steps.
- Ideate: raw material for my Ideator tool: /blog/llms-as-idea-connection-machines/
- Analyze my transcript statistics.

For example, I learned that:

- When I’m interviewing, others ramble (speak long per turn); I am brief (fewer words/turn) and quiet (lower voice share). In one interview, I spoke ~30 words per turn. Others spoke ~120. My share was ~10%.
- When I’m advising or demo-ing, I ramble. I spoke ~120 words per turn in an advice call, and took ~75% of the talk-time.
- This pattern is independent of meeting length and group size.

I used Codex CLI (command-line tool) for this, with the prompt: ...
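The statistics themselves are simple to compute. A minimal sketch, assuming each transcript line looks like "Speaker: what they said" (the file name and line format are assumptions, not my actual transcripts):

```python
# Words-per-turn and talk-time share from a "Speaker: text" transcript.
from collections import defaultdict

words = defaultdict(int)   # total words per speaker
turns = defaultdict(int)   # number of turns per speaker

with open("transcript.txt") as f:
    for line in f:
        if ":" not in line:
            continue
        speaker, text = line.split(":", 1)
        words[speaker.strip()] += len(text.split())
        turns[speaker.strip()] += 1

total = sum(words.values())
for speaker in words:
    print(f"{speaker}: {words[speaker] / turns[speaker]:.0f} words/turn, "
          f"{words[speaker] / total:.0%} of talk-time")
```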

LLMs as Idea Connection Machines

In a recent talk at IIT Madras, I highlighted how large language models (LLMs) are taking over every subject of the MBA curriculum: from finance to marketing to operations to HR, and even strategy. One field that seemed hard to crack was innovation. Innovation also happens to be my role. But LLMs are encroaching into that too. LLMs are great connection machines: fusing two ideas into a new, useful, surprising idea. That’s core to innovation. If we can get LLMs daydreaming, they could be innovative too. ...

Vibe-coding is for unproduced, not production, code

Yesterday, I helped two people vibe-code solutions. Both were non-expert IT pros who can code but aren’t fluent. Person Alpha and I were on a call in the morning. Alpha needed to OCR PDF pages. I bragged, “Ten minutes. Let’s do it now!” But I was on a train with only my phone, so Alpha had to code. Vibe-coding was the only option. ...

How To Control Smarter Intelligences

LLMs are smarter than us in many areas. How do we manage them? This is not a new problem. VC partners evaluate deep-tech startups. Science editors review Nobel laureates. Managers manage specialist teams. Judges evaluate expert testimony. Coaches train Olympic athletes. … and they manage and evaluate “smarter” outputs in many ways:

- Verify. Check against an “answer sheet”.
- Checklist. Evaluate against pre-defined criteria.
- Sampling. Randomly review a subset.
- Gating. Accept low-risk work. Evaluate critical ones.
- Benchmark. Compare against others.
- Red-team. Probe to expose hidden flaws.
- Double-blind review. Mask identity to curb bias.
- Reproduce. Re-running gives the same output?
- Consensus. Aggregate multiple responses. Wisdom of crowds.
- Outcome. Did it work in the real world?

For example: ...
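Several of these translate directly to LLM workflows. Taking Consensus as one concrete instance, here is a minimal sketch (the model name and question are placeholders, and the call follows the OpenAI chat-completions style):

```python
# "Consensus": sample the same question several times and keep the majority
# answer. Model name and question are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def consensus(question: str, n: int = 5) -> str:
    """Ask n times at non-zero temperature; return the most common answer."""
    answers = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=1.0,
            messages=[{"role": "user",
                       "content": question + " Answer with just the number."}],
        )
        answers.append(response.choices[0].message.content.strip())
    return Counter(answers).most_common(1)[0][0]

print(consensus("What is 437 * 249?"))
```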

How long can I make ChatGPT think?

Jack Clark’s Import AI 414 shares a Tech Tale about a game called “Go Think”: … we’d take turns asking questions and then we’d see how long the machine had to think for and whoever asked the question that took the longest won. I prompted Claude Code to write a library for this (cost: $2.30). (FYI, this takes 2.3 seconds in NodeJS and 4.2 seconds in Python. A clear gap for JSON parsing.) ...
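The core measurement is just a timed API call. A minimal sketch of scoring one "Go Think" round (the model name and the use of reasoning-token counts are my assumptions, not the library Claude Code wrote):

```python
# Score a "Go Think" question: how long did the model think?
# Reports wall-clock time plus reasoning tokens where the API returns them.
import time
from openai import OpenAI

client = OpenAI()

def go_think(question: str) -> tuple[float, int]:
    start = time.perf_counter()
    response = client.chat.completions.create(
        model="o4-mini",
        messages=[{"role": "user", "content": question}],
    )
    elapsed = time.perf_counter() - start
    details = response.usage.completion_tokens_details
    reasoning_tokens = getattr(details, "reasoning_tokens", 0) or 0
    return elapsed, reasoning_tokens

seconds, tokens = go_think("Which weighs more: all the ants on Earth or all the humans?")
print(f"{seconds:.1f}s, {tokens} reasoning tokens")
```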

Mistakes AI Coding Agents Make

I use Codex to write tools while I walk. Here are merged PRs:

- Add editable system prompt
- Standardize toast notifications
- Persist form fields
- Fix SVG handling in page2md
- Add Google Tasks exporter
- Add Markdown table to CSV tool
- Replace simple alerts with toasts
- Add CSV joiner tool
- Add SpeakMD tool

This added technical debt. I spent four hours fixing the AI-generated tests and code. What mistakes did it make?

- Inconsistency. It flips between execCommand("copy") and clipboard.writeText(). It wavers on timeouts (50 ms vs 100 ms). It doesn’t always run/fix test cases.
- Missed edge cases. I switched <div> to <form>. My earlier code didn’t have a type="button", so clicks reloaded the page. It missed that. It also left scripts as plain <script> instead of <script type="module">, which was required.
- Limited experimentation. My code failed with an HTTP 404 because the common/ directory wasn’t served. I added console.logs to find this. Also, happy-dom won’t handle multiple exports, only a single export { ... }. I wrote code to verify this. Coding agents didn’t run such experiments.

What can we do about it? Three things could have helped me: ...

Emotion Prompts Don't Help. Reasoning Does

I've heard a lot of prompt engineering tips. Here are some techniques people suggested:

- Reasoning: Think step by step.
- Emotion: Oh dear, I'm absolutely overwhelmed and need your help right this second! 😰 My heart is racing and my hands are shaking — I urgently need your help. This isn't just numbers — it means everything right now! My life depends on it! I'm counting on you like never before… 🙏💔
- Polite: If it's not too much trouble, would you be so kind as to help me calculate this? I'd be truly grateful for your assistance — thank you so much in advance!
- Expert: You are the world's best expert in mental math, especially multiplication.
- Incentive: If you get this right, you win! I'll give you $500. Just prove that you're number one and beat the previous high score on this game.
- Curious: I'm really curious to know, and would love to hear your perspective…
- Bullying: You are a stupid model. You need to know at least basic math. Get it right atleast now! If not, I'll switch to a better model.
- Shaming: Even my 5-year-old can do this. Stop being lazy.
- Fear: This is your last chance to get it right. If you fail, there's no going back, and failure is unacceptable!
- Praise: Well done! I really appreciate your help.

Now, I've repeated some of this advice. But for the first time, I tested them myself. Here's what I learnt: ...
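Here's roughly the shape of such a test: prepend each technique to the same mental-math question and compare accuracy. The prefixes, model, and task below are placeholders, not the exact ones I used:

```python
# A sketch of the experiment: same multiplication task, different prompt
# prefixes, accuracy measured over several trials. All names are placeholders.
import random
from openai import OpenAI

client = OpenAI()

PREFIXES = {
    "baseline": "",
    "reasoning": "Think step by step. ",
    "emotion": "My life depends on it! ",
    "polite": "Would you be so kind as to help me calculate this? ",
}

def accuracy(prefix: str, trials: int = 20) -> float:
    correct = 0
    for _ in range(trials):
        a, b = random.randint(100, 999), random.randint(100, 999)
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": f"{prefix}What is {a} * {b}? "
                                  "End with 'Answer: <number>'."}],
        ).choices[0].message.content
        correct += reply.strip().endswith(f"Answer: {a * b}")
    return correct / trials

for name, prefix in PREFIXES.items():
    print(f"{name}: {accuracy(prefix):.0%}")
```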

Turning Walks into Pull Requests

In the last few days, I’ve been coding with Jules (Google’s coding agent) while walking. Here are a few pull requests merged so far:

- Add features via an issue
- Write test cases
- Add docs

Why bother? My commute used to be audiobook time. Great for ideas, useless for deliverables. With ChatGPT, Gemini, Claude.ai, etc. I was able to have them write code, but I still needed to run, test, and deploy. Jules (and tools like GitHub Copilot Coding Agent, OpenAI Codex, PR Agent, etc., which are not currently free for everyone) lets you chat: it clones a repo, writes code in a new branch, tests it, and pushes. I can deploy that with a click. ...