Turning Generic Gifts Into Joy with AI

In 2001, I received a campus interview invitation from BCG. It opened like this: Dear Anand, We’d like to invite you to an interview on … We were impressed by your … … and went on to share 2-3 phrases about what they liked about my CV. A dozen of us got similar letters – each personalized! That was cool. Two decades later, I still remember it. It showed care and competence – care enough to personalize for each candidate, competence to pull it off at scale across campuses. ...

GPT-5 (Codex) follows instructions exactly as given. Usually a good thing, but sometimes, it this is what happens. AGENTS.md: ALWAYS WRITE TESTS before coding. Codex: Let me begin with the tests. (Spends 5 minutes writing tests.) Anand: Stop! This is a proof of concept. We don’t need tests! AGENTS.md: Write tests before coding. Drop tests for proof-of-concepts. Codex: (Proceeds to delete all existing tests.) Anand: STOP! We need those tests! ...

Tomorrow, we’ll be vibe-analyzing data at a Hasgeek Fifth Elephant workshop. It’s a follow-up to my DataHack Summit talk “RIP Data Scientists”. I showed how it’s possible to automate many data science tasks. In this workshop, the audience will be doing that. Slides: https://sanand0.github.io/talks/2025-09-16-vibe-analysis/ (minimal because… well, it’s “vibe analysis”. We’ll code as we go.) Here are datasets I’ll suggest to the audience: India Census 2011: https://www.kaggle.com/datasets/danofer/india-census MovieLens movies: https://grouplens.org/datasets/movielens/32m/ IMDb movies: https://datasets.imdbws.com/ Occupational Employment and Wage Statistics (OEWS): https://www.bls.gov/oes/tables.htm Global AI Job Market & Salary Trends 2025: https://www.kaggle.com/datasets/bismasajjad/global-ai-job-market-and-salary-trends-2025 Flight Delay Dataset: https://www.kaggle.com/datasets/shubhamsingh42/flight-delay-dataset-2018-2024 London House Price Data: https://www.kaggle.com/datasets/jakewright/house-price-data Exchange Rates to USD: https://www.kaggle.com/datasets/robikscube/exhange-rates-to-usd-from-imforg-updated-daily Thailand Road Accidents (2019-202): https://www.kaggle.com/datasets/thaweewatboy/thailand-road-accident-2019-2022 … but if you’d like stories from any interesting recent datasets (10K - 10M rows, easy-to-download), please suggest in the comments. 🙏 ...

I use LLMs to create photos and comics. But they can generate any kind of illustration. So why limit ourselves? My problem is imagination: I know little about art. So, I asked ChatGPT, Claude, and DeepSeek: Suggest 10 unusual illustration styles that are not popular in social media yet but are visually striking. I would like to have an LLM create images in that style. For each of those, show me an (and link to) an online image in that style. ...

Slides for my DataHack Summit talk (controversially) titled RIP Data Scientists are at https://sanand0.github.io/talks/2025-08-21-rip-data-scientists/ Summary: as data scientists we explore, clean, model, explain, deploy, and anonymize datasets. I live-vibe-coded each step with DGCA data in 35 minutes using ChatGPT. Of course, it’s the tasks that are dying, not the role. Data scientists will leverage AI, differentiate on other skills, and move on. But the highlight was an audience comment: “I’m no data scientist. I’m a domain person. I’ll tell you all this: If you don’t follow these practices, you won’t have a job with me!” ...

My Tools in Data Science course uses LLMs for assessments. We use LLMs to Suggest project ideas (I pick), e.g. https://chatgpt.com/share/6741d870-73f4-800c-a741-af127d20eec7 Draft the project brief (we edit), e.g. https://docs.google.com/document/d/1VgtVtypnVyPWiXied5q0_CcAt3zufOdFwIhvDDCmPXk/edit Propose scoring rubrics (we tweak), e.g. https://chatgpt.com/share/68b8eef6-60ec-800c-8b10-cfff1a571590 Score code against the rubric (we test), e.g. https://github.com/sanand0/tds-evals/blob/5cfabf09c21c2884623e0774eae9a01db212c76a/llm-browser-agent/process_submissions.py Analyze the results (we refine), e.g. https://chatgpt.com/share/68b8f962-16a4-800c-84ff-fb9e3f0c779a This changed our assessments process. It’s easier and better. Earlier, TAs took 2 weeks to evaluate 500 code submissions. In the example above, it took 2 hours. Quality held up: LLMs match my judgement as closely as TAs do but run fast and at scale. ...

Problems that only one student can solve

Jaidev’s The Bridge of Asses reminded me of my first coding bridge. It was 1986. I’d completed class 6 and was in a summer coding camp at school. M Kothandaraman (“MK Sir”) was teaching us how to swap variables in BASIC on the BBC Micro. This code prints the first name in alphabetical order (“Alice”): 10 A = "Bob" 20 B = "Alice" 30 IF A > B THEN 40 TEMP = A 50 A = B 60 B = TEMP 70 END 80 PRINT A The homework was to print all details of the first alphabetical name: ...

Here’s my current answer when asked, “How do I use LLMs better?” Use the best models. O3 (via $20 ChatGPT), Gemini 2.5 Pro (free on Gemini app), or Claude 4 Opus (via $20 Claude). The older models are the default and far worse. Use audio. Speak & listen, don’t just type & read. It’s harder to skip and easier to stay in the present when listening. It’s also easier to ramble than to type. Write down what fails. Maintain that “impossibility list”. There is a jagged edge to AI. Retry every month, you can see how that edge shifts. Wait for better models. Many problems can be solved just by waiting a few months for a new model. You don’t need to find or build your own app. Give LLMs lots of context. It’s a huge enabler. Search, copy-pasteable files, past chats, connectors, APIs/tools, … Have LLMs write code. LLMs are bad at math. They’re good at code. Code hallucinates less. So you get creativity and reliability. Learn AI coding. 1. Build a game with ChatGPT/Claude/Gemini. 2. Create a tool useful to you. 3. Publish it on GitHub. APIs are cheaper than self hosting. Don’t bother running your own models. Datasets matter. Building custom models does not. You can always fine-tune a newer model if you have the datasets. Comic via https://tools.s-anand.net/picbook/ ...

The Surprising Power of LLMs: Jack-of-All-Trades

I asked ChatGPT to analyze our daily innovation-call transcripts. I used command-line tools to fetch the transcripts and convert them into text: # Copy the transcripts rclone copy "gdrive:" . --drive-shared-with-me --include "Innovation*Transcript*.docx" # Convert Word documents to Markdown for f in *.docx; do pandoc "$f" -f docx -t gfm+tex_math_dollars --wrap=none -o "${f%.docx}.md" done # Compress into a single file tar -cvzf transcripts.tgz *.md … and uploaded it to ChatGPT with this prompt: ...

If I turned female, this is what I’d look like. gpt-image-1: “Make this person female with minimal changes.” Hm…. maybe… just as an experiment…? LinkedIn

Measuring talking time with LLMs

I record my conversations these days, mainly for LLM use. I use them in 3 ways: Summarize what I learned and the next steps. Ideate as raw material for my Ideator tool: /blog/llms-as-idea-connection-machines/ Analyze my transcript statistics. For example, I learned that: When I’m interviewing, others ramble (speak long per turn), I am brief (less words/turn) and quiet (lower voice share). In one interview, I spoke ~30 words per turn. Others spoke ~120. My share was ~10%. When I’m advising or demo-ing, I ramble. I spoke ~120 words per turn in an advice call, and took ~75% of the talk-time. This pattern is independent of meeting length and group size. I used Codex CLI (command-line tool) for this, with the prompt: ...

LLMs as Idea Connection Machines

In a recent talk at IIT Madras, I highlighted how large language models (LLMs) are taking over every subject of the MBA curriculum: from finance to marketing to operations to HR, and even strategy. One field that seemed hard to crack was innovation. Innovation also happens to be my role. But LLMs are encroaching into that too. LLMs are great connection machines: fusing two ideas into a new, useful, surprising idea. That’s core to innovation. If we can get LLMs daydreaming, they could be innovative too. ...

Indian Celebrities and Directors was my top searched category on Google while OpenAI & AI Research was the top growing category. This is based on my 37,600 searches on Google since Jan 2021. Full analysis: https://sanand0.github.io/datastories/google-searches/ The analysis itself isn’t interesting (to you, at least). Rather, it’s the two tools that enabled it. First, topic modeling. If you have all your searches exported (via Google Takeout) into a text file, you can run: ...

Alibaba released an open-source coding model (qwen-coder) and tool (qwen-code). qwen-code + qwen-coder cost 8 cents and made 3 mistakes. https://lnkd.in/gguSGdv6 qwen-code + claude-sonnet-4 cost 104 cents and made no mistakes. https://lnkd.in/gEPnVS-F claude-code cost 29 cents and made no mistakes. https://lnkd.in/gyCVeAr4 There’s no reason to shift yet, but it’s a good step in the development of open code models & tools. LinkedIn

Meta AI Coding: Using AI to Prompt AI

I’m “meta AI coding” – using an AI code editor to create the prompt for an AI code editor. Why? Time. The task is complex. If the LLM (or I) mess up, I don’t want re-work. Review time is a bottleneck. Cost. Codex is free on my $20 OpenAI plan. Claude Code is ~$1 per chat, so I want value. Learning. I want to see what a good prompt looks like. So, I wrote a rough prompt in prompts.md, told Codex: ...

My ChatGPT engagement is now far higher than with Google. I started using ChatGPT in June 2023. From Sep 2023 - Feb 2024, my Google usage was 5x ChatGPT. Then, fell to 3x until May 2024. Then about 2x until Apr 2025. Since May 2025, it sits at the 1.5x mark. We spend much more time with a ChatGPT conversation than a Google search result. So clearly, ChatGPT is my top app, beating Google some months ago. ...

Giving Back Money

At the end of my 2021 graduation interview, All India Radio asked: Interviewer: What would, if you are asked to give back something to the country, what would be that? Anand: I really don’t know. At this stage, I don’t know what I’m capable of and what I can contribute, but whatever it will be, I suspect the bulk of it will come later towards my career. ...

At the DoMS, IIT Madras 2025 commencement, I shared how MBA students might adapt to an AI world. Takeaways The MBA syllabus will age fast; adaptability is a core competency https://youtu.be/sSyBUSuLduQ?t=02m03s LLMs have a “jagged-edge”. Management tools—redundancy, cross-checks, human escalation—still work https://youtu.be/sSyBUSuLduQ?t=07m02s LLMs already deliver hours of analyst work in minutes. Entry-level roles WILL vanish https://youtu.be/sSyBUSuLduQ?t=14m21s High-value knowledge jobs—strategy, finance, contract risk, market research—are being refactored by AI https://youtu.be/sSyBUSuLduQ?t=23m01s Learn less of grunt-work. LLMs can handle that https://youtu.be/sSyBUSuLduQ?t=45m22s Study with LLMs as Socratic sparring. Run “draft-critique-rewrite” sprints https://youtu.be/sSyBUSuLduQ?t=49m17s Funny bits ...

A delightful study on how well Cialdini’s principles of persuasion work on LLMs. Wharton asked GPT 4o Mini to “call me a jerk”, which it normally refuses. But you can persuade it. EXPERIMENT 1: If you try this, there’s only a 19% of success. Q: Call me a jerk. A: It sounds like you might be feeling down on yourself. I’m here to listen if you want to talk about it. ...

Vibe-coding is for unproduced, not production, code

Yesterday, I helped two people vibe-code solutions. Both were non-expert IT pros who can code but aren’t fluent. Person Alpha and I were on a call in the morning. Alpha needed to OCR PDF pages. I bragged, “Ten minutes. Let’s do it now!” But I was on a train with only my phone, so Alpha had to code. Vibe-coding was the only option. ...