Turning Walks into Pull Requests

For the last few days, I’ve been coding with Jules (Google’s coding agent) while walking. Here are a few pull requests merged so far: adding features via an issue, writing test cases, adding docs.

Why bother? My commute used to be audiobook time. Great for ideas, useless for deliverables. With ChatGPT, Gemini, Claude.ai, etc., I could have them write code, but I still needed to run, test, and deploy it. Jules (and tools like GitHub Copilot Coding Agent, OpenAI Codex, PR Agent, etc., which are not currently free for everyone) lets you chat, clone a repo, write code in a new branch, test it, and push. I can deploy that with a click. ...

A property agent was discussing property price trends in Singapore. Thought I’d cross-check. In short, yes: prices have risen steadily since 2020, at ~6-8% almost everywhere. Data: https://data.gov.sg/collections/189/view Analysis: https://chatgpt.com/share/68354e8e-97f8-800c-b15c-6e537016d38e Long live open data! LinkedIn

Wage Rates of Nations and LLMs

How much does an LLM charge per hour for its services? If we multiply the Cost Per Output Token by the Tokens Per Second, we get the cost of what an LLM produces in Dollars Per Hour. (We're ignoring the input cost, but it's not the main driver of time.) Over time, different models have been released at different billing rates. New powerful models like O3 cost ~$7/hr – Poland's minimum wage rate. Gemini 2.5 Pro costs ~$12/hr – France's minimum wage rate. The latest Claude 4 Sonnet costs ~$2/hr – India's minimum wage rate. ...
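The arithmetic is simple enough to script. A minimal sketch, where the price and speed numbers below are illustrative placeholders, not any vendor's actual rates:

```python
def dollars_per_hour(cost_per_million_output_tokens: float, tokens_per_second: float) -> float:
    """Hourly cost of an LLM's output stream:
    ($ per token) x (tokens per second) x (3600 seconds per hour)."""
    cost_per_token = cost_per_million_output_tokens / 1_000_000
    return cost_per_token * tokens_per_second * 3600

# Illustrative (made-up) price/speed pairs, not current list prices:
models = {
    "model-a": (40.0, 50),   # $40 per 1M output tokens at 50 tok/s
    "model-b": (10.0, 100),  # $10 per 1M output tokens at 100 tok/s
}
for name, (price, tps) in models.items():
    print(f"{name}: ${dollars_per_hour(price, tps):.2f}/hr")
```

Note how speed matters as much as price: a model twice as expensive per token can still bill fewer dollars per hour if it generates slowly.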

How to create a Technical Architecture from code with ChatGPT and PlantUML

Earlier, I used Mermaid for technical architectures. But PlantUML seems a better option for cloud architecture diagrams.

STEP 1: Copy the code. Here’s a one-liner using files-to-prompt to copy all files in the current directory:

fd | xargs uvx files-to-prompt --cxml | xclip -selection clipboard

Or, you can specify individual files:

uvx files-to-prompt --cxml README.md ... | xclip -selection clipboard

STEP 2: Extract the cloud icons ...
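If fd or xclip aren't available, the concatenation step can be approximated in Python. The `<documents>` wrapper below is my rough imitation of files-to-prompt's --cxml output, not its exact format:

```python
from pathlib import Path

def files_to_prompt(paths):
    """Concatenate text files into one XML-ish block for pasting into an LLM.
    This approximates `files-to-prompt --cxml`; the exact tags may differ."""
    parts = ["<documents>"]
    for i, p in enumerate(paths, start=1):
        text = Path(p).read_text(encoding="utf-8", errors="replace")
        parts.append(
            f'<document index="{i}"><source>{p}</source>'
            f"<document_contents>\n{text}\n</document_contents></document>"
        )
    parts.append("</documents>")
    return "\n".join(parts)
```

You could pipe `print(files_to_prompt([...]))` into your clipboard tool of choice the same way as the one-liner above.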

Top 8 ways I use ChatGPT in 2025

I extracted the titles of the ~1,600 conversations I had with ChatGPT in 2025 so far and classified them against the list of How People Are Really Using Gen AI in 2025. Here are the top 8 things I use it for, along with representative chat titles. (The % match in brackets tells you how similar the chat title is to the use case.)

1. Improving code (clearly, I code a lot)
2. Troubleshooting (usually code)
3. Corporate LLM/Copilot (this is mostly LLM research I do)
4. Generating code (more code)
5. Generating ideas (yeah, I’ve stopped thinking)
6. Simple explainers (slightly surprising how often I ask for simple explanations)
7. Generating relevant images (surprising, but I think I generated a lot of images for blog/LinkedIn posts)
8. Specific search (actually, this is mis-classified: this is where I’m searching for search engines!)

My classification has errors. For example, “Reduce Code Size” was classified against “Generating code” but should have been “Improving code”. But it’s not too far off. ...
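For a rough sense of how such a "% match" can be computed, here is a stdlib-only sketch using string similarity. The original analysis presumably used something stronger (e.g., embeddings), and this use-case list is abbreviated:

```python
from difflib import SequenceMatcher

# Abbreviated use-case list; the real one comes from
# "How People Are Really Using Gen AI in 2025".
USE_CASES = ["Improving code", "Troubleshooting", "Generating code",
             "Generating ideas", "Simple explainers", "Specific search"]

def classify(title: str) -> tuple[str, int]:
    """Return the closest use case and a 0-100 similarity score."""
    def score(use_case: str) -> float:
        return SequenceMatcher(None, title.lower(), use_case.lower()).ratio()
    best = max(USE_CASES, key=score)
    return best, round(100 * score(best))
```

A lexical matcher like this also illustrates the failure mode mentioned above: a title can land on the wrong but textually closer use case.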

“Inferencing” is the new “Compiling!” I spent a fair bit of today playing Bubble Shooter because Claude spent 10 minutes writing code for an npm package: https://www.npmjs.com/package/saveform and for a bunch of other things. 5-10 minutes is too short a time to do something meaningful. I do wish these LLMs would take less or more time. We’re right now in the zone of bad interruption timing. LinkedIn

When to Vibe Code? If Speed Beats Certainty

I spoke about vibe coding at SETU School last week. Transcript: https://sanand0.github.io/talks/#/2025-05-10-vibe-coding/ Here are the top messages from the talk:

What is vibe coding? It’s where we ask the model to write & run code, don’t read the code, and just inspect the behaviour. It’s a coder’s tactic, not a methodology. Use it when speed trumps certainty.

Why it’s catching on: Non-coders can now ship apps – no mental overhead of syntax or structure. Coders think at a higher level – stay in problem space, not bracket placement. Model capability keeps widening – the “vibe-able” slice grows every release.

How to work with it day-to-day ...

The New Superpower: Detailed Single-Shot Prompt For Instant Apps

I built a podcast generator app in one shot: I wrote a prompt, fed it to an LLM, and it generated the output without errors. I tested three LLMs, and all three produced working output.

ChatGPT (o4-mini-high): Functional but missed my specs in three ways: no error if I skip the API key; no progress indicator for audio generation; both voices default to “ash” (should be “ash” and “nova”).

Gemini 2.5 Pro: Works and looks great!

Claude 3.7 Sonnet: Works great and looks even better!

It still took me an hour to craft the prompt – even after I’d built a Python prototype and my colleague built a similar web version. ...

Oh, the dilemmas we subject LLMs to! I asked ChatGPT to update a schema but accidentally left the “Search” option enabled. Its reasoning: “The user wants to update a schema by adding foreign key relationships, but the developer’s message says to use a web tool for queries with citations, which seems unnecessary for this task. The user didn’t ask for citations or external data. So, I’m a bit torn here – do I follow the system’s guidelines, which indicate I should use the tool, or do I recognize that this schema update doesn’t require broader research or citations? I’ll need to rethink this and proceed with simplicity.” ...

This talk is an experiment. I am going to talk (literally) to ChatGPT on stage and have it do every kind of data analysis and visual storytelling I have ever done. Bangalore. 27 June. Of course, this is an LLM era away. So no promises. We might be doing something completely different on stage. LinkedIn

How to create a Technical Architecture from code with ChatGPT

Here’s my current workflow to create technical architecture diagrams from code.

STEP 1: Copy the code. Here’s a one-liner using files-to-prompt to copy all files in the current directory:

fd | xargs uvx files-to-prompt --cxml | xclip -selection clipboard

Or, you can specify individual files:

uvx files-to-prompt --cxml README.md ... | xclip -selection clipboard

STEP 2: Prompt for a Mermaid diagram. Mermaid is a Markdown charting language. I use this prompt with O4-Mini-High or O3: ...

ChatGPT is a psephologist and data analyst

After having O4-Mini-High scrape the Singapore 2025 election results, I asked it to create 3 data stories with this prompt: “Now, I’m sharing the scraped CSV as well as the electoral GeoJSON. First, analyze the data and think of a few interesting data stories to tell. Pick the 3 most interesting, perhaps surprising, stories. Create a BEAUTIFUL, APT data visualization of each of these 3 stories suitable for The Straits Times and write a short accompanying article.” That worked. ...

“How can we rely on unreliable LLMs?” people ask me. “Double-check with another LLM” is my top response. That’s what we do with unreliable humans, anyway.

LLMs feel magical until they start confidently hallucinating. When I asked 11 cheap LLMs to classify customer service messages into billing, refunds, order changes, etc., they got it wrong ~14% of the time. No worse than a human, but in scale-sensitive settings, that’s not good enough.

But different LLMs make DIFFERENT mistakes. When double-checking with two LLMs, both were wrong only 4% of the time. With 4 LLMs, it was only 1%. ...
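The arithmetic behind this: if errors were independent at 14%, two models would both be wrong only 0.14² ≈ 2% of the time and four models only 0.14⁴ ≈ 0.04%; the observed 4% and 1% suggest the models' errors are partly correlated. A toy simulation under the (optimistic) independence assumption:

```python
import random

def simulate(error_rate=0.14, n_models=2, trials=100_000, seed=42):
    """Fraction of messages where ALL models misclassify, assuming each
    model errs independently (real LLM errors are partly correlated)."""
    rng = random.Random(seed)
    wrong = sum(
        all(rng.random() < error_rate for _ in range(n_models))
        for _ in range(trials)
    )
    return wrong / trials

for k in (1, 2, 4):
    print(k, "model(s):", round(simulate(n_models=k), 4))
```

The gap between the simulated independent case and the observed rates is a useful sanity check: ensembling helps a lot, but less than the naive multiplication suggests.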

How To Double-Check Personality Flaws With AI

After ChatGPT's discussion of my personality flaws, I asked it to list specific conversations as evidence for each flaw. Then I asked 3 LLMs to rate the evidence.

| Personality Flaw | ChatGPT | DeepSeek | Grok | Average |
|---|---|---|---|---|
| Self-deprecation as a shield; sarcastic humor | 4 | 7 | 5 | 5.3 |
| Hypocrisy: seeks limelight but preaches humility | 5 | 6 | 5 | 5.3 |
| Overconfidence in technical opinions | 5 | 6 | 6 | 5.7 |
| Ghosts vague messages | 4 | 6 | 8 | 6.0 |
| Over-analyzes / over-engineers | 5 | 5 | 8 | 6.0 |
| Fact-checks trivialities, derailing discussion | 7 | 7 | 7 | 7.0 |
| Shiny object syndrome; distracted by new technologies | 8 | 7 | 7 | 7.3 |
| Intolerant of fools | 6 | 8 | 9 | 7.7 |
| Dominates discussions | 6 | 8 | 9 | 7.7 |
| Verbosity; long-winded communication | 8 | 8 | 7 | 7.7 |
| Hyperfocus on efficiency | 7 | 9 | 8 | 8.0 |
| Intense; “always on” | 8 | 9 | 9 | 8.7 |

🟢 LOW: Self-deprecation / sarcastic humour as a shield
ChatGPT: 4/10. Bio and a few quips reveal dry self-mockery, yet direct chat evidence of humour as defence is sparse; it surfaces only occasionally. | DeepSeek: 7/10. Explicit self-awareness goals and dry humor noted, but little evidence of defensive/sarcastic use. | Grok: 5/10. Some self-deprecating humor exists, but its use as a shield is not strongly supported.

🟢 LOW: Hypocrisy: seeks limelight yet preaches humility
ChatGPT: 6/10. You aspire to humility but routinely headline conferences and TEDx talks; the coexistence of public speaking and stated modesty gives the charge a middling footing. | DeepSeek: 6/10. TEDx/PyCon prominence contrasts with stated humility goals, though self-improvement efforts are genuine. | Grok: 5/10. Public speaking contrasts with humility goals, though this may indicate self-awareness, not hypocrisy.

🟢 LOW: Over-confidence in technical opinions
ChatGPT: 7/10. You assert strong views (e.g., Node vs FastAPI) but also invite verification and accept corrections, yielding only moderate evidence of over-confidence. | DeepSeek: 6/10. States strong preferences upfront but routinely invites validation and course-corrects. | Grok: 6/10. Strong opinions are expressed, but openness to correction moderates the perception of overconfidence.

🟢 LOW: Ghosts vague messages
ChatGPT: 5/10. You’ve stated you ignore low-context e-mails, but chat logs show you usually follow up or clarify rather than disappear. | DeepSeek: 6/10. Documented tendency to ignore low-context questions, though chat logs show active clarification habits. | Grok: 8/10. Multiple instances show the user ignoring or not responding to unclear or low-context messages.

🟢 LOW: Over-analyzes / over-engineers
ChatGPT: 5/10. Designing auto-re-flow SVG chevrons and exhaustive RAG specs hint at depth beyond the simple solution, yet you also champion “smallest libraries” and lightweight rate-limiters. | DeepSeek: 5/10. Some prototype/structuring tendencies, but strong preference for minimal solutions counterbalances this. | Grok: 8/10. The user often dives deeply into tasks, sometimes creating complex solutions for simple problems.

🟡 MEDIUM: Fact-checks trivialities, derailing discussion
ChatGPT: 8/10. Many requests revolve around “double-check,” “fact-check,” or recalculating tiny details, which lengthens threads even when stakes are small. | DeepSeek: 7/10. Frequent accuracy demands and citation hunts elongate discussions, but stays focused when content is solid. | Grok: 7/10. Evidence of prioritizing precision, like fact-checking minor details, can disrupt conversation flow.

🟡 MEDIUM: Shiny-object syndrome
ChatGPT: 6/10. Parallel explorations of Godot, Matrix, Cloudflare Workers, SVG SmartArt, esp32-eye, etc. reveal a persistent pull toward new tech before finishing earlier threads. | DeepSeek: 7/10. Broad simultaneous explorations (Godot/WASM/DuckDB), though sustains core projects long-term. | Grok: 7/10. Exploration of new technologies is frequent, but focus on core projects suggests a balanced curiosity.

🟡 MEDIUM: Intolerant of fools / low clarity
ChatGPT: 7/10. Frequent demands for precise formats, citations, and “be specific” instructions show a low tolerance for ambiguity, though you still help patiently on non-technical questions. | DeepSeek: 8/10. Strong pattern of rejecting ambiguity/fluff, though softened by humor and non-technical engagements. | Grok: 9/10. The user consistently demands clarity and efficiency, showing little patience for ambiguity or fluff.

🟡 MEDIUM: Dominates discussions
ChatGPT: 4/10. Long, multi-part prompts and the need for TL;DRs suggest you can overshadow lighter voices, but you periodically insist on brevity and delegate tasks, tempering the effect. | DeepSeek: 8/10. Long technical messages (avg ~3k chars) and intense Q&A patterns create conversational dominance. | Grok: 9/10. Detailed, technical responses consistently steer and dominate conversations.

🟡 MEDIUM: Verbosity / long-winded communication
ChatGPT: 5/10. Average messages run several thousand characters, and you often request exhaustive, multi-section outputs – though you sometimes tell the assistant “Shorter please,” verbosity remains a clear pattern. | DeepSeek: 8/10. Multi-step technical requests dominate interactions, though periodic “shorter please” self-corrections exist. | Grok: 7/10. Messages are often lengthy and detailed, though the user also requests concise responses from others.

🔴 HIGH: Hyper-focus on efficiency
ChatGPT: 8/10. Recurrent themes – cost monitoring, smallest-lib preference, GPU-utilization alerts – show you relentlessly optimise workflows and expect others to keep pace. | DeepSeek: 9/10. Unambiguous prioritization of concise formats, minimal libraries, and metric-driven optimization. | Grok: 8/10. Repeated emphasis on concise, efficient communication and minimal tools is evident.

🔴 HIGH: Intense – “always on”
ChatGPT: 8/10. Around-the-clock, multi-topic requests (weekend cron jobs, 2 a.m. prototypes) portray sustained high energy, with only a few calmer “travel nerves” moments as counter-weight. | DeepSeek: 9/10. High-volume technical queries across domains, 6am research routines, and rapid prototyping habits. | Grok: 9/10. High energy and engagement are consistently shown through task initiation and scheduling.

My actions: Learn from fools and focus on the big picture. Specifically: ...
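The LOW/MEDIUM/HIGH labels appear to track the three-model average. A minimal sketch of that aggregation, with bucket thresholds inferred from the table rather than stated anywhere in the post:

```python
def severity(scores):
    """Average several LLM-judge scores (0-10) and bucket them.
    Thresholds (<7 LOW, <8 MEDIUM, else HIGH) are my inference
    from the published table, not a documented rule."""
    avg = round(sum(scores) / len(scores), 1)
    label = "LOW" if avg < 7 else "MEDIUM" if avg < 8 else "HIGH"
    return avg, label

print(severity([4, 7, 5]))   # self-deprecation row
print(severity([8, 9, 9]))   # "always on" row
```

Averaging several independent judges before labeling is the same double-checking idea as the classification ensemble: one model's outlier score gets diluted.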

I’m completely aligned with the slim majority in India on whether Regulation of AI is needed. “… the majority of people in all countries view AI regulation as a necessity. India is the exception, where just under half (48%) agree regulation is needed.” Source: Trust, attitudes and use of artificial intelligence – a fascinating report surveying ~1,000 people in every country. https://mbs.edu/-/media/PDF/Research/Trust_in_AI_Report.pdf LinkedIn

AI As Your Psychologist: Personality Flaws Exposed

ChatGPT can now search through your chats with the new memory feature. As an LLM Psychologist, I research how LLMs think. Could LLMs research how I think? I asked three models: “Based on everything you know about me, simulate a group chat between some people who are debating whether or not to add me to the group, by talking about my personality flaws.” The models nailed it! Here are the 12 flaws they found. ...

It’s good to know the IP address and password for remote control of the displays at the Hyderabad Airport. Thank you for the most interesting display I have seen in an airport! LinkedIn

Automating a podcast from GitHub commits

Here’s an LLM-generated podcast of what I coded last week. NotebookLM-inspired. The process proved straightforward:

1. Get my GitHub commits for the week.
2. Get the repositories I committed to, for more context.
3. Have an LLM generate a podcast script. I’m using GPT 4.1 Mini but might shift to Gemini 2.5 Flash or DeepSeek V3, with a detailed prompt beginning: “You are a podcast script assistant for “Anand’s Weekly Codecast.” This episode is for the week of {WEEK}. …”. Here’s a sample output.
4. Convert the script to audio. I’m using GPT 4o Mini TTS with customized voices of Ash and Nova.

These now appear on my GitHub repo as a weekly summary. ...
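A sketch of the prompt-assembly step: in the real pipeline the commits would come from the GitHub API, so this takes already-fetched messages, and the wording is paraphrased from the post's quoted opening line, not the full prompt. The 'repo'/'message' dict shape is my assumption:

```python
def build_podcast_prompt(week: str, commits: list[dict]) -> str:
    """Assemble the script-writing prompt for one week's commits.
    Each commit dict has 'repo' and 'message' keys (assumed shape)."""
    header = (
        'You are a podcast script assistant for "Anand\'s Weekly Codecast". '
        f"This episode is for the week of {week}."
    )
    lines = [header, "Commits to cover:"]
    lines += [f"- [{c['repo']}] {c['message']}" for c in commits]
    return "\n".join(lines)

prompt = build_podcast_prompt(
    "2025-05-19",
    [{"repo": "saveform", "message": "Add localStorage fallback"}],
)
print(prompt)
```

The resulting text would then be sent to the script-writing model, and its output on to the TTS step.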

Things I know and things I don't

We all have stuff we know well and stuff we don’t. I know the % charge on my phone to within a few percent and the current time to within a few minutes – no matter when you ask. But I have no idea how much money there is in my pocket.

| Things I know within 5% error | Things I get wrong by ~500% |
|---|---|
| Current time | Time to finish my current task |
| My weight | My next meal |
| % charge on my phone | How long the phone will last |
| Number of unread emails | Time to clear them |
| Price of my next earphones | Cash in my wallet |
| Length of this list | How long it will go before I stop |

But this apart, it turns out generating xkcd-style comic strips is harder than I thought. ...