TDS Jan 2026 GA1 released

Graded Assignment 1 (GA1) for the Tools in Data Science course is released and due Sun 15 Feb 2026. See https://exam.sanand.workers.dev/tds-2026-01-ga1 for the assignment. If you already started, you might notice some questions have changed.

Why is GA1 changing? Because some questions don't work. For example:

- We replaced Claude Artifacts with a Vercel question because Claude won't allow a proxy anymore.
- A question had unintentionally wrong instructions. (Some questions have intentionally wrong instructions, but those are, …um… intentional.)
- Someone changed an API key.
- … etc.

When will GA1 stabilize? Probably by end of day, Sun 9 Feb 2026? ...

Migrating TDS from Docsify to Hugo

This morning, I migrated my Tools in Data Science course page from Docsify to Hugo using Codex. Why? Because Docsify was great for a single term, but with multiple terms, the archives became complex. I still could have made it work, but it felt like time to move to a static site generator. I don't know how Hugo or Go work. I didn't look at the code. I just gave Codex instructions and it did the rest. This gives me a bit more confidence that educators can start creating their own course sites without needing coding skills or platforms. Soon, they might not be stuck with LMSs either - they can build their own. ...

RIP, Data Engineers

As AI marches along, another role at risk is the data engineer / database administrator. (Data scientists are already feeling the heat.) A common task for data engineers is to analyze SQL queries - to optimize and standardize them. Pavan used Antigravity to analyze 1,500 SQL queries and found:

- 30% of queries are purely headcount / volume related. Much more than revenue (25%) or engagement (15%). That's a sign of a tactical culture.
- 70% of the queries are about "What happened yesterday?" rather than "What will happen tomorrow?" - again, a tactical culture.

Here's the analysis. ...
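The kind of triage described above can be sketched as a simple keyword-based classifier. This is a minimal illustration of the approach, not Antigravity's actual method - the categories, keywords, and sample queries are all my own assumptions.

```python
# Hypothetical keyword lists per category - illustrative only, not the
# taxonomy used in the actual analysis.
CATEGORIES = {
    "headcount": ["headcount", "employee", "count(*)", "staff"],
    "revenue": ["revenue", "sales", "billing", "invoice"],
    "engagement": ["login", "session", "click", "dau"],
}

def classify(query: str) -> str:
    """Return the first category whose keywords appear in the query."""
    q = query.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in q for k in keywords):
            return category
    return "other"

# Toy queries standing in for the 1,500 real ones.
queries = [
    "SELECT COUNT(*) FROM employees WHERE dept = 'eng'",
    "SELECT SUM(amount) FROM invoices WHERE month = '2026-01'",
    "SELECT user_id FROM sessions WHERE date = CURRENT_DATE - 1",
]

# Share of queries per category, as in the 30% / 25% / 15% breakdown.
shares = {c: sum(classify(q) == c for q in queries) / len(queries)
          for c in CATEGORIES}
```

In practice an LLM can label queries far more flexibly than keywords, but the aggregation step - counting labels to expose a "tactical culture" - looks the same.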

Rise of the Indian TV Series

If you look at the IMDb titles with a 9+ rating and 50K+ votes this decade, there are only 4 entries. Every single one of them is an Indian TV series.

Title                                 Votes    Rating
Aspirants                             316,390  9.1
Scam 1992: The Harshad Mehta Story    166,400  9.2
Sandeep Bhaiya                        76,586   9.1
Sapne Vs Everyone                     74,342   9.3

This is a new phenomenon. Last decade, there was only one Indian TV series in the same list: TVF Pitchers. ...
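The filter behind this list is easy to reproduce. Here is a sketch using toy records in place of IMDb's public dumps (title.ratings.tsv.gz and title.basics.tsv.gz); the field names follow those datasets, the threshold values come from the text, and the fifth record is an invented counter-example.

```python
# Toy records standing in for the merged IMDb ratings/basics data.
# Votes and ratings for the four series are taken from the table above;
# "Some Other Show" is a made-up title that fails the rating cutoff.
titles = [
    {"primaryTitle": "Aspirants", "averageRating": 9.1,
     "numVotes": 316_390, "startYear": 2021},
    {"primaryTitle": "Scam 1992: The Harshad Mehta Story",
     "averageRating": 9.2, "numVotes": 166_400, "startYear": 2020},
    {"primaryTitle": "Sandeep Bhaiya", "averageRating": 9.1,
     "numVotes": 76_586, "startYear": 2023},
    {"primaryTitle": "Sapne Vs Everyone", "averageRating": 9.3,
     "numVotes": 74_342, "startYear": 2023},
    {"primaryTitle": "Some Other Show", "averageRating": 8.9,
     "numVotes": 500_000, "startYear": 2022},
]

# 9+ rating, 50K+ votes, released this decade.
top = [t for t in titles
       if t["averageRating"] >= 9.0
       and t["numVotes"] >= 50_000
       and t["startYear"] >= 2020]
```

Run against the full dumps, the same three conditions yield exactly the four series listed.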

Gemini 3 Flash OCRs Dilbert accurately

Scott Adams, the author of Dilbert, passed away last month. While his work will live on, I was curious about the best way to build a Dilbert search engine. The first step is to extract the text. Pavan tested over half a dozen LLMs on ~30 Dilbert strips to see which one transcribed them best. Here are the results. Summary: Gemini 3 Flash does the best, and would cost ~$20 to process the entire Dilbert archive. But if you want a local solution, Qwen 3 VL 32b is the best. ...

When to use which Gemini mode

I continue to be impressed by Gemini 3 and it's become my default agent. It writes in simpler language than ChatGPT (almost as eloquent as Claude), has much larger limits, and, of course, is unbeaten at generating images.

The Gemini app has 3 modes: Fast, Thinking, and Pro. Here's when to use each:

- Simple task, e.g. grammar check, translate, summarize, or a basic question? Use Fast. Pro overthinks.
- Multi-step logic, e.g. planning a trip with constraints, checking 15 emails, or identifying a subtle error in code? Use Thinking. Flash-based thinking beats Pro.
- Large input, e.g. a 300-page PDF, 2 hours of video, etc.? Use Pro. It uses the 1M+ token window well.
- Complex, high-stakes problem, e.g. PhD-level science or a legal contract review? Use Pro.

If you hit your Pro limit (which is pretty high!), just switch to Thinking, which is smart enough for most jobs anyway. ...

Breaking Rules in the Age of AI

Several educators have AI-enabled their courses:

- David Malan's Harvard CS50 provides an AI-powered "rubber duck debugger" trained on course-specific materials.
- Mohan Paturi at UC San Diego has deployed AI tutors to his students.
- Ethan Mollick at Wharton uses AI as tutor, coach, teammate, simulator, even student, and runs simulations.
- Jeremy Howard's Fast.ai encourages students to use LLMs to write code, with a strict verification loop.
- Andrew Ng's DeepLearning.AI integrates a chatbot into the platform, next to code cells, to handle syntax errors and beginner questions.

But no one seems to have eliminated reading material, nor added an "Ask AI" button to solve each question, nor run it at my scale (~3,000 students annually). ...