The Sassy AI Devil’s Advocate

I gave ChatGPT a custom instruction: Play Devil’s advocate to the user, beginning with “Playing Devil’s Advocate, …” It helps me see my mistakes. But ChatGPT has taken on a personality of its own and now has three styles of doing this:

- How about… – It suggests a useful alternative.
- Are you sure…? – It thinks you’re wrong and warns you of risks.
- Yeah, right… – It knows you’re wrong and rubs it in.

(Jeeves, the butler, would be proud.) Here are some examples. ...
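For anyone who wants the same behaviour outside ChatGPT’s custom-instructions UI, here is a minimal sketch that applies the instruction as a system prompt via the OpenAI Python SDK; the model choice and the example user message are illustrative, not from the post.

```python
# Minimal sketch: the Devil's-advocate custom instruction as a system prompt.
# Assumes the OpenAI Python SDK; the model and example message are illustrative.
from openai import OpenAI

client = OpenAI()

DEVILS_ADVOCATE = (
    "Play Devil's advocate to the user, "
    "beginning with 'Playing Devil's Advocate, ...'"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model works
    messages=[
        {"role": "system", "content": DEVILS_ADVOCATE},
        {"role": "user", "content": "I plan to rewrite the whole app from scratch."},
    ],
)
print(response.choices[0].message.content)
```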

Features actually used in an LLM playground

At Straive, only a few people have direct access to ChatGPT and similar large language models. We use a portal, LLM Foundry, to access LLMs. That makes it easier to prevent and track data leaks. The main page is a playground to explore models and prompts. Last month, I tracked which features were used the most. (The numbers show how many times each feature was clicked.)

A. Attaching files was the top task. People usually use local files as context when working with LLMs. ...
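The tally itself can be as simple as counting click events per feature. A sketch, assuming a hypothetical JSON-lines usage log (events.jsonl with one {"feature": ...} record per click); LLM Foundry’s actual logging format isn’t shown in the post.

```python
# Count clicks per playground feature from a hypothetical JSON-lines event log.
import json
from collections import Counter

clicks = Counter()
with open("events.jsonl") as f:
    for line in f:
        clicks[json.loads(line)["feature"]] += 1

for feature, count in clicks.most_common():
    print(f"{feature}: {count}")
```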

“Wait, That’s My Mic!”: Lessons from an AI Co-Host

I spoke at LogicLooM this week, with ChatGPT as my co-panelist. It was so good, it ended up stealing the show.

Preparation

Co-hosting with an AI was one of my goals this year. I tried several methods:

- ChatGPT’s advanced voice mode: Lets you interrupt it. But if you pause, it replies immediately. Muting caused the app to hang.
- Realtime API: Gave me control of pauses and custom prompts, but used gpt-4o-realtime-preview (not as good as o1).
- Standard voice with o1 on Desktop: Worked best. It transcribes my speech, sends it to o1, and speaks back (a sketch of this loop follows the excerpt). There’s a lag, but it feels like it’s thinking.

I prepped the chat with this prompt: ...
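That transcribe-reason-speak loop can be approximated directly against the API. A minimal sketch assuming the OpenAI Python SDK; the Whisper and TTS model names and the voice are placeholders, not the exact setup used in the talk.

```python
# Sketch of a "standard voice" turn: transcribe the question, ask o1, speak the reply.
# Assumes the OpenAI Python SDK; whisper-1, tts-1, and the voice are illustrative.
from openai import OpenAI

client = OpenAI()

def co_host_turn(audio_path: str, history: list) -> str:
    # 1. Transcribe the human panelist's recorded question.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. Send the transcript, with prior turns, to the reasoning model.
    history.append({"role": "user", "content": transcript.text})
    reply = client.chat.completions.create(model="o1", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})

    # 3. Speak the answer back with text-to-speech.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
    speech.write_to_file("reply.mp3")
    return answer
```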

Launching an app only with LLMs and failing

Zohaib Rauf suggested using LLMs to spec code and then using Cursor to build it (via Simon Willison). I tried it. It’s promising, but my first attempt failed.

I couldn’t generate a SPEC.md using LLMs

At first, I started writing what I wanted:

This application identifies the drugs, diseases, and symptoms, as well as the emotions, from an audio recording of a patient call in a clinical trial.

… and then went on to define the EXACT code structure I wanted. So I spent 20 minutes spec-ing our application structure, 20 minutes spec-ing our internal LLM Foundry APIs, and 40 minutes detailing every step of how I wanted the app to look and interact. ...
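For context, the behaviour the spec describes could be prototyped in a few lines: transcribe the call, then ask an LLM to pull out the entities and emotions. This is a sketch under assumed names (direct OpenAI calls, whisper-1, gpt-4o), not the actual app, which targets the internal LLM Foundry APIs.

```python
# Rough sketch of what the spec describes: transcribe a patient call, then extract
# drugs, diseases, symptoms, and emotions as JSON. Model names and the direct
# OpenAI calls are assumptions; the real app uses internal LLM Foundry APIs.
import json
from openai import OpenAI

client = OpenAI()

def analyse_call(audio_path: str) -> dict:
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    prompt = (
        "From this clinical-trial patient call, list the drugs, diseases, symptoms, "
        "and the caller's emotions as JSON with those four keys.\n\n" + transcript.text
    )
    reply = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(reply.choices[0].message.content)
```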