2025-09-30 08:00:00
I recently had the pleasure of speaking with Ben Lorica on O'Reilly's Generative AI in the Real World podcast about how software applications are changing in the age of AI. We discussed a number of topics including:
You can listen to the podcast Generative AI in the Real World: Luke Wroblewski on When Databases Talk Agent-Speak (29min) on O'Reilly's site. Thanks to all the folks there for the invitation.
2025-09-26 08:00:00
In his How to solve the right problem with AI presentation at Future Product Days, Dave Crawford shared insights on how to effectively integrate AI into established products without falling into common traps. Here are my notes from his talk:
2025-09-26 08:00:00
In her talk Reveal the Hidden Forces Driving User Behavior at Future Product Days, Sarah Thompson shared insights on how to leverage behavioral science to create more effective user experiences. Here are my notes from her talk:
2025-09-25 23:00:00
In her The AI Adoption Gap: Why Great Features Go Unused talk at Future Product Days in Copenhagen, Kate Moran shared insights on why users don't utilize AI features in digital products. Here are my notes from her talk:
2025-09-25 17:00:00
In his talk The Future of Product Creators at Future Product Days in Copenhagen, Tobias Ahlin argued that divergent opinions and debate, not just raw capability, are the missing factors for achieving useful outcomes from AI systems. Here are my notes from his presentation:
2025-09-16 08:00:00
With each new technology platform shift, what defines a software application changes dramatically. Even though we're still in the midst of the AI shift, there are emergent properties that seem to be shaping what at least a subset of AI applications (let's call them chat apps) might look like going forward.
At a high level, applications are defined by the systems they're discovered and operated in. This frames what capabilities they can utilize, their primary inputs and outputs, and more. That sounds abstract, so let's make it concrete. Applications during the PC era were compiled binaries sold as shrink-wrapped software that used local compute and storage, monitors as output, and the mouse and keyboard as input.
These capabilities defined not only their interfaces (GUI) but their abilities as well. The same is true for applications born of the AI era. How they're discovered and where they operate will also define them. And that's particularly true of "chat apps".
So what's a chat app? Most importantly, a chat app's compute engine is an AI model, which means all the capabilities of the model also become capabilities of the app. If the model can translate from one language to another, the app can. If a model can generate PDF files, the app can. It's worth noting that "model" could be a combination of AI models (with routing), prompts, and tools. But to an end user, it would appear as a singular entity like ChatGPT or Claude.
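To sketch that idea in code: the names and routing rule below are invented for illustration, but they show how several models and tools can sit behind a single entry point that looks like one model to the user.

```typescript
// A "model" that is really several models and tools behind a router, yet
// presents a single ask() entry point. All names here are illustrative.
type ModelCall = (prompt: string) => Promise<string>;

interface CompositeModel {
  translator: ModelCall;   // a model tuned for translation
  pdfGenerator: ModelCall; // a tool that renders PDF files
  general: ModelCall;      // a general-purpose chat model
}

async function ask(model: CompositeModel, prompt: string): Promise<string> {
  // Naive keyword routing; real systems would use a classifier model.
  if (/translate/i.test(prompt)) return model.translator(prompt);
  if (/\bpdf\b/i.test(prompt)) return model.pdfGenerator(prompt);
  return model.general(prompt);
}
```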
When an application runs in Claude or ChatGPT, it's like running in an OS (Windows during the PC era, iOS during the mobile era). So how do you run a chat app in an AI model like Claude and what happens when you do? Today an application can be added to Claude as a "connector", probably running as a remote Model Context Protocol (MCP) server. The process involves clicking through some forms and dialog boxes, but once set up, Claude has the ability to use the application on behalf of the person that added it.
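For the curious, here's a minimal sketch of the server side of such a connector, using the MCP TypeScript SDK's Streamable HTTP transport in its stateless mode. The endpoint path, server name, and single greeting tool are placeholder assumptions, not anything Claude requires:

```typescript
import express from "express";
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

// Build a tiny MCP server exposing one illustrative tool.
function buildServer(): McpServer {
  const server = new McpServer({ name: "example-chat-app", version: "1.0.0" });
  server.tool(
    "greet",
    "Return a greeting for the given name",
    { name: z.string() },
    async ({ name }) => ({
      content: [{ type: "text", text: `Hello, ${name}!` }],
    })
  );
  return server;
}

const app = express();
app.use(express.json());

// Stateless mode: a fresh server and transport per request.
app.post("/mcp", async (req, res) => {
  const server = buildServer();
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // no session tracking in this sketch
  });
  res.on("close", () => {
    transport.close();
    server.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```

Once deployed at a public URL, that /mcp endpoint is the link you'd paste into Claude's connector dialog.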
As mentioned above, the capabilities of Claude are now the capabilities of the app. Claude can accept natural language as input; so can the app. When people upload an image to Claude, it understands the image's content; so does the app. Claude can search and browse the Web; so can the app. The same is true for output. If Claude can turn information into a PDF, so can the app. If Claude can add information to Salesforce, so can the app. You get the idea.
So what's left for the application to do? If the AI model provides input, output, and processing capabilities, what does a chat app do? In its simplest form, a chat app can be a database that stores the information the app uses for input, output, and processing, plus a set of dynamic instructions that tell the AI model how to use it.
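Here's a rough sketch of that simplest form: a SQLite database exposed through a single query tool, plus standing instructions handed to the model when it connects. The schema-agnostic query tool and read-only guard are my assumptions about one reasonable shape, not a prescribed design:

```typescript
import Database from "better-sqlite3";
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

const db = new Database("app.db");

// The "app" is just data plus instructions on how to use it.
const server = new McpServer(
  { name: "database-chat-app", version: "1.0.0" },
  {
    // Standing guidance the model receives when it connects.
    instructions:
      "This server stores the user's records. Inspect the schema via " +
      "sqlite_master, then answer questions by issuing SELECT queries.",
  }
);

// One tool can be enough: let the model read the database directly.
server.tool(
  "query",
  "Run a read-only SQL query against the app's database",
  { sql: z.string() },
  async ({ sql }) => {
    if (!/^\s*select/i.test(sql)) {
      return { content: [{ type: "text", text: "Only SELECT queries are allowed." }] };
    }
    try {
      const rows = db.prepare(sql).all();
      return { content: [{ type: "text", text: JSON.stringify(rows, null, 2) }] };
    } catch (err) {
      return { content: [{ type: "text", text: `Query failed: ${String(err)}` }] };
    }
  }
);
// Wire this server to a transport as in the earlier sketch.
```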
As always, a tangible example makes this clear. Let's say I want to make a chat app for tracking the concerts I'm attending. Using a service like AgentDB, I can start with an existing file of concerts I'm tracking or ask a model to create one. I then have a remote MCP server backed by a database of concert information and a dynamic template that continually instructs an AI model on how to use it.
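For illustration, here's what the database and "dynamic template" behind such a server might look like. The schema and the way instructions are derived from it are my guesses; AgentDB generates its own:

```typescript
import Database from "better-sqlite3";

const db = new Database("concerts.db");

// A plausible schema for the concert tracker (an assumption, not AgentDB's).
db.exec(`
  CREATE TABLE IF NOT EXISTS concerts (
    id     INTEGER PRIMARY KEY,
    artist TEXT NOT NULL,
    venue  TEXT,
    city   TEXT,
    date   TEXT,              -- ISO 8601, e.g. 2025-10-12
    status TEXT DEFAULT 'upcoming'
  );
`);

// A "dynamic template": instructions regenerated from the live schema so the
// model always sees what the database currently looks like.
function buildInstructions(): string {
  const schema = db
    .prepare("SELECT sql FROM sqlite_master WHERE type = 'table'")
    .all()
    .map((row: any) => row.sql)
    .join("\n");
  return (
    `You manage a concert tracker. Current schema:\n${schema}\n` +
    "Add rows when the user mentions a new show; query rows to answer questions."
  );
}
```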
When I add that remote MCP link to Claude, I can: upload screenshots of upcoming concerts to track (using Claude's image parsing ability); generate a calendar view of all my concerts (using Claude's coding ability); find additional information about an upcoming show (using Claude's Web search tools); and so on. All of these capabilities of the Claude "model" work with my database plus template, aka my chat app.
You can make your own chat apps instantly by using AgentDB to create a remote MCP link and adding it to Claude as a connector. It's not a very intuitive process today, but it will very likely feel as simple as using a mobile app store in the not-too-distant future. At that point, chat apps will probably proliferate.