Since I started using LLM chatbots, my enthusiasm for them has been a bit of a rollercoaster. At times, they feel like they slow me down. Other times, I stumble on a prompting approach that unlocks something more powerful than I can achieve with traditional “old school” software.
Lately, I’ve been using Copilot to dig through my previous work. It’s most useful in those moments of “I know I’ve dealt with something just like this before,” when I also know that finding the relevant bug, ticket, or PR will be tricky. When I lack just enough detail to pick the right filters or phrasing, searching manually is no fun.

Copilot’s been surprisingly effective at finding things I know are somewhere in my previous documented work. It’s quicker than wrangling Azure DevOps’ search UI with its wall of filters.
So yes, used the right way, LLMs can be incredibly helpful. But getting to that point takes effort. There’s a genuine learning curve to crafting and refining prompts that actually unlock productivity. And for me, “useful” simply means I’m spending less time on a task than I would with a traditional GUI tool.
That said, the state of chatbot integration overall? Rough. Everyone’s racing to bolt a chatbot onto everything. And honestly, most of the time it’s probably not a great idea. My theory is that average users don’t want to learn how to use a chatbot any more than they want to learn how to use ffmpeg. They’ll always reach for a cleaner solution with a GUI.
And don’t assume you’re not an average user. We all are, depending on the tool. There are plenty of apps where I don’t want to be an expert. I just want a button that gets the job done.
As an example
I’m not a professional video editor, but I edit videos now and then. Most of the time I just crave simplicity: all I want is to upload a short clip and get it back with subtitles. One button click. So I built a quick prototype for myself. Behind the scenes, it uses AI to transcribe the audio and re-render the video with subtitles. But to the user, it’s invisible. No prompts, no chatbot, just results.
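A pipeline like this is mostly glue code. Here’s a minimal sketch in Python of the two pieces my prototype needs after transcription: converting timed segments (as a Whisper-style model might return them; the exact transcription call is omitted) into an SRT file, and burning that file into the video with ffmpeg’s `subtitles` filter. The function and variable names are my own illustration, not a fixed API.

```python
import subprocess

def to_srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Turn (start_seconds, end_seconds, text) tuples into SRT blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)

def burn_subtitles(video_in: str, srt_path: str, video_out: str) -> None:
    """Re-render the video with hard-coded subtitles via ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_in, "-vf", f"subtitles={srt_path}", video_out],
        check=True,
    )
```

The user never sees any of this: the app writes the SRT to a temp file, shells out to ffmpeg, and returns the finished clip.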
Now imagine having to describe the video to a chatbot, ask for transcription, wait for a response, clarify the formatting, and so on. This is the kind of task where AI shines precisely because it doesn’t announce itself to the end user.
What’s next: my prediction
That’s why I think the next evolution of AI won’t be about bots we talk to. It’ll be software that quietly uses AI behind the scenes to make things smarter without changing how we interact with them. We’re still living in the CLI phase of LLMs, and for now, chatbots are mainly useful to power users who enjoy tinkering.