What is an “AI agent,” anyway? We could concoct a reasonable definition — say, you tell it in natural language what you want and it goes out and does it for you.

But that doesn’t describe the things sold as agents. In practice, they’re any old rubbish branded as “AI” that promises it might do something in the future. Because it certainly doesn’t do it in the present.

At the Wall Street Journal’s CIO Network Summit last week, 61% of attendees polled said they were “experimenting” with AI agents — whatever they thought an “AI agent” was — and 21% said they weren’t using agents at all. [WSJ, archive]

Why? Per the poll: “a lack of reliability.” The things being sold as “agents” don’t … work.

Vendors insist that the users are just holding the agents wrong. Per Bret Taylor of Sierra (and OpenAI):

Accept that it is imperfect. Rather than say, “Will AI do something wrong”, say, “When it does something wrong, what are the operational mitigations that we’ve put in place to deal with it?”
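For reference, here's roughly what "operational mitigations" cashes out to in code. This is a hypothetical sketch, not any vendor's actual API; every function name here is made up. The pattern: validate the agent's output, retry a bounded number of times, and escalate to a human when it still fails.

```python
from dataclasses import dataclass


@dataclass
class AgentResult:
    ok: bool
    output: str


def call_agent(task: str) -> AgentResult:
    """Stand-in for whichever vendor 'agent' API you bought. Hypothetical."""
    raise NotImplementedError


def validate(result: AgentResult) -> bool:
    """Domain-specific checks: schema, ranges, forbidden actions, and so on."""
    return result.ok and bool(result.output.strip())


def escalate_to_human(task: str) -> str:
    """The mitigation of last resort: a person cleans up after the robot."""
    raise RuntimeError(f"agent failed; human needed for task: {task!r}")


def run_with_mitigations(task: str, max_retries: int = 2) -> str:
    """Assume the agent is wrong sometimes; plan for 'when', not 'will'."""
    for _ in range(max_retries + 1):
        try:
            result = call_agent(task)
        except Exception:
            continue  # transient failure, try again
        if validate(result):
            return result.output
    return escalate_to_human(task)
```

None of this is exotic engineering. It's the same wrap-it-in-checks pattern you'd put around any unreliable dependency. The difference is that here, the unreliable dependency is the product.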

Or perhaps they could sell a thing that works, not a thing that doesn't, and not have to tell their customers to fix it for them.

Vendors are reduced to yelling at their customers because they’re setting billions of dollars on fire to build toys that aren’t fit for production use — but they have nothing else to offer.

FOMO can take you only so far. At some point, you have to deliver.

It can’t be that stupid, you must be prompting it wrong
