
Anthropic, together with Blackstone, Hellman & Friedman and Goldman Sachs, is setting up a separate company that puts AI engineers literally on the doorstep of mid-sized organizations to embed Claude into their core operations. OpenAI is finalizing a joint venture with TPG and eighteen other investors, with an explicit mandate: don't sell licenses, embed engineers inside customer organizations. Stripe has several open vacancies in which the success criterion isn't "training delivered" or "adoption measured" but "the number of workflows you have permanently transformed". EY UK & Ireland has launched Forward Deployed Engineer roles to take AI systems from pilot into production.
This shifts the work of the future from using AI to building with AI: creating things that make your work easier, more efficient and better, which in turn can mean more or better output per employee.
In three years, from prompt engineer to custom AI solutions
Quick rewind to 2023. Anthropic itself posted a vacancy for "Prompt Engineer" with a salary of up to 375,000 dollars. McKinsey reported that 7 percent of companies adopting AI had already hired prompt engineers. The tone was clear: this is the new skill you have to learn. Whoever knew how to talk to a model owned the future. My LinkedIn feed, and probably yours too, was full of magic formulas for the perfect prompt.
Three years later, that story has quietly evaporated, although I still see the LinkedIn posts regularly. Models have become significantly smarter, and if your question is unclear, they ask follow-up questions to understand you better. The number of vacancies with "prompt engineer" as the job title has dropped enormously. Microsoft's research among 31,000 employees in 31 countries placed the role second to last on the list of functions companies still consider adding. Andrej Karpathy traded the term in for "context engineering", because the center of gravity had shifted. He also no longer considers "vibe coding" representative of how well LLMs write code these days, and argues instead for "agentic engineering", a position I share, though one that has yet to be widely adopted.
When it comes to prompt engineering, we in the Netherlands have played along enthusiastically with this development. Many organizations have spent three years offering courses, running training days, writing frameworks and giving inspiration sessions, with the implicit promise that learning to operate AI better was the hurdle we had to clear. Good work, often. But largely work focused on talking about and using AI — not on building with it, even though that's where the real value lies.
And yet we're seeing a shift, both in the announcements from these companies and in the conversations we're having ourselves. Two years ago, vibe coding was mainly seen as a risk. The number of critics is declining, and more and more companies, ourselves included, are organizing vibe code hackathons to increase internal adoption and to build internal tools that add value in daily work.
80 percent of the value comes from redesigning work
PwC's CEO Survey 2026 is unambiguous. Of the 4,454 CEOs surveyed worldwide, 56 percent say they see no significant financial benefit from AI. Only 12 percent report both cost savings and revenue growth. And according to PwC, the companies that are doing it well share one thing: 80 percent of the value comes from redesigning work, not from the technology itself.
That figure points to something bigger than adoption. The classic software model — vendor sells license, customer uses tool, value emerges — works fine for a whole class of AI applications. To put it bluntly: GitHub Copilot makes developers measurably faster without their workflow changing. Microsoft 365 Copilot summarizes meetings. There, the tool is simply a better version of what was already there, and the old license model works as it always has.
But that's not what PwC's 80 percent is about. That figure is about companies that achieve fundamentally different results: different cost structures, different lead times, different product possibilities. For that kind of value, the old model doesn't work, because that value doesn't come from better use of a tool but from organizing work differently and developing custom tools to support it.
And that redesign doesn't work from a distance. Not from an IT department that doesn't know the substance of your daily work. Not from a PowerPoint of recommendations drawn up at six weeks' remove and thrown over the fence for you to execute yourself. Not from a training agency that wants to teach skills when what's at stake is the restructuring of the work itself. What it does require: someone who sits next to the people doing the actual work and builds on what they encounter, so the work can genuinely be reorganized together. Which work do you, as a human, still really need to do yourself, and where can the computer, in combination with AI, take work off your hands?
The Dutch numbers show how deep the gap runs: 67 percent of Dutch companies use AI in some form, but only 8 percent have integrated it organization-wide. EY's own research in the United Kingdom shows the same pattern: 78 percent of companies say AI is largely or fully implemented, while at the same time 49 percent say their approach is not sufficient for what is now needed. Translate that to the Dutch mid-market and you get a familiar picture: companies think they've deployed AI, yet know it isn't working the way they had hoped.
The Microsoft Work Trend Index 2026
Published on May 5th, this report supports PwC's findings from an entirely different angle. Based on a survey of 20,000 workers across ten countries, the Netherlands included, Microsoft concludes that organizational factors such as culture, manager support and talent practices weigh more than twice as heavily on AI impact as individual behavior does. Microsoft's own framing ("AI absorption rather than just AI adoption") gets at the distinction that matters here: the difference between a tool that is used and work that is redesigned. Their own conclusion: "The real question isn't whether people have the right skills. It's whether the organization is built to unlock them." And perhaps even more telling: in the same report, drawing on LinkedIn's 2026 Labor Market Report, Microsoft names the "forward-deployed engineer" among the 1.3 million new AI jobs that have emerged over the past two years. A role that didn't exist three years ago, and one that this week is being positioned simultaneously by Anthropic, OpenAI, Stripe, EY and Microsoft as the defining role of the moment.
Three phases of working with AI
If I had to summarize the past three years in one line: AI work has evolved from talking with AI, via building with AI, to embedded building with end users.
Phase one was 2023. The work sat in the prompt. Models were relatively basic, and whoever knew how to write a good chain-of-thought instruction got considerably better output. That was temporarily valuable — hence the salaries and the hype — but it wasn't a discipline. It was a "hack" for immature models.
Phase two spanned 2024 and 2025. The work shifted to building something specific: your own agent, a custom tool, an internal automation, a small application that intervenes in your workflow. No-code tools like Make and n8n opened up a range of automation possibilities, and tools like Claude Code, Cursor and Codex drastically lowered the cost of producing software. What used to take months and tens of thousands of euros in internal tooling could suddenly be built in days. The accent shifted from using to building. This is the phase many Dutch organizations are only now entering.
Phase three was set in motion over the past few months, and especially this week. The work shifts from someone who builds to someone who builds with the end user. Not across the table with a PowerPoint, not at a distance with a training curriculum, but physically or digitally embedded in the work of the person the solution is meant for. The Stripe vacancy describes it almost word for word: not lecturing, but building together on a marketer's actual deliverables, on the actual workflow of a specific function. Anthropic's new venture describes it slightly more formally: an engagement begins with a small team that figures out, together with the customer, where Claude can have the most impact. Not "what could theoretically be possible", but "where does your time actually disappear?". Something we help companies with in a very similar way through an AI Discovery engagement.
In all three cases — Stripe, Anthropic, EY — building custom solutions on the specific reality of a specific function is central. Not generic tooling. Not general training. Custom work, tailor-made, in the place where the value is created.
What this means for the Dutch market
For those who work with AI themselves — marketers, analysts, lawyers, communications professionals, all the functions that have been put through training sessions in recent years — the message may be hard to swallow. Better prompting is not the promised career strategy. It's a hygiene requirement, like "being able to type". The next layer of value sits in building: small custom tools, your own agents, automations that take over specific parts of your work.
For directors of mid-sized Dutch companies, the question shifts to: how do we make sure something is actually being built on our specific workflows, and who does it? That can happen through a consultancy, through an internal department, or through the AI labs and their delivery partners who embed themselves in the work. As long as the person currently doing the work is involved in some way: they know the current activities and can therefore contribute to mapping the process, identifying where AI can help, and ensuring the eventual adoption of the solution.
How we approach this
This is exactly what we focus on at Think Again. We build with companies — concrete custom solutions for which an organization itself lacks the capacity, knowledge or time. We do this embedded, alongside the people who will work with it daily, because the most usable solutions emerge in the place where the work happens. And we train people to start building themselves, among other things through vibe code hackathons where teams learn the basics of making their own internal tools in a day. Which form works best depends on the organization, the problem and the people who will have to carry it.
Build, don't talk
The prompt engineer was never the job of the future. Strategies, frameworks, adoption models, inspiration sessions: they are all useful ingredients, but they don't deliver value on their own. Value comes the moment someone builds something that fits in your work. That is what was confirmed in four places at once this week, and what we have believed for a while: the place where AI really matters is where people stop talking about AI and start building.
The question for you, for your organization, for your team is therefore fairly concrete: who is the builder? What are you building today? And what will you no longer have to do by hand as a result?


