In early February, Anthropic — the company behind Claude — launched its latest AI model, Opus 4.6. That alone would have been worth paying attention to. But it was seven words from Scott White, Anthropic's head of product, that caught the attention of the tech world and ought to catch ours too.
"We are now transitioning into vibe working."
Not vibe coding. Vibe working.
If you work in education and that phrase means nothing to you yet, that's fine. But it will. And the sooner we understand what's behind it, the better positioned we'll be to help our students — and ourselves — navigate what comes next.
First, a quick rewind
To understand vibe working, you need to understand what came before it. In February 2025, Andrej Karpathy — a founding member of OpenAI and one of the most respected minds in artificial intelligence — posted a single message on X that went on to be viewed over four million times. He described a new way of building software that he called "vibe coding." The idea was simple and radical: you describe what you want in plain English, and AI writes all the code. You don't read it. You don't debug it. You just accept what it produces and keep going.
It was, as Karpathy put it, about "fully giving in to the vibes, embracing exponentials, and forgetting that the code even exists."
And it worked. Not perfectly — Karpathy himself admitted it was best suited to quick prototypes and weekend projects. But the principle held. People with no coding experience started building functional apps. Experienced developers reported working many times faster than before. "Vibe coding" became the Collins Dictionary Word of the Year for 2025. Y Combinator reported that a quarter of startups in its winter 2025 cohort had codebases that were 95% AI-generated.
But here's the thing that matters for us. Vibe coding, for all its significance, only really affected one group of people: those who build software. It didn't change much for teachers drafting reports. Or heads of department pulling together data for a governors' meeting. Or consultants assembling strategy decks. Or, frankly, most knowledge workers.
That's what vibe working is about to change.
So what is vibe working?
Vibe working takes the core idea behind vibe coding — describe what you want and let AI do the building — and applies it to all forms of professional work. Not just code. Research. Analysis. Writing. Presentations. Planning. The full range of things that knowledge workers spend their days producing.
"Claude went from a model that you can sort of talk to to accomplish a very small task, to something that you can actually hand real significant work to. You can give it an outcome and it can do that work."
— Scott White, Head of Product, Anthropic
This isn't theoretical. Anthropic's latest releases make three concrete shifts that bring vibe working into the real world.
Shift one: agentic teams
Most of us still use AI the way we first learned to — open a chat window, type a prompt, get a response, refine, repeat. That's useful, but it's essentially one task at a time. The new capability Anthropic has shipped allows Claude to orchestrate multiple agents working in parallel. You describe an outcome — "I need a competitive analysis of these five schools, a summary document, and a brief I can send to the head" — and the system spins up a team. One agent does the research. Another writes the narrative. Another assembles the document. They work simultaneously, like colleagues around a table, each handling a different piece.
This is a fundamentally different way of working with AI. You're not giving instructions step by step. You're describing the destination and the AI figures out the route.
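For readers who like to see the shape of an idea, the fan-out pattern above can be caricatured in a few lines of Python. This is a toy sketch, not Anthropic's actual system: each "agent" here is a stub function standing in for an AI worker, and every name in it is invented for illustration. The point is the structure — one outcome goes in, several workers run in parallel, and their results come back as pieces of a whole.

```python
# Toy model of "agentic teams": one outcome fans out to parallel workers.
# Each agent below is a stub; in a real system it would be an AI call.
from concurrent.futures import ThreadPoolExecutor


def research_agent(outcome: str) -> str:
    # Stand-in for an agent that gathers and digests source material.
    return f"research notes for: {outcome}"


def writing_agent(outcome: str) -> str:
    # Stand-in for an agent that drafts the narrative.
    return f"draft narrative for: {outcome}"


def assembly_agent(outcome: str) -> str:
    # Stand-in for an agent that builds the final document.
    return f"assembled brief for: {outcome}"


def run_agent_team(outcome: str) -> dict:
    """Describe the destination once; the 'team' splits the route.

    The three agents run simultaneously, like colleagues around a
    table, each handling a different piece of the same outcome.
    """
    agents = {
        "research": research_agent,
        "writing": writing_agent,
        "assembly": assembly_agent,
    }
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, outcome) for name, fn in agents.items()}
        return {name: f.result() for name, f in futures.items()}


results = run_agent_team("competitive analysis of five local schools")
```

Notice what you did not do in that sketch: you never told the research worker how to research or the writer how to write. You named the outcome once and let the structure do the delegating — which is exactly the mental shift vibe working asks for.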
Shift two: AI inside the tools you already use
The second shift is arguably the one that will be felt most immediately in schools and offices. Claude can now operate directly inside tools like Excel and PowerPoint. No more copying data into a chat window, getting an output, and pasting it back. The AI sits in the application, working with your spreadsheet or your slide deck in real time.
Microsoft is heading in the same direction with Copilot. Their framing echoes Anthropic's almost exactly — in Excel, an AI agent can now select formulas, create sheets, build visualisations and summarise findings, while in Word it drafts and refines through back-and-forth conversation. Both companies are converging on the same vision: AI that works where you work, not in a separate window.
Shift three: bigger brains, longer attention spans
Opus 4.6 comes with a one-million-token context window. In practical terms, that means you can feed it an entire codebase, an entire set of policy documents, an entire year's worth of reports — all at once. The AI can then make connections across the full picture rather than working with fragments. It doesn't lose the thread halfway through. It can hold a thousand pages in view and spot the relationship between page one and page nine hundred.
Combined with stronger reasoning capabilities, this means longer, more complex, more autonomous work. Not just answering a question — sustaining an entire project.
Why should educators care?
It would be easy to read all of this and think: that's for tech companies and marketing agencies. It doesn't apply to me. I teach Year 10 biology. I run a sixth form. I lead CPD in a primary school.
But that would be a mistake, for two reasons.
First, because this changes what our students are walking into. The world of work that our Year 11s and Year 13s are about to enter is being reshaped right now. When Anthropic's head of product says that knowledge workers will soon be "managing teams of AI agents" rather than doing every task themselves, that's not science fiction. It's a description of tools that already exist and are already being adopted by the companies our students will one day apply to.
The skill set this demands isn't prompt engineering — that's already feeling outdated. It's something closer to project management, systems thinking, and clear communication of outcomes. Can you define what good looks like? Can you break a complex goal into its component parts? Can you review someone else's work critically and give useful feedback? These are profoundly human skills. They're also, not coincidentally, the skills great teachers have always valued.
The question isn't whether this is happening. It's whether you are going to be the person who manages the agents or the person who gets replaced by them.
Second, because this changes our own work too. Teachers are knowledge workers. We research, plan, write, analyse data, produce reports, build resources, and communicate constantly. We are exactly the people this shift is aimed at. A head of department who currently spends an evening assembling assessment data into a presentation for SLT could, in the very near future, describe the outcome they need and have a team of AI agents produce the first draft — complete with analysis, narrative, and slides — in minutes.
That doesn't replace the teacher's judgement. It amplifies it. It means more time for the thinking that only a human can do: interpreting the patterns in the data, deciding what matters for individual students, having the difficult conversation with a parent. Less time formatting spreadsheets.
Three things to take from this
If you're reading this as an educator — whether you teach, lead, or support — here's what I'd suggest taking away.
Start thinking in outcomes, not tasks. The old way of using AI is: "Write me a lesson plan on mitosis." That's a task. The new way is closer to: "I've got a mixed-ability Year 12 group who struggled with cell division last half-term. I need a sequence of three lessons that builds from their misconceptions, includes a practical, and gives me formative checkpoints I can use to assess understanding before the mock." That's an outcome. The more clearly you can describe where you want to end up — and the context around it — the more powerfully these tools can work for you.
Pay attention to workflows, not one-off tricks. The people who will benefit most from this shift aren't the ones who use AI the most. They're the ones who think systematically about their work and identify repeatable processes that can be handed to AI. What do you do every half-term that takes hours? What report do you write every cycle? What data do you pull together every September? Those are the workflows worth thinking about.
Understand what's coming so you can teach into it. Our students don't need to learn to code to benefit from vibe coding — Karpathy himself said the hottest new programming language is English. But they do need to learn how to think clearly, communicate precisely, evaluate critically, and manage complexity. If vibe working becomes the norm — and the trajectory strongly suggests it will — then the young people who thrive will be the ones who can describe an outcome with clarity and review the output with discernment. Those are capabilities we can and should be developing right now.
The shift in a nutshell
Vibe coding (2025): Describe what you want built → AI writes the code. Relevant mainly to software developers.
Vibe working (2026): Describe the outcome you need → AI does the work. Relevant to every knowledge worker — including educators.
The core skill shifts from prompting (giving a single instruction) to managing (defining outcomes, assembling the right tools, reviewing the output, and iterating).
A word of honest caution
It's worth being clear-eyed about this. Vibe coding had its reckoning. By late 2025, developers were reporting what Fast Company called a "vibe coding hangover" — AI-generated code that was fast to produce but brittle, hard to maintain, and sometimes riddled with security flaws. Karpathy himself went back to hand-coding his next project. The lesson was that speed without understanding creates fragility.
Vibe working will likely follow a similar arc. The first wave will be breathless excitement. Then the problems will surface — outputs that look polished but miss the point, AI that confidently assembles the wrong data, workflows that save time but lose nuance. The people who navigate this well will be the ones who bring judgement, context, and domain expertise to the table. Not the ones who blindly accept what the machine produces.
For educators, that's actually encouraging. Our entire profession is built on exactly those capabilities — understanding context, making nuanced judgements, knowing when something looks right but isn't. We are, arguably, better prepared for this shift than most.
Something significant is happening. The technology industry is reorganising itself around a new model of work — one where AI goes from answering questions to executing projects, from chatbot to colleague. Schools don't need to chase every trend. But we do need to understand the landscape our students are entering and the tools that are reshaping professional life around us.
Vibe working isn't a gimmick. It's a direction. And the best time to start thinking about what it means for education is before it arrives in every staffroom and every office our students walk into.
This is what we'll keep exploring here at DEEP Education Network — practical, honest thinking about the intersection of AI and education. No hype. No fear. Just the clarity that comes from paying attention and thinking it through together.
If this was useful, share it with a colleague. The more of us who understand what's coming, the better we'll respond to it.