—and how to smash the "functional fixedness" trap in edtech.
1 | The déjà vu of substitution
You invest in a shiny new AI assistant, open MagicSchool, Khanmigo, or ChatGPT, and within minutes, you're churning out the same worksheets you used to type in Word. The tool is faster, but the learning experience is barely different. When this happens, you've probably hit the psychological speed-bump known as functional fixedness.
2 | What is functional fixedness?
First described by Gestalt psychologist Karl Duncker, functional fixedness is our tendency to see objects (or tools) only in the way we've always used them. In Duncker's famous candle problem, participants struggled to realise the box of tacks could become a candle-holder because they were fixated on its original purpose as a container. When the tacks were removed from the box, almost everyone solved the task—proving the obstacle was in their mental model, not their skill set.
The same mindset shows up in classrooms. Ask AI to "write ten comprehension questions", and you've simply swapped your pen for a chatbot.
3 | The SAMR lens: stuck at 'S'
Ruben Puentedura's SAMR model categorises tech use into four rungs: Substitution, Augmentation, Modification and Redefinition. Most teachers linger on the first two levels: PDFs instead of photocopies, a recorded lecture instead of a live talk.
Functional fixedness helps explain why we plateau. We see AI primarily as a faster authoring tool rather than a partner capable of reshaping pedagogy.
4 | Spotting fixedness in today's AI tools
[Image description: Table generated in ChatGPT o3]
5 | Breaking the box: five moves
- Verbal judo – Rename the tool's role. Instead of "worksheet generator", call ChatGPT a "learning pathway architect" and ask, "Design three radically different routes to the same outcome."
- SAMR remix – Force yourself to prototype one activity at each SAMR level. Can you redefine learning so pupils do something impossible without AI—e.g., co-write a multilingual podcast transcript with instant accuracy checks?
- Function hunting – List every component of the tool's interface and brainstorm two alternative functions for each. (In the candle problem, the box became a shelf.)
- Evidence pairing – Marry the AI with a proven strategy such as retrieval practice or dual coding. Ask MagicSchool to produce spaced flash-cards, not a one-off list.
- Time-boxed sprints – Limit exploration to 30 minutes. A clear boundary reduces overwhelm and nudges creative risk-taking; overly open-ended tinkering often defaults back to safe, familiar outputs.
6 | Why professional context amplifies the trap
Functional fixedness is a universal cognitive bias, but the professional context of teaching amplifies it in specific ways. Research from Kapur and Bielaczyc (2012) on "productive failure" demonstrates that learners, including adult professionals, often default to familiar strategies when facing new tools, particularly under time pressure. Teachers are among the most time-pressured professionals in any school, routinely planning, assessing, and responding to pastoral needs within the same working day. When a new AI tool arrives, the path of least resistance is to use it for the task that feels most urgent: producing tomorrow's worksheet.
This is compounded by the way most AI tools for education are marketed. MagicSchool, Diffit, and similar platforms often lead with worksheet and quiz generation as their headline features, reinforcing the very mental model that limits deeper use. Teachers who interact with these tools for the first time receive an implicit message: this is a content production tool. Breaking past that first impression requires deliberate effort, which is why the five moves outlined above are not just suggestions but necessary interventions.
School leaders have a role to play here as well. If CPD sessions on AI focus exclusively on "how to generate resources faster," they inadvertently cement functional fixedness at an institutional level. The most effective professional development I have seen in this space asks teachers to start not with the tool but with the learning problem: what is the most challenging aspect of this unit for students, and how might AI help us address it in a way we could not before? Starting from the pedagogy rather than the technology consistently produces more innovative uses.
7 | What the research says about climbing the SAMR ladder
The SAMR model is widely used but has also attracted legitimate scholarly critique. Hamilton, Rosenberg, and Akcaoglu (2016) argued in a review published in TechTrends that SAMR lacks the empirical grounding of other technology integration frameworks and can oversimplify the relationship between tool use and learning outcomes. They noted that a Substitution-level use of technology is not inherently inferior if it serves the learning objective effectively, and that Redefinition-level activities are not automatically better simply because they are novel.
This is an important nuance. The goal of breaking functional fixedness is not to move every lesson to the Redefinition level for its own sake. Rather, it is to ensure that teachers have the cognitive flexibility to choose the right level of technology integration for each learning objective. Sometimes a well-designed AI-generated quiz at the Substitution level is exactly what students need. The problem arises when teachers are unable to see beyond that level: every interaction with AI produces the same type of output because the mental model is locked.
A more robust alternative to SAMR that some researchers advocate is the TPACK framework (Technological Pedagogical Content Knowledge), developed by Mishra and Koehler. TPACK recognises that effective technology integration requires the intersection of three knowledge domains: technology, pedagogy, and content. A teacher who understands the technology, knows the subject deeply, and has strong pedagogical instincts is far more likely to use AI in transformative ways than one who has been shown only the tool's surface features.
| Framework | Focus | Strengths | Limitations | Best used for |
|---|---|---|---|---|
| SAMR | Levels of technology use | Simple, intuitive, widely known | Lacks empirical validation; implies linear progression | Quick self-reflection on current practice |
| TPACK | Intersection of tech, pedagogy, content | Theoretically grounded; holistic | Abstract; harder to apply in quick planning | Long-term professional development design |
| RAT (Replace, Amplify, Transform) | Impact on practice | More nuanced than SAMR; avoids hierarchy | Less widely known | Evaluating specific tool implementations |
| TIM (Technology Integration Matrix) | Learning environment characteristics | Detailed; research-backed | Complex; requires training to use well | School-wide audit of technology use |
8 | Mini case study: from worksheet to Redefinition in 20 minutes
- Substitution – Teacher prompts ChatGPT: "Write 10 Newton-law questions."
- Augmentation – Adds auto-generated solutions.
- Modification – AI (MagicSchool) converts each question into a "confident/unsure" branching quiz; immediate feedback alters the next item's difficulty.
- Redefinition – Exports quiz data to NotebookLM, which drafts a metacognitive reflection guide. Pupils analyse their own misconception patterns and plan next-step experiments.
The end product is no longer a worksheet—it's a self-regulating learning loop, impossible without AI. For more on moving beyond substitution, see our Think with AI course.
9 | Building a school culture that rewards experimentation
Breaking functional fixedness is not solely an individual endeavour; it requires institutional support. Schools that successfully move beyond Substitution-level AI use tend to share several characteristics. They create low-stakes spaces for experimentation, where teachers can try new approaches without fear of judgement if the first attempt falls flat. They build in time for collaborative planning, recognising that innovative uses of AI often emerge from conversations between colleagues with different subject specialisms and technological comfort levels.
The Education Endowment Foundation's guidance on implementing educational technology recommends that schools treat technology adoption as a change management process, not a procurement decision. This means identifying clear pedagogical goals before selecting tools, providing sustained professional development rather than one-off training sessions, and building in structured reflection opportunities where teachers can share what worked, what did not, and what they plan to try next.
In my own work with schools across the Gulf region, the most successful AI integration programmes have been those where school leaders modelled curiosity and experimentation themselves. When a head of department shares an AI-generated lesson activity that did not work as planned and explains what they learned from the failure, it signals to the whole team that experimentation is valued. This cultural shift is far more powerful than any tool tutorial.
10 | Pulling it together
Functional fixedness is not a tech problem; it's a cognitive bias. By naming it, applying the SAMR ladder, and deliberately rehearsing alternative functions, teachers can unlock the deeper promise of AI: time recouped, feedback personalised, and tasks re-imagined, not just digitised. The evidence is clear that the tools are not the limiting factor: our mental models are. Whether you use SAMR, TPACK, or another framework entirely, the critical step is the same: pause before prompting, ask what the learning really needs, and resist the gravitational pull of the familiar. That is where the genuine transformation begins.
Next step: Download our 1-page "SAMR x AI Reflection Sheet". Tick off where your current lesson sits, jot one idea to climb a rung, and share your best 'box-breaking' moment.
(Download link coming in the next post.)