A new study from researchers Judy Hanwen Shen and Alex Tamkin examines how AI assistance affects skill formation in software developers. While many assume AI helps people learn and produce better work, this research paints a more nuanced picture: developers who leaned on AI to complete unfamiliar coding tasks gained less conceptual understanding and debugging ability, and on average they saw no meaningful efficiency gains in return.
Below we break down what the study did, what it found, and why it matters for teams, learners, and the future of work.
How This Study Was Designed
Prior research has documented that AI assistance can boost productivity across many tasks, especially for novice users. But less is known about how using AI affects the development of the underlying skills that enable long-term success and autonomy.
To investigate, the authors conducted randomized experiments in which participants learned to use a new asynchronous programming library in Python. Some participants were allowed to use AI assistance during coding tasks, and others were not. The study then measured both task outcomes and deeper competencies like conceptual understanding, code reading, and debugging skills.
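The study does not name the specific library participants learned, so as a stand-in, here is a minimal sketch using Python's standard `asyncio` of the kind of concept such tasks exercise: running I/O-bound work concurrently rather than sequentially. This is purely illustrative, not the study's actual material.

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulate an I/O-bound operation, such as a network call.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # gather() runs both coroutines concurrently, so total time is
    # roughly the longest delay, not the sum of the delays.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.2))

results = asyncio.run(main())
print(results)
```

Grasping details like these (why `await` yields control, why `gather` overlaps the waits) is exactly the kind of conceptual understanding the study measured beyond mere task completion.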
AI’s Impact on Skills and Productivity
The headline finding is striking: while fully delegating to AI could help participants get code written, AI assistance did not produce significant efficiency gains on average for those tackling an unfamiliar library. More concerning, participants who relied on AI showed weaker conceptual understanding, code reading, and debugging ability than those who did not use AI.
In essence, learners who offloaded more of the work to AI tended to finish tasks without internalizing the concepts or skills needed to supervise or extend that work effectively. The short-term “help” from AI did not translate into deeper mastery.
Interaction Patterns That Matter
The study also identified six distinct ways participants interacted with AI during the tasks. Some interactions involved active cognitive engagement — such as asking conceptual questions or seeking explanations — and these tended to preserve learning outcomes even when AI was used. Other patterns, especially full delegation to AI, were associated with faster task completion but poorer skill retention.
This suggests that not all AI use is equal. Simply having AI generate code is not enough to drive understanding; the way a learner uses AI (e.g., passively vs. actively engaging with outputs) strongly influences whether skills are formed or eroded.
Why This Matters for Learning and Work
For educators, team leads, and individuals, the key lesson is caution: AI assistance is not a silver bullet for learning. Placing too much trust in AI for unfamiliar tasks can short-circuit the very skill formation that makes workers effective supervisors of complex systems.
In safety-critical domains or roles requiring deep expertise, relying too heavily on AI might produce workers who can finish tasks but can’t understand or troubleshoot them. Thoughtful integration of AI—especially approaches that maintain learner engagement—is therefore essential.
Important Caveats
This study focused on a specific domain (learning a new programming library) and a particular population of developers. Results might differ in other settings, for experienced practitioners working within familiar codebases, or for highly interactive tutoring systems that provide feedback rather than just outputs.
Moreover, while this research highlights potential pitfalls in AI-assisted learning, it also points toward richer, more engaging forms of collaboration between humans and AI that emphasize understanding over delegation.