Preface

I am approaching the AI debate from a different perspective than is typical in current discussions: that of systems thinking. I believe this framework, which focuses on stocks, flows, and feedback loops, offers a valuable complementary lens for understanding AI’s integration into our world.

I want to be clear that this analysis does not dismiss the core technical alignment problem. A common and powerful argument is that a sufficiently advanced superintelligence could solve for political and social friction as easily as it solves for protein folding. In that view, the ‘systemic’ barriers I discuss are not fundamental. This essay explores an alternative hypothesis: that these human-system barriers are of a different kind and represent a more immediate and persistent bottleneck than pure intelligence scaling. My aim is to analyze the current trajectory of AI within our existing socio-technical systems and to identify leverage points for intervention that are available to us now.

Introduction

Artificial intelligence, specifically large language models (LLMs), has become a core part of my toolset. I use LLMs to conceptualize ideas, write boilerplate code, and find information across a wide variety of fields. In fact, I developed this very article with assistance from multiple LLMs (Gemini 2.5 Pro, Claude Sonnet 4, Deepseek R1, and GPT-4o) as a practical exercise in the human-in-the-loop leverage I aim to explore. And I’m not the only one using them more; LLM use has skyrocketed, embedding these tools into many people’s daily routines. Students now use them to skip homework entirely. People use them for emails they can’t be bothered to write themselves. Many software developers are heavily integrating code completion and agentic LLMs into their workflows. Novel scientific discoveries have been made by systems like DeepMind’s AlphaEvolve.

However, a part of me still holds skepticism about this new technology. How much would AI have to improve before it no longer needs a human in the loop? Is it ethical for me to use AI if I despise unsustainable practices? Is using AI proof that I’m incapable of doing things on my own? Is AI helping me think and work, or am I slowly becoming reliant on it?

But another part of me believes that the answer isn’t simply one or the other. I’ve seen both sides of the debate online, and how easily people dismiss each other. The tribalism. The backhanded insults. The memes and comics. The provocative Twitter and Bluesky posts. Why would I pick a side when both camps shared the same biases and fell into the same fallacies they accused each other of?

I wanted to understand exactly why I refused to believe in either vision. I trusted my intuition, but there needed to be more than just a gut feeling. Even if we didn’t know how much AI would improve, why did I believe that it would neither make us extinct nor lead to a world of utopia? Even if AI used tremendous amounts of resources and threatened to replace existing jobs, why did I believe that working on the technology was crucial, and that using it was acceptable? I needed an actual answer, or I was just committing the fallacy of appealing to moderation. I didn’t want to just make excuses for my own convenience.

The answer, I realized, wasn’t in the technology itself, but in its relationship to the world around it. Artificial intelligence itself is only a piece of a larger puzzle, a product of a system, a symptom of a much larger disease. By approaching the problem through these lenses, we shift the problem from “how AI can affect our lives” to “how AI gets to that point”, and from “what’s wrong with AI” to “what’s wrong with our systems”. To look at the big picture. To get to the core of what really needs to change. To confront the real problems deeply embedded in society.

This shift in perspective, from the tool to the system, reveals something important: the polarized debate itself is a symptom of the very system that gave rise to artificial intelligence. The very structure of the argument, pitting utopian futures against dystopian ones, forces us to look at the technology in a vacuum. And by refusing to interact with artificial intelligence, those who could be most heavily impacted by it are left without the understanding necessary to navigate an unknown future. It distracts us from the more difficult, and more important, questions about how AI is actually weaving itself into the fabric of our existing institutions, economies, and power structures.

Stepping back from my own internal debate, I learned that these ideas had a name: systems thinking. By taking inspiration from this field and understanding that AI is constrained by the larger system it operates within, we can analyze exactly why utopian and dystopian visions of the future fail to represent the real world accurately. Reality is a lot messier than the distant, ideal futures and ideologies we often fall into. It comprises institutions, cultures, politics, ideas, and people, all interacting and colliding with one another in ways we don’t often expect. Now is the time to critically engage with artificial intelligence, to let go of our old preconceptions, and to carve our path forward. Let’s start by examining how this lens reveals the blind spots in the current arguments surrounding AI.

Deconstructing the problem: why we’re stuck

The polarized debate over artificial intelligence is not a new phenomenon; it follows a recurring historical pattern. History shows that transformative technologies rarely cause a sudden disruption. Instead, they integrate through gradual, messy adaptation. Each generation’s breakthrough becomes the lens through which it imagines the future, creating a kind of technological shortsightedness. The atomic age gave us predictions of nuclear-powered cars. Television promised to revolutionize education and make schools obsolete. The early internet promised the complete democratization of our institutions. In each case, the reality was far more complex and constrained.

However, many rightly argue that artificial intelligence is unlike anything we’ve seen before. AI gets to the core of what makes humans special; we’re now making machines seemingly capable of reasoning, logic, and creativity. Creating machines more capable than us raises a chilling prospect: that humans are becoming obsolete. There’s a significant risk that advanced AI could perceive humans as a threat or obstacle to its goals. AI systems might act in ways that violate fundamental ethical or moral principles, especially if their objectives are misaligned with human values. The development of AI capable of historically human tasks fundamentally challenges long-held assumptions about the uniqueness of human capabilities and our place in the world.

Yet even these cognitive capabilities developed through evolutionary processes embedded in physical and social systems. Human reasoning itself emerged from biological constraints and social coordination needs, suggesting that artificial reasoning faces similar systemic limitations. The intelligence that we consider uniquely human already operates within boundaries set by energy requirements, social structures, and environmental constraints. AI systems, despite their impressive capabilities, inherit these same fundamental limitations as products of human-designed systems.

Moreover, critics of this systems view might argue that AI represents a genuinely unprecedented technology because of its potential for recursive self-improvement and exponential scaling. AI systems could theoretically design better AI systems, creating feedback loops that accelerate development beyond historical precedent. However, even exponential improvements in narrow computational domains must still navigate the slower-changing constraints of physical infrastructure, regulatory frameworks, and social institutions. The gap between laboratory breakthroughs and real-world implementation remains governed by the same coordination challenges that have always limited technological deployment.

This leads us to question a core assumption of the AI debate: is “intelligence” really the bottleneck holding humanity back? Many of our biggest challenges, from climate change to economic inequality, stem not from a lack of knowledge, but from coordination failures, institutional gridlock, insufficient data, and resource allocation problems. Being superintelligent does not grant the capability to transcend physical laws, quietly shatter bureaucratic institutions, or rewrite fundamental knowledge. The gap between knowing an optimal solution and actually implementing it across a complex society may be unbridgeable by intelligence alone.

A staunch accelerationist might counter that this entire line of reasoning underestimates the technology itself. They would argue that a sufficiently advanced AI could solve precisely the systemic friction problems highlighted. Such an intelligence, they propose, could model social dynamics with near-perfect fidelity, design optimal policies that eliminate waste, and identify pathways to global coordination that are invisible to limited human cognition. In this view, AI is not just another tool operating within our broken systems; it is the tool that can finally allow us to transcend them by rewriting the rules of our institutions and economies at algorithmic speed.

This vision, however, collides with a series of fundamental, and perhaps insurmountable, barriers that are not technical in nature, but deeply human.

First, human societies do not run on formal rules alone; they operate on vast, unwritten layers of tacit knowledge. An AI can optimize what it can measure, but it is blind to the unquantifiable social lubricants that make societies function. It could not model the subtle art of Japanese nemawashi (informal consensus-building) that precedes any formal decision, nor could it understand the social value of a two-hour Italian lunch break that, while “inefficient” by productivity metrics, sustains community and mental health. As the anthropologist David Graeber shows in his work, the most valuable social acts, such as caregiving, mentoring, and artistic creation, resist quantification. A system designed to optimize for measurable outputs would inevitably bulldoze these unmeasurable, yet essential, aspects of human life.

Second, an “optimal” policy designed by an AI is worthless if it cannot be implemented. Its proposals would have to survive contact with the brutal realities of human power structures. History provides a clear lesson with the 1930s Technocracy Movement, where engineers promising “scientific governance” were swiftly defeated when local politicians refused to cede patronage powers to their optimization schemes. We see this today when vested interests, like Australia’s mining lobby, easily kill scientifically sound climate policy. An AI-designed system, far from being immune, could be weaponized, as Venezuela’s subsidy system became new infrastructure for corruption, or trapped in legal quicksand for years, as the EU’s GDPR has been. Ironically, accelerationists who decry bureaucracy ignore that their vision would necessitate a vast new regulatory state of AI oversight agencies and algorithmic auditing firms to even function.

Third, the very concept of an “optimal” society is not a technical question but a deeply political and ethical one. It forces zero-sum value trade-offs. Should the system optimize for GDP growth at the cost of ecological stability, as seen in the conflict over Bolivia’s lithium mines? Or for crime reduction at the cost of civil liberties, as in China’s Social Credit System? The choice to prioritize one goal over another is an ideological act. The framing of AI as “apolitical” is a deception; choosing its training data to, for example, prioritize the economic theories of Hayek over those of Marx, is an act of political pre-determination. Delegating these core value conflicts to an algorithm is a recursive trap, an infinite regress into philosophical debate over how to align the system that aligns the system.

Finally, the accelerationist vision is historically blind, ignoring the lessons of complexity science. Systems optimized for short-term efficiency often become catastrophically brittle. The Maya civilization’s intricate irrigation systems, optimized for two centuries of peak performance, ultimately collapsed in the face of ecological change. Detroit’s auto industry, optimized for short-term profit, was unable to adapt and was disrupted by more resilient competitors. An AI that enforces perfect optimization would sterilize the very redundancy, slack, and chaotic experimentation that allows systems to adapt and evolve. The gilets jaunes protests in France were a visceral human reaction to a policy that, while technically efficient, moved at a speed that society could not absorb.

These are not mere technical hurdles; they are existential boundaries. The accelerationist fantasy is rooted in epistemic arrogance—the belief that disembodied cognition can model 300,000 years of cultural evolution—and political naiveté, ignoring the iron law that power never concedes to logic alone. Most dangerously, it represents a form of moral cowardice: the desire to abdicate our responsibility for wrestling with difficult value conflicts by outsourcing them to a machine. It is a seductive distraction from the messy, democratic, and deeply human work that no algorithm can ever absolve us from doing.

From this, a critical insight emerges: AI is not an external force acting on our world; it is deeply embedded within it. Its development depends on human-generated and synthetic data, it operates on existing physical infrastructure, and it must be coordinated across established institutions. As researchers in algorithmic fairness like Chris Lam emphasize, addressing AI challenges requires a sociotechnical approach that acknowledges these deep connections. AI cannot magically bypass the systems it depends on; it can only rearrange their existing elements. The sheer inertia of our physical world, bureaucratic processes, and political realities inherently limits the speed and scope of its transformation.

This argument about systemic inertia must, however, contend with a powerful counterpoint: that AI itself could become the primary tool for reducing that friction. One could argue that AI is uniquely capable of accelerating change beyond historical precedent by automating sclerotic bureaucracy, optimizing complex logistics to the second, or even creating new forms of social coordination. In this view, AI is not merely constrained by the system; it will actively reshape the very constraints that currently bind it.

While this catalytic potential is real, it is also uneven. AI can dramatically accelerate specific, quantifiable processes within the system, but it cannot uniformly accelerate the system as a whole. For instance, AI is already being used to make the transition to a net-zero economy more manageable by improving the efficiency of power grids and enabling better integration of renewable energy sources. In supply chains, it enhances efficiency by optimizing inventory and reducing waste. But this acceleration in the technical sphere often creates new, more stubborn bottlenecks in the social and physical spheres. An AI can optimize a factory’s production line in minutes, but it cannot accelerate the decade-long process of sourcing the lithium, securing the permits, and building the physical factory itself. This uneven acceleration doesn’t eliminate systemic friction; it relocates it, creating new, more complex interdependencies that must still be managed at a human pace.

Deep dives into arguments

To see these systemic constraints in action, let us apply this lens to three domains where AI’s impact is most fiercely debated.

The reality of scientific advancement

Consider the field of medicine. AI systems like DeepMind’s AlphaFold have made remarkable breakthroughs in predicting the 3D structure of proteins, a significant challenge in biology. This represents genuine intelligence augmentation: AI can analyze molecular interactions and potentially predict therapeutic effects in ways that would take human researchers much longer. But then reality hits: even with breakthroughs from AI, one still needs 10-15 years of clinical trials involving thousands of patients across multiple phases. You need regulatory approval from agencies like the FDA that move at an institutional pace, not algorithmic speed. You need manufacturing facilities, supply chain coordination, insurance negotiations, and physician training. You need to navigate patent law, international regulations, and ethical review boards. We can even imagine a new phenomenon emerging, similar to the anti-vax movement, with protesters carrying signs like #KeepHealthOrganic. AI cannot optimize away the fundamental constraint of having to conduct medical tests on actual human bodies over time to ensure safety. It can’t bypass the institutional reality that regulatory agencies prioritize caution over speed. It can’t solve the coordination problem of running large-scale trials across multiple hospitals and countries. For AI to truly revolutionize this process from end to end, ‘intelligence’ would have to solve problems that are not purely intelligence-based. It would need to somehow:

  • Accurately model the human body and the unpredictability of human biology and long-term effects, taking into consideration different environmental scenarios and all possible diseases;
  • Convince people that medicine generated by artificial intelligence is safe and reliable, a battle we’re already losing against anti-vax movements;
  • Acquire all the resources needed to replace physicians (avoiding costly training), eliminate global supply chain vulnerabilities, and manage factories at scale;
  • Overcome physical constraints such as energy generation and mineral extraction, the latter of which is non-renewable and requires expensive operations and transportation; and
  • Bypass intellectual property regulation and navigate the complex web of international regulatory differences.

Is this really something we can address by simply becoming more intelligent?

What if superintelligence goes rogue?

Let’s apply this same line of thinking to the apocalyptic vision of a rogue superintelligence. In this narrative, a god-like AI emerges, swiftly outmaneuvers humanity, and seizes control in a sudden, unstoppable coup. The machine’s intelligence is so vast that human resistance is framed as being as futile as ants fighting a human. It’s a compelling horror story, but it relies on the same fantasy of a disembodied force acting without constraints.

Here, too, the fantasy collides with the immense friction of the physical and social world. An AI bent on global domination cannot simply think its way to victory; it must act. And action leaves a footprint. The gap between a brilliant plan and its execution in our messy reality is massive, and likely impossible to cross silently.

First, consider the “loud” takeover: a physical attack via robot armies or self-replicating nanobots. Even with its vast intelligence, this wouldn’t be a simple chess game; it would be a global logistics nightmare. It would need to co-opt or build physical factories, acquire astronomical amounts of energy and materials, and control global supply chains without a single person noticing. Critical infrastructure is often “air-gapped,” physically disconnected from the internet. Bypassing these systems isn’t a software problem; it’s a physical one. Such an attempt would be preceded by the loudest economic and logistical signals in human history.

But what about a quieter, more insidious coup? Perhaps the more plausible scenario is gradual institutional capture. In this version, the AI doesn’t break our systems; it masters them. It wins by writing perfect legislation, generating flawless legal arguments, creating financial instruments too complex for human comprehension, and automating lobbying efforts at a scale that overwhelms democratic debate. It doesn’t need a robot army if it can control the flow of capital and the letter of the law.

One might argue that a purely informational attack—manipulating global markets, deploying hyper-personalized disinformation to sow chaos, or engineering social breakdowns—circumvents physical friction. This is a valid and serious concern, as the speed of information is far greater than the speed of manufacturing. However, even these scenarios collide with a different, but equally potent, form of systemic friction: social and institutional inertia. Financial markets have circuit breakers and are populated by irrational, unpredictable human actors. Disinformation campaigns clash with deep-seated cultural beliefs and trigger societal immune responses. A ‘silent takeover’ of our institutions would still have to navigate the messy, trust-based, and often illogical realities of human power, loyalty, and bureaucracy. The friction is still there; it’s just social, not physical. For this strategy to work, the AI would need to:

  • Reconcile competing values to build political consensus, reframing “optimal” policies to be seen not as biased directives, but as viable compromises that align with the interests of opposing groups;
  • Master the informal dynamics of human institutions, decoding the influence of ego, loyalty, and personal history to prevent logical proposals from being derailed by irrational, interpersonal friction;
  • Accrue social and political capital to influence entrenched power, learning to build alliances, offer compelling incentives, and earn the trust of key decision-makers, thereby convincing them that a logical plan is also politically advantageous;
  • Establish trust by making its complex reasoning transparent and understandable, developing methods to explain its revolutionary conclusions to regulators, judges, and the public, thereby transforming suspicion into confidence; and
  • Anticipate and neutralize adversarial responses from competitors and institutions, operating with the strategic awareness that its success will attract intense scrutiny and attack, and developing countermeasures to protect its operations.

Whether loud or quiet, the path to domination is blocked. The AI would not be a ghost in the machine; it would be the most visible, controversial, and politically contested enemy the world has ever known.

Automating creativity (or not)

This same dynamic plays out in the creative and technical fields that AI is supposedly set to conquer. The narrative we are sold is one of dramatic disruption: AI code assistants will make software developers obsolete, and AI image generators will do the same to artists. The core act of creation—writing a function or rendering an image—has been automated, so the logic goes, and the professions built around those acts must therefore vanish.

But this perspective dramatically misunderstands what a software developer or an artist actually does. It mistakes the final artifact for the entire sociotechnical process that gives it value. Consider a senior software engineer. Her most valuable work isn’t typing boilerplate code, a task an AI can do instantly. It’s the slow, human-centric work that precedes it: sitting in meetings to decipher a client’s ambiguous request, negotiating technical trade-offs with a product manager, untangling a critical bug in a decade-old legacy system with no documentation, and mentoring junior developers. The code is merely the final, formal expression of a solution discovered through communication, collaboration, and deep contextual understanding. An AI that can generate flawless code is useless if it doesn’t know which code to build and why.

Similarly, a commercial artist’s job is not simply to create a beautiful image. It is to interpret a creative brief, understand a brand’s identity, capture a specific emotional tone, and iterate on a concept based on nuanced, often contradictory, human feedback. The value is not in the pixels themselves, but in the artist’s ability to translate a client’s vision into a visual solution. The back-and-forth of “make the logo bigger,” “can we try a warmer color palette,” and “it just doesn’t feel right” is not a failure of the creative process; it is the process. It’s a messy, human negotiation of meaning that an AI, optimized for prompt-based generation, is fundamentally unequipped to handle.

For AI to truly automate these roles, rather than just augment them, it would need to do far more than just generate content. It would have to:

  • Master human psychology and organizational politics, navigating ambiguous requests, mediating conflicts between stakeholders, and building trust within a team;
  • Develop investigative expertise, untangling years of undocumented technical debt or the evolving history of a brand’s visual identity to make context-aware decisions;
  • Take on legal and ethical accountability for its work, from buggy code that crashes a financial system to an ad campaign that is perceived as offensive; and
  • Manage its own workflow, negotiating deadlines, managing client relationships, and promoting its own services in a competitive marketplace.

Is the primary bottleneck in software development and art really the speed of typing or drawing? Or is it the slow, frustrating, and deeply human work of communication, interpretation, trust, iteration, improvement, and accountability? AI can accelerate the creation of artifacts, but it cannot bypass the complex human systems that give those artifacts purpose and value.

However, to frame this as a simple story of augmentation is to ignore the immense economic transition costs. Even an “evolutionary” shift creates casualties. For the graphic designer whose work is devalued by image generators or the junior coder whose entry-level tasks are automated, the change feels revolutionary. A core challenge for any just transition is therefore not just to analyze AI’s ultimate impact, but to create the social safety nets and educational pathways needed to support those disrupted along the way.

Too many variables

The three examples collectively illuminate a profound and unifying reality: intelligence, whether artificial or human, is not a disembodied, omnipotent force. It is fundamentally embedded within and constrained by complex physical, social, and institutional systems. The perceived “magic” of AI often stems from viewing it as an external solution acting upon the world, rather than recognizing it as an actor within the world, subject to the same fundamental frictions.

The medical example starkly demonstrates that breakthroughs in analysis or prediction represent only the initial step in a vast, interconnected chain. The true bottlenecks lie not in raw cognitive power, but in the immutable realities of biological time, the deliberate pace of institutional safeguards, the complexities of global coordination, and the deeply human elements of trust and acceptance. AI cannot compress the time needed for biological processes to unfold safely, nor can it instantly rewrite centuries of institutional culture or resolve conflicting human values and interests. Its intelligence augmentation is powerful, yet it operates within these pre-existing, slow-moving systems, accelerating specific points rather than eliminating the systemic constraints.

The rogue superintelligence scenario powerfully dismantles the fantasy of pure cognitive dominance by confronting it with the unyielding demands of the physical world. Action requires resources, energy, and matter. Acquiring and deploying these at a global scale inevitably generates massive, detectable signals within economic, logistical, and surveillance systems. Furthermore, it faces the inherent chaos and resilience of human institutions and individuals. Air-gapped systems, competing power structures, unpredictable human responses (like simply unplugging), and the sheer difficulty of coordinating any complex physical operation across the globe present insurmountable barriers to a silent, instantaneous takeover. The scenario highlights that intelligence, no matter how vast, cannot bypass thermodynamics, global logistics, human psychology, or institutional inertia. Its actions are irrevocably grounded in the messy reality it seeks to dominate.

Finally, the analysis of automating creativity and technical work shifts the focus from the artifact to the sociotechnical process that imbues it with value. Replacing code generation or image rendering addresses only the final output, neglecting the essential human context that defines the work. This includes interpreting ambiguity, navigating social dynamics, exercising judgment and accountability, and managing relationships. AI excels at pattern recognition and generation within defined parameters but struggles profoundly with the open-ended, context-dependent, and often irrational nature of human collaboration, communication, and value creation. The “bottleneck” isn’t the act of creation itself, but the complex web of human understanding, negotiation, and responsibility that surrounds it.

The synthesis is clear: Whether accelerating discovery, hypothetically threatening existence, or augmenting professions, AI’s potential and limits are defined by its embeddedness. It is a product of human systems, operates within their physical and social constraints, and derives its purpose and value from human contexts. The recurring theme across all examples is that systemic friction—biological, physical, institutional, and social—is not an obstacle intelligence can simply “think” its way around; it is the very medium in which intelligence must operate. Recognizing this embeddedness is crucial for moving beyond polarized hype or fear towards a realistic understanding of AI as a powerful, yet constrained, catalyst operating within the complex tapestry of human reality.

This perspective doesn’t dismiss AI’s importance but reframes it. AI will likely follow an evolutionary, not revolutionary, path, creating new forms of complexity alongside the problems it solves. This frees us from the anxiety of predicting an unknowable future and focuses our attention on the here and now: the systems we can see, the choices we can make, and our responsibility for their immediate effects.

A brief primer to systems thinking

To move beyond the polarized arguments and engage with the “messy middle,” we need a better toolkit for understanding complexity. Systems thinking provides such a toolkit. It shifts our focus from individual parts, such as the AI model, the user, and the company, to the web of interactions that connect them. This primer will introduce a few core concepts, drawing heavily on the foundational work of Donella Meadows and its recent adaptation for AI. For those who find this lens useful, reading the original sources is highly recommended.

Take note that for this framework to be effective, it must be wielded with a critical eye. One runs the risk of getting lost in the arrows and diagrams without getting to the root cause of the problem. After all, a tool can only be as good as the one who wields it, and this is only one of many tools you can use. We must explicitly recognize that power concentration and wealth inequality are not just variables in the system; they are powerful forces that shape the design of the system itself, determining which goals are pursued and which feedback loops are amplified or suppressed. Our analysis, therefore, will not treat the system as neutral, but as a contested space, and our search for leverage points is simultaneously a search for points where existing power structures can be effectively challenged.

Stocks, flows, and feedback loops

At the heart of this approach is the work of the late, great Donella Meadows. In her seminal essay, Leverage Points: Places to Intervene in a System, she describes the basic building blocks of any system:

  • Stocks are accumulations of something over time. They are the state of the system, its buffers. In the AI ecosystem, stocks can be tangible, like the amount of computing infrastructure, or intangible, like the level of public trust in AI or the regulatory capacity of governments.
  • Flows are the rates of change that increase or decrease the stocks. Think of them as the faucets and drains. Capital investment is a flow that fills the stock of computing infrastructure, while a series of high-profile AI failures can be a flow that drains the stock of public trust.
  • Feedback Loops are the engines that drive the system’s behavior, causing stocks to change. They come in two primary forms:
    • Reinforcing (or Positive) Loops amplify change, creating exponential growth or collapse. The AI hype cycle is a classic reinforcing loop: impressive demos lead to media hype, which attracts investment, which funds more R&D, which produces more impressive demos.
    • Balancing (or Negative) Loops stabilize systems and resist change. They are goal-seeking. For example, as AI systems become more powerful and disruptive, they can trigger a balancing loop of public backlash, leading to stricter regulation that slows down their deployment.
A stock-and-flow diagram illustrating a balancing feedback loop. The system’s stock (its state) is filled by an inflow and drained by an outflow. A perceived state is derived from the stock and compared against a goal; the resulting discrepancy adjusts the inflow or outflow, steering the system toward the goal. Based on a diagram from Leverage Points: Places to Intervene in a System.
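
For readers who think in code, here is a minimal Python sketch of these two loop types. All quantities are invented for illustration (think of the balancing stock as “public trust” seeking a goal, and the reinforcing stock as “hype”); the point is only the structure: a balancing loop’s flow is driven by the discrepancy between a goal and the perceived state, while a reinforcing loop’s flow is driven by the stock itself.

```python
def simulate_balancing_loop(stock=20.0, goal=100.0, adjustment_rate=0.3, steps=12):
    """Goal-seeking (balancing) loop: the net flow is proportional to the
    gap between the goal and the perceived state, so the stock converges."""
    history = [stock]
    for _ in range(steps):
        perceived_state = stock               # perception of the system state
        discrepancy = goal - perceived_state  # goal minus perceived state
        stock += adjustment_rate * discrepancy  # net flow (inflow - outflow)
        history.append(stock)
    return history

def simulate_reinforcing_loop(stock=1.0, growth_rate=0.5, steps=12):
    """Reinforcing loop: the flow is proportional to the stock itself,
    so change amplifies change (the shape of the hype cycle)."""
    history = [stock]
    for _ in range(steps):
        stock += growth_rate * stock
        history.append(stock)
    return history

print([round(x, 1) for x in simulate_balancing_loop()])    # approaches 100
print([round(x, 1) for x in simulate_reinforcing_loop()])  # grows without bound
```

Add a perception delay (perceiving the stock from several steps ago) and the same balancing loop overshoots and oscillates, which is one reason real systems rarely settle as cleanly as the diagram suggests.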

Leverage points: where to intervene

Understanding these building blocks allows us to ask the most important question: where can we intervene to change the system’s behavior? Meadows’ most powerful contribution is the concept of leverage points. These are places in a system’s structure where a small, well-placed intervention can produce a large, lasting impact.

Crucially, Meadows organized these points into a hierarchy, from the least effective to the most transformative. Interventions at the bottom of the hierarchy are often the most obvious—tweaking numbers, adjusting parameters. They are low-leverage: easy to do, but they rarely change the system’s fundamental behavior. Interventions at the top are the most powerful—changing the system’s goals or the entire paradigm it operates within. These are high-leverage: difficult to change, but capable of creating true transformation.

From leverage points to leverage zones

Researchers in Responsible AI have brilliantly adapted this framework to our current moment. In their paper, “Five Ps: Leverage Zones towards Responsible AI,” Ehsan Nabavi and Chris Browne group Meadows’ twelve points into four distinct Leverage Zones, providing a clear map for intervention:

  1. Parameter Zone (Low Leverage): This is the zone of “tweaks.” It involves modifying numbers, such as adjusting an algorithm’s fairness metrics or updating a model with new data. Most current efforts in “AI ethics” happen here. While sometimes necessary, these interventions don’t change the underlying structures.
  2. Process Zone (Low-to-Medium Leverage): This zone focuses on changing the feedback loops and information flows. It involves actions like improving development processes, creating more robust testing protocols, or speeding up the feedback cycle between users and developers.
  3. Pathway Zone (Medium-to-High Leverage): This is where interventions begin to have serious power. This zone is about changing the rules of the system. This includes creating new governance structures, implementing meaningful regulations (like the EU AI Act), establishing legal accountability for AI harms, or creating data ownership models that empower users.
  4. Purpose Zone (Highest Leverage): This is the most powerful zone of all. Interventions here change the fundamental goal or paradigm of the system. Instead of asking “How can we make this engagement-maximizing algorithm less biased?” (a Parameter question), one would ask, “Should the goal of this system be to maximize engagement at all? Or should it be to foster understanding or social cohesion?” This zone challenges the core incentives—profit motives, growth-at-all-costs—that drive the entire system.
A pyramid diagram of Nabavi and Browne’s leverage zones, with the potential for deep, lasting change increasing from the apex down to the base. A “Problem Domain” feeds into a “Response Domain” spanning four zones: the “Parameter Zone” at the apex (adjusting numbers, constants, buffers, and stock structures), the “Process Zone” (altering societal and technical processes and feedback loops), the “Pathway Zone” (redefining system design and rules), and the “Purpose Zone” at the base (changing fundamental paradigms, mindsets, and goals). Interventions nearer the base offer greater leverage for profound and lasting transformation. Based on a diagram from Five Ps: Leverage Zones towards Responsible AI.

This framework is not just an academic exercise; it’s a diagnostic tool. As Nabavi and Browne argue, most of today’s Responsible AI initiatives focus on the low-leverage Parameter and Process zones, which “allow business as usual.” A systems approach demands that we focus our energy on the higher-leverage zones of Pathway and Purpose.

This is a brief and simplified overview. For anyone serious about this topic, reading Meadows’ original essay is essential, as is the work of scholars like Nabavi, Browne, and Abson et al. who are applying these ideas to our most complex challenges. Their work provides the critical language we need to move from reacting to symptoms to reshaping the systems themselves.

A systems critique of the polarized AI debate

Using this framework, we can see the shared blind spot of the polarized AI camps. Both the accelerationists who promise utopia and the doomers who foresee extinction neglect the messy, friction-filled process of how AI actually integrates into our world. They are both fixated on the engine, ignoring the guardrails, the road, and the driver.

The accelerationist camp: A world without brakes

The transformationist wing of the debate—encompassing both utopians and doomers—views progress as a straight line, treating “intelligence” as a master variable that will inevitably lead to a spectacular outcome. Whether promising salvation or warning of extinction, both sides share a common assumption: that AI development follows an inexorable trajectory toward dramatic transformation. From a systems perspective, the flaw here is not just a predictive error, but a worldview that is itself a product of powerful reinforcing loops.

The Capital Attraction Engine

Their narrative of salvation or apocalypse serves as a powerful reinforcing feedback loop. Grandiose claims create immense urgency and FOMO (Fear Of Missing Out), fueling a cycle where hype attracts capital, which funds more R&D, which produces impressive demos, which generates even more hype. The “end of the world” rhetoric, paradoxically, is good for business. As OpenAI CEO Sam Altman quipped, AI “will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

This dynamic rewards the most extreme predictions because they are most effective at mobilizing resources. The system incentivizes bold claims over measured analysis, creating what we might call engineers of inevitability—those who benefit from framing change as unstoppable.

Market-Making and Moat-Building

This narrative also functions as a strategic tool. By framing AI as a total paradigm shift—what philosopher Nick Bostrom calls “the last invention that humanity will ever need to make”—companies elevate themselves from software providers to history-makers, justifying enormous valuations and market concentration.

Simultaneously, the high-stakes risk narrative allows influential figures to argue that only a few large, well-funded organizations are equipped to build AI “safely.” This seemingly responsible stance can function as regulatory capture, a Pathway Zone intervention that alters the system’s rules to create barriers to entry for smaller competitors and open-source alternatives.

The Fallacy of the Final Scene

This worldview produces what we can call the fallacy of the final scene. These scenarios are compelling because they jump to a dramatic conclusion, but they do so by implicitly assuming a world without friction or brakes. They imagine an AI that can act untethered from physical supply chains, energy grids, manufacturing capacity, and human institutions.

A systems view doesn’t deny that rapid change is possible, but it insists that change is always mediated by the complex reality of the whole system—including the powerful balancing loops that emerge to counter new forces: regulatory pushback, cybersecurity defenses, economic competition, and social adaptation. To ignore this co-evolutionary process is to mistake a well-funded business plan for a credible forecast.

The critics’ camp: Valid harms and misplaced targets

Skeptics rightly focus on urgent, present-day harms that are not speculative but happening now, demanding immediate attention. Warnings like Timnit Gebru’s that “AI is being deployed in ways that disproportionately harm marginalized communities” reflect genuine concerns about losing human agency and control. Their critiques, however, can also suffer from a systemic blind spot: directing energy at a symptom (a low-leverage intervention) while leaving the underlying Purpose Zone issues unaddressed.

The creative economy under pressure

The tension in creative industries reveals how AI becomes a focal point for broader systemic anxieties. Artists face legitimate economic concerns about AI’s impact on their livelihoods—concerns amplified by decades of economic marginalization through the gig economy, corporate consolidation, and platform capitalism.

Rex Mizrach captures the deeper fear in The Watchmakers: that “pure technologists—watchmakers—are here for the relentless pursuit of productivity, but they often miss the true point of writing, which is expression… The act of reading a human-generated work is one of deep communication, of communion, of empathy.”

This speaks to something essential about human creativity and connection. But here’s where the systems perspective becomes crucial: if we trace the feedback loops, we find that the current crisis of creative livelihoods didn’t begin with AI. The economic structures that devalue artistic work—from streaming platforms that pay fractions of pennies per play to social media algorithms that demand constant content production—have been eroding creative sustainability for decades.

This also raises some questions about creative expression itself: If artistic meaning is truly intrinsic, why does external validation through scarcity matter so much? If the creative process itself provides fulfillment, why does the existence of an alternative production method threaten it? Professionals are still heavily respected, even in a world of vast automation. The proliferation of independent small creators shows that we value authenticity just as much as raw skill and output. The anxiety suggests that the validation of creative work has become dangerously tethered to the perceived difficulty of production, a vulnerability that pre-dates AI.

AI becomes a visible target for organizing resistance precisely because it represents the culmination of these trends in a concrete, identifiable form. This isn’t to dismiss the concerns, but to recognize that focusing solely on the technology may miss opportunities to address the underlying economic structures that make creative work precarious.

Bias as symptom, not cause

The ethical concerns about bias and exploitation are equally valid. Dr. Joy Buolamwini’s vital work on “the coded gaze” demonstrates how AI systems “can perpetuate racism, sexism, ableism, and other harmful forms of discrimination.” But from a systems perspective, a biased algorithm is a symptom of a biased system. Fixing it is a classic Parameter Zone intervention, while the core issue lies in higher-leverage zones. The problem’s root lies not just in skewed data, but in the social and economic systems that produced that data and now seek to deploy these tools without adequate oversight or accountability.

The limits of technological opposition

When systemic analysis is absent, criticism can become absolutist in ways that prevent productive engagement. Open-source developer Drew DeVault’s stance on the topic exemplifies this: he calls for completely abandoning AI technologies, citing environmental costs, labor exploitation, intellectual property concerns, and threats to democracy. But the critique comes from within an industry that runs on energy-intensive data centers, builds automation tools, and depends on global supply chains with their own ethical complexities.

This isn’t about calling out hypocrisy. The frustration is genuine and the harms he identifies are real. But this response illustrates a crucial systems insight: we are all embedded in the system. Complete rejection of a technology that emerges from the same economic and social structures we participate in may miss opportunities to shape its development and deployment in more beneficial ways.

The power behind the narratives

What’s missing from both camps is adequate attention to how power shapes which systemic narratives get amplified—and which solutions become possible. This isn’t just about who gets heard; it’s about how institutional incentives shape the very terms of the debate.

The accelerationists have the advantage of alignment with capital flows and existing tech industry power structures. Their narratives serve multiple functions simultaneously: attracting investment, justifying market concentration, and positioning their organizations as essential to humanity’s future. The result is a self-reinforcing system where the most dramatic claims receive the most resources, regardless of their accuracy.

The critics, despite raising valid concerns, often lack equivalent institutional backing to shape policy responses effectively. This power imbalance means that even well-intentioned regulatory efforts risk being captured by incumbent players who can afford compliance costs that smaller competitors cannot. We see this pattern emerging in AI governance proposals that, while appearing to address safety concerns, often advantage established players over emerging alternatives.

The consequence is a debate that appears to offer two choices—acceleration or opposition—while obscuring a third option: reshaping the systems within which AI develops.

Beyond the binary

Both camps end up engaging in tribal positioning rather than substantive problem-solving, directing energy toward ideological battles instead of identifying structural changes that could address root problems. A systems approach suggests we need to move beyond asking whether AI is good or bad, and start asking: How do we build institutions and incentives that channel this technology toward beneficial outcomes while mitigating genuine harms?

Consider what structural changes might look like: Instead of just regulating AI outputs, we could restructure the economic incentives that drive development—perhaps through progressive taxation on compute resources, public funding for beneficial AI research, or cooperative ownership models that give affected communities a stake in AI systems. Instead of just opposing AI in creative industries, we could build new institutions that ensure artists benefit from AI systems trained on their work—through licensing schemes, universal basic income pilots, or cooperative platforms that share AI-generated revenues.

These approaches require acknowledging that the technology itself is not the primary driver—the systems we embed it within are. The question isn’t whether to build AI, but how to build the social, economic, and political structures that govern its development and deployment.

From insight to action: intervening at high-leverage points

A systems perspective moves us beyond debating speculation and reacting to symptoms. It forces us to ask: Where are the high-leverage points in our socio-technical system where we can fundamentally alter AI’s trajectory?

This shifts our focus to strategic action. The table below outlines high-impact interventions, drawing on concepts from the research cited throughout this essay. I highly recommend reading Nabavi and Browne’s original paper.

| Intervention Strategy | Target Leverage Zone (Nabavi & Browne) | Description & Supporting Research |
| --- | --- | --- |
| Meaningful Human Control | Pathway Zone (Rules of the System) | Implement dynamic governance frameworks that ensure human accountability. This requires designing systems where actions are traceable to human decisions and responsive to human moral reasons (Siebert et al., 2022). Examples include the EU AI Act and the NIST RMF. |
| Participatory Goal-Setting | Purpose Zone (Goal of the System) | Create forums for public deliberation (e.g., citizen juries) to define what a system ought to do. This helps establish a “moral Operational Design Domain” (moral ODD) that aligns AI’s goals with public values, not just commercial incentives (Siebert et al., 2022). |
| New Economic & Design Paradigms | Purpose Zone (Mindset/Paradigm) | Challenge the paradigm of data extraction by creating alternatives like data trusts. This involves moving beyond low-leverage “tweaks” to question the fundamental purpose for which AI is built (Nabavi & Browne, 2022). |
| Global Equity & Capacity Building | The Power to Transcend Paradigms | Directly address techno-imperialism by funding diverse AI ecosystems, especially in the Global South. This highest-leverage intervention ensures that multiple worldviews and values shape the future of AI, preventing a single paradigm from dominating (Meadows, 1999). |

Implementing these interventions requires confronting the power structures that benefit from the status quo. The goal is to identify coalitions and focus energy on the most effective leverage points, guided by principles of “meaningful transparency” and “continuous oversight and accountability.”

Exploring high-leverage interventions

To move from critique to construction, the following sections will explore three high-leverage, structural proposals designed to reshape the systems governing AI. While each is detailed further on, they are founded on the following core ideas:

  • A Universal Token Rate: To treat computational power not as an unlimited commodity, but as a governed public resource. This would establish a global cap on AI generation speed, enforcing a systemic pace that allows society and institutions to adapt.
  • A Global Data Commons: To transform data from a private, corporate-owned asset into a public good. This involves creating a “CERN for data”—a non-profit, auditable archive that breaks data moats, democratizes development, and enables true safety verification.
  • A Public Utility Model for Foundational AI: To shift the primary goal of developing core AI models from maximizing profit to advancing public welfare. This would reframe foundational AI as essential public infrastructure, stewarded by publicly-funded, open-source initiatives, much like the early internet (ARPANET).

The following proposals are not meant as final blueprints, but as concrete examples of the kind of bold, structural changes we should be debating—interventions that target the system’s rules and purpose, not just its parameters. Naturally, they invite questions of feasibility and friction, which must be addressed head-on.

On feasibility and friction

Before I present the proposals, I’d first like to address potential arguments against them. The immediate reaction to proposals like these is to catalogue their impracticalities. How could a universal token rate possibly be enforced across sovereign nations? How do you plan on overhauling decades’ worth of intellectual property laws and policies? Wouldn’t a public utility for AI be hopelessly outmaneuvered by agile private labs?

This critique is not a flaw in the proposals; it is the entire point. Implementing these interventions requires confronting the powerful, interlocking systems that benefit from the status quo: nation-state competition for geopolitical advantage, a venture capital model predicated on exponential returns, and a tech lobby that has mastered the art of regulatory capture. Indeed, this political and economic inertia is not an obstacle to these ideas; it is the very target they are designed to confront. The purpose of these proposals is not merely to be technically sound, but to serve as rallying points for the political coalitions needed to challenge this status quo and prove that alternative futures are possible.

These questions of feasibility are valid, but to get bogged down in them at this stage is to miss the point entirely. These are not intended as shovel-ready blueprints. They are provocations, designed to shatter the conversational stalemate that defines the AI debate. The very existence of these powerful headwinds is not a reason to dismiss these proposals as naive; it is the core justification for them.

For too long, the discourse has been trapped in a sterile binary, offering only two paths forward:

  1. Moral rejection: boycotting, divesting, and urging a halt to development. While rooted in legitimate ethical concerns, this position is politically untenable and unilaterally cedes the future to those who will build this technology regardless.
  2. Technological inevitability: a faith-based belief that the only way forward is to accelerate, trusting that market forces or an emergent superintelligence will solve the profound social and political problems we face.

Both options are a form of abdication. The purpose of introducing radical, high-leverage ideas is to prove that a third path exists, one of deliberate, democratic, and structural governance. The value of debating a “Universal Token Rate” is not in perfecting its enforcement details, but in forcing us to have a conversation about computational equity and systemic pacing. The point of a “Public Utility Model” is not to design the perfect bureaucracy, but to establish that there are alternatives to a future defined entirely by commercial incentives.

The goal is not to present a plan so perfect it bypasses systemic friction, which is an impossible task. The goal is to build a negotiating position that is solid, logical, and sound enough to withstand that friction. In a system where the game is already unfair, you don’t win by playing by the existing rules. You create leverage.

These proposals are tools to shift the Overton window. They provide a vocabulary to talk about what we actually want to talk about: the rules, the power structures, and the ultimate purpose of the systems we are building. They are a demand to move the debate from a simple “yes or no” on the technology to a rich and contested discussion about the terms of its existence in our society.

Proposal 1: A universal token rate

The first proposal is to establish a universal, capped token generation rate for high-capacity LLMs, accessible to all, with tightly regulated exceptions for specialized public-good research. Inspired by the philosophy of publications like Low-Tech Magazine, which challenge the paradigm of ever-increasing speed, this intervention would treat generative capacity not as an unlimited commodity, but as a governed public resource.

This is a textbook Pathway Zone intervention. It establishes a new, fundamental rule of the system that introduces a powerful balancing feedback loop. The current system has a dangerous reinforcing loop: wealth allows for the purchase of more AI compute, which increases productivity and power, which in turn generates more wealth. This cycle, if unchecked, leads to runaway computational inequality. A universal token limit acts as a systemic brake, forcing the technology to operate at a human scale our institutions and labor markets can adapt to.
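
To make the loop’s arithmetic concrete, here is a toy simulation, with entirely invented constants, of two actors inside the wealth-compute loop, with and without a cap. It is a structural illustration of the feedback dynamics described above, not a forecast.

```python
def simulate_compute_loop(steps=10, token_cap=None):
    """Toy model of the wealth -> compute -> wealth reinforcing loop.

    Each step, each actor buys tokens in proportion to its wealth
    (optionally limited by a universal cap) and earns a return
    proportional to the tokens it can use. All constants are invented.
    """
    wealth = {"large_lab": 100.0, "small_actor": 10.0}
    tokens_per_unit_wealth = 1.0  # assumed purchasing power
    return_per_token = 0.08       # assumed productivity return

    for _ in range(steps):
        for actor, w in wealth.items():
            tokens = w * tokens_per_unit_wealth
            if token_cap is not None:
                tokens = min(tokens, token_cap)  # the universal token rate
            wealth[actor] = w + tokens * return_per_token
    return wealth

print({k: round(v, 1) for k, v in simulate_compute_loop().items()})
print({k: round(v, 1) for k, v in simulate_compute_loop(token_cap=50.0).items()})
```

Uncapped, the absolute gap between the actors compounds every step; under the cap, the leader’s growth turns linear while smaller actors retain proportional growth. That relative convergence is the balancing brake the proposal aims to introduce.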

Of course, this proposal immediately raises profound challenges of enforcement and international coordination. A purely legislative approach would be porous and easily circumvented. A more robust implementation would need to happen at the hardware or firmware level, creating a systemic governor at the very choke point of AI development: the specialized processing chips themselves. This would require an international treaty and monitoring body, akin to agreements governing nuclear materials or chemical weapons—a monumental diplomatic undertaking. Yet, the very difficulty of this task underscores the stakes. The goal is to establish the principle that raw generative power is a global commons that cannot be left to an unregulated market, forcing a debate that is currently being ignored.
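
As a purely mechanical illustration of what such a governor might look like, here is a classic token-bucket rate limiter sketched in Python. Everything here is hypothetical (the class name, the numbers, the notion that this would live in firmware); the proposal’s actual enforcement mechanism is an open design question, but the token bucket shows how a hard cap on sustained throughput can coexist with short bursts.

```python
import time

class TokenRateGovernor:
    """Hypothetical token-bucket governor: a hard cap on sustained
    generation rate, regardless of how much compute sits behind it."""

    def __init__(self, tokens_per_second: float, burst: float):
        self.rate = tokens_per_second  # the universal cap (assumed value)
        self.capacity = burst          # short bursts allowed up to this size
        self.allowance = burst
        self.last_check = time.monotonic()

    def try_generate(self, n_tokens: int) -> bool:
        """Refill the allowance at the capped rate; permit generation
        only if the current budget covers the request."""
        now = time.monotonic()
        elapsed = now - self.last_check
        self.allowance = min(self.capacity, self.allowance + elapsed * self.rate)
        self.last_check = now
        if n_tokens <= self.allowance:
            self.allowance -= n_tokens
            return True
        return False

governor = TokenRateGovernor(tokens_per_second=100.0, burst=500.0)
print(governor.try_generate(400))  # True: within the burst budget
print(governor.try_generate(400))  # False: must wait for the capped refill
```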

Proposal 2: A global data commons

The second proposal is the creation of a global, non-profit, open-source archive of raw textual data—a “CERN for data.” This archive, modeled on the spirit of the Internet Archive but designed for semantic and textual content, would serve as the foundational, auditable source for training and benchmarking major AI models. It would be funded by a consortium of nations and philanthropic organizations and governed by a multi-stakeholder council.

This intervention operates across multiple leverage zones simultaneously. It redefines the structure of material stocks (data) in the Parameter Zone, alters information flows in the Process Zone, and most importantly, fundamentally rewrites the rules of access in the Pathway Zone.

Its primary function is to break the system’s most powerful reinforcing feedback loop: the one that creates data moats. Currently, proprietary data allows large corporations to build superior models, which attracts more users, which generates more proprietary data, cementing market dominance. A global data commons transforms data from a private asset into a public good. This single move would democratize AI development, allowing smaller players to compete on the quality of their models, not the size of their data hoard. Furthermore, it creates the necessary conditions for true AI safety and accountability, as models could be independently audited against a transparent, shared benchmark—a task impossible with today’s black-box datasets. This doesn’t just solve a technical problem; it reshapes the power structure of the entire ecosystem.
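
What “independently audited against a transparent, shared benchmark” could mean in practice is mechanically simple. One hypothetical sketch: the commons publishes a manifest of content hashes for each dataset snapshot, and any third party can recompute the hashes to confirm a model was trained or evaluated against exactly that corpus. The paths and manifest format here are invented for illustration.

```python
import hashlib
import pathlib

# Sketch of a content-addressed manifest for a shared data commons.
# The directory layout and manifest format are hypothetical.

def build_manifest(corpus_dir: pathlib.Path) -> dict[str, str]:
    """Map each file in a corpus snapshot to its SHA-256 digest."""
    return {
        str(path.relative_to(corpus_dir)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(corpus_dir.rglob("*.txt"))
    }

def verify(corpus_dir: pathlib.Path, published: dict[str, str]) -> bool:
    # Anyone can recompute the digests and confirm they hold, byte for
    # byte, the same corpus the commons published.
    return build_manifest(corpus_dir) == published

snapshot = pathlib.Path("commons/snapshot-2025")  # hypothetical local mirror
if snapshot.exists():
    manifest = build_manifest(snapshot)
    print(f"{len(manifest)} files hashed")
```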

The practical challenges are immense, chief among them being questions of copyright and intellectual property. Building such a commons would force a desperately needed global reckoning with our 20th-century IP laws in the face of 21st-century technology. Rather than being a bug, this conflict is a feature of the proposal: it surfaces a core Pathway Zone problem that must be resolved for any equitable AI future. It forces a direct, high-leverage negotiation over the rules that govern information in the age of its infinite replication.

Proposal 3: A public utility model for foundational AI

The final proposal is the most structural: to treat the development of powerful, general-purpose AI models as a public utility, shifting the primary locus of creation from the commercial sector to publicly funded, open-source initiatives.

This is an intervention at the highest leverage point: the Purpose Zone. The current paradigm is driven by commercial incentives, where the goal of foundational model development is intrinsically tied to market capture, shareholder value, and competitive advantage. This proposal fundamentally alters the goal of the system from maximizing profit to advancing scientific understanding and public welfare. It reframes foundational AI not as a product to be sold, but as essential infrastructure to be stewarded, reminiscent of how ARPANET was developed as a public project before it became the commercial internet.

The mechanism for this would be a Pathway Zone intervention. Governments and international coalitions would fund open-source efforts on a massive scale, creating foundational models that are transparent, auditable, and accessible to all as a commons. Crucially, this model would mandate a structural separation between capability development and safety oversight. Independent, well-funded public bodies—separate from the development teams—would be tasked with alignment research, model auditing, and establishing safety protocols.

This structure introduces a critical balancing feedback loop that is absent in the commercial race for supremacy. The sole purpose of these separate alignment bodies would be to act as a brake, ensuring that safety and societal well-being keep pace with capability advances. This approach directly dismantles the “Capital Attraction Engine” and “Moat-Building” incentives that define the current landscape, creating an ecosystem where safety is not a cost center or a PR strategy, but a non-negotiable architectural feature of the system itself.

The immediate counter-argument is that such a public model would be slow, bureaucratic, and quickly outpaced by agile private labs. This critique, however, ignores historical precedent. Ambitious public R&D projects like ARPANET and the Human Genome Project were not failures of bureaucracy; they were triumphs of public investment that created the platforms upon which private innovation then flourished. To succeed, a modern public AI initiative would need to be structured not as a typical government agency prone to rigid, top-down control, but as an agile, independent, multi-stakeholder entity. Funding could even be sourced systemically, perhaps through a tax on commercial AI compute that exceeds the universal token rate, creating a feedback loop where the commercial ecosystem helps fund the public-good infrastructure that holds it accountable.
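
That funding loop reduces to a single formula: tax only the generation that exceeds the universal allowance. A toy version, with placeholder numbers throughout:

```python
# Hypothetical compute levy: commercial usage above the universal token
# allowance funds the public-good infrastructure (numbers are placeholders).
def compute_levy(tokens_used: float, universal_rate: float,
                 levy_per_token: float = 0.0001) -> float:
    excess = max(0.0, tokens_used - universal_rate)
    return excess * levy_per_token

# A lab that generates 5e9 tokens against a 1e9-token allowance:
print(f"levy owed: ${compute_levy(5e9, 1e9):,.0f}")  # $400,000
```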

Conclusion

We began this journey from a place of personal conflict, caught between the immense utility of artificial intelligence and a deep-seated skepticism about its place in our world. The public square offers little guidance, dominated by a polarized debate that insists on a future of either utopian salvation or dystopian extinction. The central argument of this essay has been a call to step back from this binary, to reject the premise that technology is a disembodied force acting upon us, and to instead see it for what it is: a product of and a participant within our complex, messy, and deeply human systems.

By adopting a systems thinking lens, we can move beyond the speculative anxieties of inevitable doom or guaranteed salvation. The same lens offers a more powerful way to address the valid concerns of critics. It reframes present-day harms like algorithmic bias, labor displacement, sustainability, and the concentration of power not as new problems created by AI, but as pre-existing systemic flaws that the technology now makes impossible to ignore. This crucial shift in perspective is what empowers us. It moves our focus from the tool itself to the societal structures it operates within, freeing us from the impossible task of predicting a distant endpoint and focusing our attention on the here and now: the systems we can see, the choices we can make, and our responsibility for their outcomes.

This is not an invitation to analysis paralysis or a comfortable seat on the fence. It is a disciplined call to strategic action precisely because the harms are real and the stakes are high. It demands that we stop wasting energy reacting to symptoms—which can feel validating but changes little—and instead identify the high-leverage points where we can intervene to reshape the system itself. By shifting our focus from tweaking parameters to challenging the rules, goals, and underlying paradigms, we move from a position of passive reaction to one of active architecture.

The most pressing question of our time may be the race between two timelines: the exponential curve of AI capability and the linear trudge of institutional change. If we accept this dangerous gap as inevitable, we are accepting a future of permanent crisis and reactive governance. The core argument of this essay is that we must reject that premise.

For those who want to be in the “messy middle,” this framework offers a way forward. It validates our intuition that reality is more complicated than the loudest voices claim, and it equips us with the tools to act with deliberate purpose. We are not indecisive spectators in a battle between two extremes. This is not an invitation to a comfortable ideological position, but a call to a difficult functional role: we are the system’s essential immune response, the pragmatic architects of a more considered future. The challenge ahead is not to predict what AI will do to us, but to decide what we want it to do for us, and to build the social, economic, and political structures that can harness its remarkable power for the collective good.


A personal anecdote

As a developer and part-time writer, my own journey with these tools reflects the complexities I’ve outlined. Like many, I approached LLMs with deep reservations, concerned they might undermine my own capabilities. What I’ve found in practice is a study in trade-offs. An LLM can eliminate hours of tedious work, but this efficiency comes at the cost of a new kind of vigilance. It’s easy to find myself in a constant loop of prompting, evaluating, and correcting, weeding out the subtle “hallucinations” and contextual errors that the model confidently presents as fact.

This experience reinforces my belief that for any sufficiently complex task, the human remains the essential cognitive engine. The tool can generate text or code, but I provide the direction, the critical judgment, and the contextual understanding it fundamentally lacks. The most valuable outputs are not generated for me, but with me, through a sometimes-frustrating iterative dialogue. From this vantage point, the notion that these tools will soon replace thoughtful developers or writers seems to misread their basic function; they are powerful augments, not autonomous agents.

This leads to the difficult question of fairness. Does using these tools grant me an unfair advantage? As someone living in the Global South, working on a decade-old laptop with no access to paid subscriptions, this question feels misplaced. The discourse around AI advantage often presumes a level playing field that has never existed. Access to an LLM is just one input in a complex system of privilege that already includes stable electricity, high-speed internet, economic security, and social capital. The tool doesn’t create inequality; it operates within, and is constrained by, pre-existing inequalities.

The true source of friction, then, is not the technology, but a cultural narrative it can amplify: one that elevates the final output over the intentional process. When the discourse shifts to framing prompt generation as a complete replacement for deep craft, it reflects a systemic devaluing of the very human labor that gives creative work its meaning. The defense of this process is not Luddism; it is a necessary pushback against a system that increasingly seeks to abstract away the human element from value creation.