
Follow-up to: The Evolution of Human-Internet Interaction: Toward a New Information Architecture (June 2025)
TL;DR
- Eight months after proposing the “New Information Architecture,” the thesis has accelerated beyond predictions. A single Anthropic legal plugin triggered USD285 billion in SaaS losses-the “SaaSpocalypse.” Software stocks are down over USD1 trillion in 2026.
- Three hypotheses: (1) software is disaggregating from monolithic platforms into micro-applications over shared semantic data substrates; (2) this creates interlocking challenges in data siloing, insecurity, and coordination failure; (3) the current moment mirrors the pre-standardization chaos of the industrial revolution, but as a cognitive revolution at information-age speed.
- The fourth hypothesis, a cognitive adaptation imperative: as software disaggregates to the individual, cognitive demands increase even as mechanical demands decrease. The bottleneck is not infrastructure-it is human capacity to orchestrate intelligence.
- Software engineering already shows this: AI-assisted developers take 19% longer (while believing they are faster), write less secure code, and produce rising duplication with declining refactoring.
The SaaSpocalypse: Validation by Market Panic
On February 3, 2026, Anthropic announced a single AI-powered legal plugin for Claude Cowork. The market response was immediate and severe: USD285 billion in market value evaporated across the software sector within hours (CoderCops, 2026). Forbes declared: “Intelligence becomes mobile, data becomes interpretable, and workflows become replicable. Software is increasingly viewed as a vessel, with intelligence emerging as the actual product” (Muir, 2026). Software stocks have lost over USD1 trillion in 2026, dropping approximately 20% (Cohan, 2026; Muir, 2026).
This was not a rational repricing based on revenue data. It was the market recognizing-viscerally, suddenly-what the original essay described eight months earlier: we are moving from a world where value lives in the application layer to one where value lives in the intelligence layer. The application is becoming the vessel; the agent is becoming the product.
What follows is an exhaustive examination of where this transition is heading, organized around three hypotheses about the near-term (2025-2027), mid-term (2027-2030), and long-term (2030-2035) trajectory, culminating in the cognitive adaptation dimension that makes this transformation fundamentally harder-and more consequential-than most analyses acknowledge.
Hypothesis 1: Software Disaggregation-From Monolithic Platforms to Personal Micro-Applications Over Shared Semantic Data
Software functionality is being drawn inward to individual users and teams, disaggregated from large enterprise platforms (ERPs, CRMs), and reconstituted as small, personalized, disposable, self-made applications. The enduring substrate is not the application layer but the data layer-data lakes, warehouses, and semantic databases-which is what all SaaS platforms ultimately reduce to.
The Nadella Doctrine
Microsoft CEO Satya Nadella articulated this most bluntly in December 2024: “The notion that business applications exist-that’s probably where they’ll all collapse, right, in the agent era. They are essentially CRUD databases with business logic. The business logic is all going to these agents, and these agents are going to be multi-repo CRUD. They’re going to update multiple databases, and all the logic will be in the AI tier” (CX Today, 2024; Nadella, 2024; Windows Central, 2024). Bill McDermott, CEO of ServiceNow, echoed this at Knowledge 25: “In the era of agentic AI, a traditional application stack will collapse. The number of apps that customers use will be greatly reduced, and traditional applications will become databases” (Cloud Wars, 2025).
This is not speculative positioning. It is the stated strategic direction of two of the world’s largest enterprise software companies.
Near-Term (2025-2027): The Unbundling Begins
Evidence of disaggregation in progress:
- Gartner predicts 40% of enterprise applications will feature task-specific AI agents by end of 2026, up from less than 5% in 2025 (Business 2.0 Channel, 2026).
- Deloitte’s 2025 Tech Value survey found 57% of respondents were putting 21-50% of annual digital transformation budgets into AI automation, with 20% investing over 50%. Deloitte predicts up to half of organizations will exceed 50% AI automation budgets in 2026 (Deloitte Insights, 2026).
- Bain identifies five strategic scenarios for SaaS workflows, among them: AI enhances SaaS, spending compresses, AI outshines SaaS, and AI cannibalizes SaaS-with “easier” business processes like customer service being disrupted first (Bain & Company, 2025).
- SaaS annual recurring revenue dropped 29%; growth forecasts revised downward from 14% to 10.5% (Cohan, 2026). Seat-based pricing dropped from 21% to 15% of SaaS companies in 12 months, while hybrid pricing models surged from 27% to 41% (Business 2.0 Channel, 2026).
The rise of vibe coding and disposable software:
“Vibe coding”-prompting AI tools to generate working software from natural language, often sight unseen and with little to no up-front structuring or architecture-has made disposable micro-applications a reality. Tools such as Bolt, Lovable, and Vercel have long allowed developers to start building quickly, but Claude Cowork and Claude Code in desktop environments now allow, quite literally, anyone to spin up custom micro-apps in minutes (Bez Kabli, 2026; DailyAIWorld, 2026), and this is happening at scale: this is the moment where the practical application of AI is allowing the adoption chasm to be crossed. The underlying capability is not new; users of LLM-based chat platforms may have noticed that reasoning models were already generating code autonomously during their thinking steps. But where that code was previously used to return a natural-language response, perhaps with some artifacts, for many users the output is now the micro-application itself.
In 2021, an MVP took three months and cost USD50,000. In 2026, you can build and deploy a small personal SaaS over a weekend (“This Vibe Coding Trend,” 2026). Bain Capital Ventures describes these micro-apps as filling “the gap between the spreadsheet and a full-fledged product” (Bez Kabli, 2026). Users can now create their own bespoke restaurant-picking apps, health symptom trackers, holiday family games, and podcast translation tools-opening a new category of demand for applications that “disappear when the need is no longer present” (Bez Kabli, 2026). It is my strong belief that the majority of applications fit that description.
But ERPs are being wrapped, not dismantled:
McKinsey’s January 2026 analysis is clear: “While it is unlikely that AI agents will replace the ERP in the near or medium term because of system complexity, companies should consider not only how AI agents will disrupt ERP operations but also how they provide a powerful capability for evolving and modernizing ERP itself” (McKinsey, 2026). Bain’s analysis of nearly 500 IT leaders found that 78% expect at least some ERP functionality to be replaced or augmented by agentic AI over three years, but only 16% expect AI to affect more than 25% of ERP functionality (Bain & Company, 2026).
While current surveys and analyses suggest that ERP systems will remain the core systems of record, and that AI agents will mostly wrap and extend ERP functionality over the next three to five years, economic pressures that have historically shaped SaaS point toward a different long-run equilibrium. Packaged enterprise platforms have always been constrained by the need to standardize functionality for a sufficiently large serviceable obtainable market, which naturally favors “good-enough,” lowest-common-denominator feature sets over deeply bespoke workflows. This is why enterprise customers periodically swing between major ERP and SaaS suites whose capabilities converge: anything more tailored has rarely justified the engineering and maintenance expense relative to the size of the addressable audience (McKinsey, 2026).
In an agentic stack, that constraint begins to loosen. As agents and generative tooling make it economically viable to assemble more tailored workflows on top of shared data substrates, the rationale for concentrating so much application logic inside monolithic ERP suites erodes. The most likely trajectory is not a sudden collapse of ERP, but a gradual migration of ERP “application” functionality into the agent and semantic layers, leaving ERPs primarily as governed data and transaction backbones that expose clean schemas, events, and controls. In other words, near-term evidence supports the “wrapped, not dismantled” view, but the same standardization and lowest-common-denominator dynamics that defined the SaaS era also make a slow, continuous chipping-away of ERP’s functional dominance structurally plausible over the longer horizon (Bain & Company, 2026).
The three-layer stack is emerging:
Bain describes the rebundling architecture as three layers: Systems of record (source of truth, storing core business data), Agent operating systems (orchestrating work-Microsoft Azure AI Foundry, Google Vertex AI Agent Builder, Amazon Bedrock Agents), and Outcome interfaces (translating plain-language requests into agent actions via Teams, Slack, or custom apps) (Bain & Company, 2025). The application layer is being squeezed between the data substrate below and the intelligence/agent layer above.
This mirrors the “New Information Architecture” proposed in the predecessor to this essay, which described an emerging stack where semantic databases form the foundation, AI agents serve as the transport and orchestration layer, and conversational interfaces handle delivery (Li, 2025). What Bain now frames as an enterprise rebundling pattern is, in effect, the same structural forecast applied to business software specifically: data at the bottom, intelligence in the middle, and natural-language interaction at the top.
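The three-layer separation can be made concrete with a minimal sketch. Everything below is illustrative: the class names, the invoice table, and the keyword-based intent routing are invented for this example and do not correspond to any vendor's actual API (a real outcome interface would use an LLM, not string matching).

```python
# Minimal sketch of the three-layer agentic stack described by Bain:
# systems of record -> agent layer -> outcome interface.
# All names here are invented for illustration; no real vendor API is implied.

class SystemOfRecord:
    """Bottom layer: governed data substrate, the source of truth."""
    def __init__(self):
        self.invoices = {}  # invoice_id -> record

    def create(self, invoice_id, amount, status="open"):
        self.invoices[invoice_id] = {"amount": amount, "status": status}

    def update(self, invoice_id, **fields):
        self.invoices[invoice_id].update(fields)

    def read(self, invoice_id):
        return self.invoices[invoice_id]


class AgentLayer:
    """Middle layer: business logic lives here, not in an application.

    The agent translates an intent into CRUD operations against one or
    more systems of record (Nadella's "multi-repo CRUD").
    """
    def __init__(self, records: SystemOfRecord):
        self.records = records

    def act(self, intent: str, invoice_id: str):
        if intent == "mark_paid":
            self.records.update(invoice_id, status="paid")
        elif intent == "check_status":
            return self.records.read(invoice_id)["status"]
        else:
            raise ValueError(f"unknown intent: {intent}")


def outcome_interface(agent: AgentLayer, request: str, invoice_id: str):
    """Top layer: plain-language requests become agent actions.

    Naive keyword routing stands in for what would really be an LLM.
    """
    if "paid" in request.lower():
        agent.act("mark_paid", invoice_id)
        return f"Invoice {invoice_id} marked paid."
    return f"Invoice {invoice_id} is {agent.act('check_status', invoice_id)}."


store = SystemOfRecord()
store.create("INV-7", amount=1200)
agent = AgentLayer(store)
print(outcome_interface(agent, "Please mark invoice 7 as paid", "INV-7"))
print(outcome_interface(agent, "What is the status?", "INV-7"))
```

The point of the sketch is the squeeze on the middle: there is no "application" object at all, only data below and intent-driven logic above.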
Mid-Term (2027-2030): The Semantic Data Substrate Becomes the Competitive Moat
Gartner’s five-stage trajectory:
| Stage | Year | Capability | Enterprise Impact |
|---|---|---|---|
| 1 | 2025 | AI Assistants Everywhere | Nearly all enterprise apps embed AI assistants requiring human input |
| 2 | 2026 | Task-Specific Agents | 40% of enterprise applications integrate agents acting independently |
| 3 | 2027 | Collaborative Agents Within Apps | One-third of implementations combine agents with different skills |
| 4 | 2028 | Cross-App Agent Ecosystems | 15% of daily work decisions made autonomously; agents collaborate across business functions |
| 5 | 2029 | The New Normal | 50% of knowledge workers develop skills to create AI agents on demand |
Source: Gartner, compiled in Business 2.0 Channel (2026).
By 2030: Gartner projects 35% of point-product SaaS tools will be absorbed into larger agent ecosystems or replaced entirely by AI agents (Business 2.0 Channel, 2026; Deloitte Insights, 2026). At least 40% of enterprise SaaS spend will shift toward usage-, agent-, or outcome-based pricing (Deloitte Insights, 2026). The agentic AI market is projected to grow from USD8.5 billion in 2026 to USD42-52 billion by 2030 (43%+ CAGR) (Business 2.0 Channel, 2026).
The semantic layer becomes the control plane:
AtScale’s 2025 review concluded: “The Universal Semantic Layer has evolved beyond accelerating business intelligence. It’s become the control plane for enterprise AI” (AtScale, 2025a). Three architectural patterns are emerging for 2026 and beyond: Semantic-First AI Agents (LLMs reasoning directly over governed models), Semantic Observability (monitoring how AI interprets business logic, detecting drift and bias), and Composable Governance (treating semantic models as version-controlled shared code) (AtScale, 2025a).
Bain identifies the semantic gap as the critical bottleneck: “The first semantic layer that creates an industry-wide standard to enable an invoice.bot to talk to a payment.bot will reshape the AI ecosystem and direct a large next wave of value” (Bain & Company, 2025). MCP and A2A standardize how agents package tool calls and results, but they don’t provide shared vocabulary or show how business concepts map to APIs and tables (Bain & Company, 2025).
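The gap Bain describes can be illustrated with a toy example. A protocol like MCP standardizes the envelope of a tool call, but carries no shared vocabulary; every field name, the `SEMANTIC_MAP` dictionary, and the table names below are invented for this illustration (the envelope is only loosely MCP-like, not the actual wire format).

```python
# Illustration of the semantic gap: agent protocols standardize the
# *envelope* of a tool call, not the *meaning* of the business terms inside.
# All field names, tables, and mappings here are invented for the example.

import json

# A standardized tool-call envelope: any compliant agent can produce or
# consume this shape.
tool_call = {
    "tool": "invoice.lookup",
    "arguments": {"customer": "ACME", "period": "2026-Q1"},
}

# What the envelope does NOT carry: how "customer" or "period" map onto
# this particular enterprise's tables and columns. Each organization must
# still hand-maintain a semantic mapping like this one:
SEMANTIC_MAP = {
    "customer": {"table": "crm.accounts", "column": "account_name"},
    "period":   {"table": "erp.invoices", "column": "fiscal_quarter"},
}

def resolve(call: dict) -> dict:
    """Translate standardized argument names into enterprise-specific fields."""
    return {
        SEMANTIC_MAP[k]["table"] + "." + SEMANTIC_MAP[k]["column"]: v
        for k, v in call["arguments"].items()
    }

print(json.dumps(resolve(tool_call), indent=2))
```

An industry-wide semantic layer would, in effect, standardize `SEMANTIC_MAP` itself, which is exactly the layer Bain argues is still missing.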
Validation of the “SaaS becomes semantic database” claim:
Glean’s analysis frames this as the “evolution from database-centric to intelligence-centric architecture”-agents interact directly with APIs and databases without requiring graphical interfaces, orchestrating actions across multiple systems simultaneously (Glean, 2025). The Kindo.AI CEO predicts: “Soon, AIs will hold the entire business logic layer of a SaaS like Salesforce in memory. AI will only need access to the data” (Forbes Tech Council, 2025).
Long-Term (2030-2035): Full Disaggregation of Logic from Data
Projected state:
- By 2035, agentic AI could drive approximately 30% of enterprise application software revenue, surpassing USD450 billion.
- By 2030, 90% of B2B buying will be AI agent intermediated, pushing over USD15 trillion of B2B spend through AI agent exchanges.
- The surviving platforms will function as orchestration layers, “managing fleets of specialised agents rather than serving as destinations for human interaction” (Business 2.0 Channel, 2026).
Ultimately, this hypothesis has been strongly validated for point-solution SaaS and workflow tools, and directionally validated for ERP/CRM systems on a longer timeline. The core claim-that SaaS platforms will ultimately reduce to semantic databases-is echoed by Nadella, Bain, and the emerging semantic layer architecture. The nuance is that core systems of record will persist as governed data substrates while the application/logic layer migrates to agents.
Hypothesis 2: The Three Attendant Challenges-Siloing, Insecurity, and Coordination Failure
The inward pull of software functionality to individuals creates three interlocking problems: (a) data silos and fragmented organizational truth because data is consumed and generated at the individual level; (b) proliferation of insecure, unsafe, and biased micro-applications; and (c) difficulty coordinating organization-wide transformation due to barely understood micro-application sprawl.
Near-Term (2025-2027): Shadow AI and the Governance Vacuum
Data silos and truth fragmentation:
Equinix’s January 2026 analysis identifies data coherence as the central challenge: “As data proliferates across systems, clouds and formats, it’s increasingly difficult to connect the right information… Lacking context for data, an agent might misinterpret, use stale data and/or take inappropriate actions” (Equinix, 2026). InformationWeek reports that “the poor quality of unstructured data is data noise from too many copies, irrelevant, outdated versions and conflicting versions” (InformationWeek, 2026). AtScale found that “the tolerance businesses had for semantic drift-conflicting definitions, inconsistent metrics, and undocumented logic-ran out the moment AI agents started consuming that data” (AtScale, 2025b).
Shadow AI as the mechanism of fragmentation:
ISACA defines Shadow AI as using AI solutions-chatbots, code assistants, LLMs-without approval from IT or compliance teams (ISACA, 2025a). Unlike shadow IT, which was mostly limited to technically oriented teams, Shadow AI adoption spans every role: engineering, marketing, finance, HR (Invicti, 2025). The Cyber Institute frames the core challenge: “Models ingest and learn from sensitive inputs, generate outputs that can influence decisions, and often leave no clear audit trail” (The Cyber Institute, 2025).
IBM’s 2025 Cost of Data Breach Report found AI-associated breaches cost organizations more than USD650,000 per breach (ISACA, 2025a). Unauthorized AI tools create silos that are “often dangerous for enterprise risk management”-employees may inadvertently create unmonitored information flows and compliance vulnerabilities despite improving productivity (ISACA, 2025a).
Insecure and biased micro-applications:
The vibe-coded micro-apps present specific risks. Ruth Suehle, president of the Apache Software Foundation, cautioned that inexperienced developers “only know whether the output works or doesn’t” (Bez Kabli, 2026). David Heinemeier Hansson (creator of Ruby on Rails, CTO of 37signals) described AI as “a flickering light bulb,” pointing out it still oscillates between helpful and useless (Bez Kabli, 2026). Invicti’s analysis warns that shadow AI applications can produce “misleading information (model hallucinations) and biased outcomes, resulting in poor-quality decisions and diminished organizational trust” (Invicti, 2025). Developers may “integrate LLMs into applications or workflows without security review, embedding unsanctioned APIs, model calls, or cloud-hosted AI services directly into code” (Invicti, 2025).
Coordination failure is already visible:
Gartner warns that over 40% of agentic AI projects will be cancelled by the end of 2027 due to “escalating costs, unclear business value, or inadequate risk controls” (Business 2.0 Channel, 2026). Only approximately 130 of the thousands of agentic AI vendors are considered legitimate, with most engaging in “agent washing”-rebranding existing products without substantial agentic capabilities (Business 2.0 Channel, 2026). InformationWeek reports that 10-20% of leading firms are building internal “agent platforms” because off-the-shelf copilots don’t yet provide the reliability, auditability, and policy control needed (InformationWeek, 2026).
The Governance Reality: Policy on Paper, Implementation in Deficit
The optimistic narrative—that governance frameworks will crystallize around semantic layers and mature into comprehensive standards—deserves scrutiny. Much of this forecast originates from SaaS vendors with a commercial incentive to present their solutions as the answer to governance challenges. What neutral surveys and peer-reviewed research actually show is fragmentation, implementation gaps, and regionally divergent trajectories, with no clear sign of convergence toward a single, widely accepted governance structure.
The gap between policy and practice is large. In the Pacific AI 2025 Governance Survey, 75% of organizations reported AI usage policies, but only 59% had dedicated governance roles and just 54% had AI-specific incident response playbooks—evidence of “a fundamental misalignment between policy creation and operational implementation” (Forte Group, 2025). ISACA’s 2025 research found that in Europe, nearly three out of four IT and cybersecurity professionals reported staff using generative AI at work, but under a third of organizations had formal policies in place, and familiarity with frameworks such as the NIST AI Risk Management Framework remained concentrated in large enterprises (ISACA, 2025b). The Cyber Institute characterizes the present as a “governance vacuum”: unsanctioned shadow AI proliferates, models ingest sensitive inputs without visibility, and most existing controls are still tuned to devices and applications rather than model interactions and prompt flows (The Cyber Institute, 2025).
Some technical infrastructure is emerging. Semantic layers are being positioned as enforcement mechanisms—policies declared once and applied across BI dashboards, agents, and custom applications (Strategy Software, 2025). Oracle Database 26ai exemplifies the approach: row-level policies, masking rules, and audit logging uniformly applicable to both human users and AI agents (McDowell, 2026). But the distance between demonstrating these capabilities in controlled environments and deploying them at enterprise scale across fragmented data estates remains substantial.
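The "declare once, enforce everywhere" pattern McDowell describes can be sketched in a few lines. This is not Oracle's actual API: the `Principal` type, the region-based row policy, the masking rule, and the audit log are all invented to show the shape of the idea, namely that the same controls apply whether the caller is a human or an agent.

```python
# Sketch of uniform data governance: one row-level policy and one masking
# rule, enforced identically for human users and AI agents, with every
# access audited. All names are invented; this is not a real vendor API.

from dataclasses import dataclass

@dataclass
class Principal:
    name: str
    kind: str        # "human" or "agent"
    region: str      # consumed by the row-level policy

ROWS = [
    {"id": 1, "region": "EU", "salary": 90_000, "email": "a@example.com"},
    {"id": 2, "region": "US", "salary": 80_000, "email": "b@example.com"},
]

AUDIT_LOG = []

def query(principal: Principal):
    """Apply row-level security and masking uniformly to any principal."""
    visible = [r for r in ROWS if r["region"] == principal.region]  # row policy
    masked = [{**r, "email": "***"} for r in visible]               # masking rule
    AUDIT_LOG.append((principal.kind, principal.name, len(masked)))  # audit trail
    return masked

human = Principal("alice", "human", region="EU")
agent = Principal("payroll-bot", "agent", region="EU")

# Identical policy outcome regardless of who (or what) is asking.
assert query(human) == query(agent)
print(AUDIT_LOG)
```

The hard part, as the surrounding text notes, is not demonstrating this in a toy environment but enforcing it uniformly across a fragmented enterprise data estate.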
At the macro level, governance is diverging, not converging. Comparative analyses highlight three structurally distinct approaches: the EU’s top-down, risk-based AI Act with ex ante conformity assessments; the United States’ bottom-up, sectoral, market-driven regime layering AI guidance on existing laws and voluntary frameworks; and China’s centralized but economically pragmatic model emphasizing security, sovereignty, and stability (Gualdi & Cordella, 2025; Navirego, 2025). Global enterprises are being forced to navigate multiple, partially incompatible regimes and to “govern up” to the strictest environment in which they operate, rather than planning on a single global standard (ComplianceHub, 2025).
In practice, organizations oscillate between two opposing tendencies. On one side, rapid adoption with improvised governance: employees across functions deploy public generative tools without formal oversight, creating opaque data flows that traditional IT controls fail to capture (ISACA, 2025a). On the other, defensive blocking: security intermediaries treat agents as hostile by default, mirrored at the policy level by the EU AI Act’s high-risk categories and China’s registration requirements, which err on the side of restriction (Navirego, 2025). This tug-of-war between bottom-up improvisation and top-down blocking shows no sign of resolving into a stable, coherent global governance layer.
Against that backdrop, confident forecasts that semantic layers and standardized governance frameworks will neatly crystallize into comprehensive standards look optimistic. There is movement toward semantic control planes and shared risk frameworks, but current evidence points to contested, regionally differentiated governance and an implementation gap inside organizations—not to a settled end state. It is more honest to treat the long-term trajectory of AI governance as an open question and to acknowledge that fragmentation—including divergent cultural attitudes toward risk, safety, and autonomy—may persist or even deepen over time (Gualdi & Cordella, 2025).
Again, this second hypothesis is showing strong signs of validation already. Data siloing through shadow AI is already documented and costing organizations significantly. Insecurity and bias in micro-applications are acknowledged by industry leaders (the Apache Software Foundation president, the 37signals CTO). Coordination failure is reflected in Gartner’s prediction of 40%+ agentic project cancellations. The governance dimension adds a further layer of concern: the gap between having policies and operationalizing them is large, the macro regulatory landscape is fragmenting rather than converging, and the commercial optimism of SaaS governance vendors should be weighed against the neutral evidence of implementation deficits. These are not hypothetical future risks; they are present realities accelerating with adoption.
Hypothesis 3: A Digital Cottage Industry and an Open Question About What Comes Next
In trying to understand this current period of agentic proliferation, I thought history might present an analogy.
This explosion of self-made agents resembles the early, decentralized “cottage industry” phase of pre-industrial production—and there are emergent moves toward standardization at the integration layer—but whether agents will follow the same trajectory as historical cottage industries into centralized, standardized systems is genuinely unknown.
The Cottage Phase: Where We Are Now
The pre-industrial putting-out system was decentralized, home-based, small-scale, and bespoke. Merchants provided raw materials to rural workers who produced goods in their homes. Quality was inconsistent. There was no standardization—“no two exactly alike.” It eventually gave way to more centralized, standardized factory systems as demand, capital, and technology accumulated (History Crunch, 2022; “Industrial Revolution,” n.d.).
The current agentic situation fits this pattern closely. Many small, locally created tools—micro-apps, no-code flows, personal agents—are built by end users themselves, with uneven quality, weak governance, and no real standardization of outputs. This is very close to what the literature on shadow IT and citizen developers has been describing for a decade: end users building their own solutions with spreadsheets, low-/no-code platforms, and SaaS connectors, outside traditional IT processes (Tech101 for Marketers, 2022; Adapt Consulting, 2025). The difference now is that the gap between the engineer and the end user has collapsed almost entirely. Agents and generative coding tools do not merely lower the barrier to building software—they effectively eliminate it, making the “cottage worker” and the “tool-maker” the same person.
| Putting-Out System (Pre-Industrial) | Current Agentic Micro-App Proliferation |
|---|---|
| Workers produced goods at home with merchant-supplied materials | Workers generate micro-apps/workflows at their desks with AI-supplied capabilities |
| No standardization of outputs; quality inconsistent | No standardization of agent outputs; hallucination and bias |
| Embezzlement of supplies, poor quality control | Data leakage, IP exposure, shadow AI governance gaps |
| Each product unique, no interchangeable parts | Each micro-app unique, no interoperability or shared governance |
| Merchant capitalists provided raw materials, paid by piece | AI vendors provide foundation models, charging by token/usage |
The Martech Parallel: A Software-Native Precedent
The historical cottage-to-factory arc also has a more recent software-native analogue: the explosion of marketing technology (martech) platforms from the 2010s onward. Scott Brinker’s martech landscape grew from a few hundred tools to over 10,000 by the mid-2020s, with consolidation (acquisitions, shutdowns) happening in parallel but never fast enough to prevent continuing sprawl (Brinker, 2017). The no-code and low-code movement followed a similar trajectory: rapid proliferation of platforms enabling citizen developers, followed by partial aggregation into platform ecosystems rather than a neat convergence on a small set of standards (Monkedo, 2024).
The agentic wave looks like martech on a much wider scale. Martech sprawl was largely confined to marketing departments; agentic proliferation spans every function. Martech tools were built by developers and sold to marketers; agents are increasingly built by the end users themselves. The pattern—artisanal sprawl, partial platform consolidation, persistent long-tail diversity—may be the more realistic software precedent than a clean industrial-revolution narrative.
Emerging Standardization at the Integration Layer
Emerging standards such as MCP (Model Context Protocol, developed by Anthropic, adopted by OpenAI March 2025, Google April 2025) and A2A (Google’s Agent2Agent protocol) begin to standardize how agents communicate (Bain & Company, 2025). These play a role analogous to early interchangeable-parts ideas: they standardize interfaces and components, but they do not yet amount to a mature, widely enforced regime of interoperability and governance. Bain notes that “the emergence of these standards has shown strong network-effect dynamics—lightning-fast tipping points, winner takes most” (Bain & Company, 2025). Critically, they do not yet provide shared vocabulary, semantic definitions, or policy frameworks needed for true interoperability (Bain & Company, 2025).
Will Agents Follow the Same Trajectory? Parallels and Counter-Forces
Historical cottage industries typically evolved through three forces: demand pressure and scale economies made home-based production too slow and costly, driving centralization and mechanization (Corporate Finance Institute, n.d.); capital concentration shifted production from worker-owned tools to capitalist-owned means of production (Corporate Finance Institute, n.d.); and labor specialization enabled fine-grained division of work, increasing throughput at the cost of worker autonomy (History Crunch, 2022).
In the agentic context, there are partial analogues to each:
- Demand for reliability and integration is likely to push some consolidation. Enterprises will not run on thousands of unaudited personal agents indefinitely, just as they could not run on thousands of uncoordinated cottage producers (The Cyber Institute, 2025).
- Capital and data gravity favour major platforms—clouds, model providers, large SaaS—as de facto “factories” for agents: hosting, orchestrating, and governing fleets built on shared infrastructure (Deloitte Insights, 2026).
- Specialization is already visible: foundation model providers, agent orchestration platforms, domain-specific tools, and enterprise “agent platforms” emerging inside large firms (InformationWeek, 2026).
But there are important differences that make a one-to-one forecast unsafe:
- Replicability and composability: software agents are cheap to spawn, compose, and discard. The transaction costs are far lower than building or shutting a factory. This could sustain longer-lasting diversity—more like martech’s persistent sprawl—rather than rapid collapse into a handful of “factories” (Brinker, 2025).
- Fragmented governance: as discussed under Hypothesis 2, regulation is diverging by region and culture, which may produce different local equilibria rather than a single global pattern (Gualdi & Cordella, 2025).
- The collapsed engineer-user gap: citizen developers and line-of-business users already build automations and apps; agents intensify that dynamic. The agentic cottage phase is more democratized and potentially more persistent than historical cottage industry, because the “workers” now own—and can endlessly replicate—their own means of production (Adapt Consulting, 2025).
In conclusion, the cottage industry analogy is valid as a structural description of the current phase: decentralized, bespoke, ungoverned production from commoditized raw materials, with emergent but immature standardization at the integration layer. There are strong structural rhymes with historical cottage-to-factory transitions, and the martech and low-code eras show that software ecosystems can move from artisanal sprawl toward partial consolidation into platforms and suites. However, I cannot yet find an empirical basis to assert that agents will necessarily converge on the small, factory-like set of standardized forms that many of us (engineers, in particular) might wish for. The eventual industrial structure of the “agent economy” remains an open question rather than a settled historical pattern.
The Cognitive Adaptation Imperative: Why This Transition Is Harder Than It Looks
This is the dimension most analyses miss.
The shift from monolithic SaaS to agent-mediated, individually-generated workflows does not merely change what tools people use. It changes what cognitive demands are placed on them.
The Cognitive Paradox
Cognitive Load Theory (John Sweller, 1988) identifies three types of mental load: intrinsic (the inherent difficulty of the task), extraneous (how information is presented), and germane (the effort required to build understanding) (“Cognitive Offloading or Cognitive Overload,” 2025; Ron Labs AI, 2025). AI agents initially reduce extraneous load by simplifying interfaces. But they simultaneously create new intrinsic load: understanding what the agent did, whether it did it correctly, how its outputs relate to organizational truth, and when to override it.
A 2025 study indexed in PMC found a paradox: “AI can concurrently function as an efficiency facilitator while also contributing to cognitive debilitation, thereby supporting hypotheses regarding learned powerlessness and the erosion of intrinsic decision-making autonomy” (“AI and Cognitive Decision-Making,” 2025). The study found strong correlations between AI’s propensity to overwhelm users with choices and lower attention capacity (r = 0.908) and information overload (r = 0.920) (“AI and Cognitive Decision-Making,” 2025). An April 2025 paper reframed the AI safety debate to center on cognitive overload as “a bridge between near-term harms and long-term risks” (“AI as Catalyst,” 2025).
This is the cognitive equivalent of the factory worker who could operate one machine but couldn’t understand the system. In the agentic era, individuals can prompt one agent but may not understand the data it drew from, the biases it encoded, or the organizational implications of its output.
Evidence from Software Engineering: Cognition Shifts from Design to Verification
This cognitive reconfiguration is already observable in modern software engineering workflows, where generative AI shifts effort away from up-front design/architecture and toward verification, integration, and constraint management (reviewing generated code, debugging, testing, and preventing uncontrolled feature sprawl). A randomized controlled trial by METR studying experienced open-source developers working on tasks in their own repositories found that allowing AI tools increased completion time by 19%, despite developers believing they were faster; the reported gap highlights how AI can introduce substantial overhead in prompt iteration, integration, and review, as well as miscalibrated confidence about actual productivity (Becker et al., 2025; METR, 2025).
A systematic literature review synthesizing 37 peer-reviewed studies (2014-2024) on LLM assistants and developer productivity concludes that while LLM tools can reduce task initiation overhead and accelerate some code-adjacent tasks, the literature repeatedly reports risks consistent with “effort transplant”: cognitive offloading, disrupted flow, and inconsistent effects on code quality and maintainability (which then pushes effort downstream into review, repair, and governance rather than up-front planning) (“Impact of LLM-Assistants,” 2025).
Security evidence points in the same direction: Perry et al. conducted a large-scale user study (“Do Users Write More Insecure Code with AI Assistants?”) and found that participants with access to an AI assistant wrote significantly less secure code than those without, while also being more likely to believe they had written secure code-an overconfidence dynamic that increases the need for disciplined review and security gates (Perry et al., 2023).
Finally, large-scale engineering telemetry is consistent with feature sprawl and downstream cognitive burden. GitClear analyzed ~153 million changed lines of code across four years (2020-2023) and reported rising code churn (lines reverted or updated within two weeks) projected to double compared to the pre-AI baseline, along with increasing “added” and “copy/pasted” code and declining “moved” (refactoring/reuse) code-patterns aligned with faster scaffolding, more duplication, and a larger long-term review/maintenance load (GitClear, 2024).
Implication for the broader thesis: as agentic systems and AI coding tools expand “local software creation capacity,” the individual’s role shifts from careful construction to governed orchestration-allocating attention to verification, narrowing scope, and preventing accidental complexity. The core competency becomes disciplined constraint-setting (security boundaries, architectural invariants, test coverage, performance budgets, dependency control) so that speed does not collapse into brittleness, vulnerability, and unmaintainable feature sprawl (GitClear, 2024; “Impact of LLM-Assistants,” 2025; Perry et al., 2023).
In the agentic web stack described in the original essay (semantic data substrate, agent transport, model inference, conversational delivery), software engineering becomes the earliest “microcosm” of the broader transition: humans increasingly specify intent and constraints while machines generate the intermediate artifacts-forcing the human cognitive burden to concentrate in verification, governance, and coordination rather than direct construction (Becker et al., 2025; “Impact of LLM-Assistants,” 2025; Li, 2025).
Near-Term Cognitive Adaptation (2025-2027): From Consumer to Orchestrator
The shift in mental model:
The fundamental cognitive adaptation required is a shift from application user (I use tools that have been designed, tested, and maintained by professionals) to system orchestrator (I compose capabilities from agents, models, and data sources, and I am responsible for evaluating the outputs). This is a qualitatively different cognitive posture.
The vibe-coding-to-context-engineering arc as the concrete mechanism of this shift:
This consumer-to-orchestrator transition is already observable through a specific skill progression that crystallized with remarkable speed across 2025. Andrej Karpathy coined “vibe coding” in February 2025 to describe the practice of prompting AI to generate code sight unseen, accepting all outputs without review—suitable, in his words, for “throwaway weekend projects” (Karpathy, 2025a). Just four months later, Karpathy himself endorsed the professional counterpart: “context engineering,” which he described as “the delicate art and science of filling the context window with just the right information for the next step,” encompassing “task descriptions, explanations, few-shot examples, RAG, multimodal data, tools, state and history, and compacting” (Karpathy, 2025b). By July 2025, Gartner declared that “context engineering is in, and prompt engineering is out,” advising AI leaders to “prioritize context over prompts—building context-aware architectures, integrating dynamic data and reimagining human-AI interfaces” (Gartner, 2025). Thoughtworks and MIT Technology Review published the definitive retrospective in November 2025, documenting how the industry moved from vibe coding’s speed-first ethos toward context engineering’s recognition that “effectively managing context is far more critical than raw computational scale” (Mugrage, 2025).
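Karpathy’s component list can be made concrete. The sketch below is a hypothetical illustration, not any framework’s real API: it assembles a context window from the ingredients he enumerates (system instructions, few-shot examples, retrieved documents, history) under a fixed budget, approximating tokens by word count and trimming the oldest history first.

```python
# Illustrative sketch of context assembly under a token budget.
# All names are hypothetical; tokens are approximated by word count.

def approx_tokens(text: str) -> int:
    return len(text.split())

def assemble_context(system: str, examples: list[str], retrieved: list[str],
                     history: list[str], budget: int) -> str:
    """Fill the window in priority order: system prompt, few-shot examples,
    retrieved documents, then as much recent history as still fits."""
    parts = [system] + examples + retrieved
    used = sum(approx_tokens(p) for p in parts)
    kept_history: list[str] = []
    # Walk history newest-first so the most recent turns survive trimming.
    for turn in reversed(history):
        cost = approx_tokens(turn)
        if used + cost > budget:
            break
        kept_history.append(turn)
        used += cost
    return "\n\n".join(parts + list(reversed(kept_history)))

ctx = assemble_context(
    system="You are a contracts analyst.",
    examples=["Q: Is clause 4 exclusive? A: Yes."],
    retrieved=["Clause 4: Supplier grants an exclusive license."],
    history=["old turn " * 50, "User: summarize clause 4"],
    budget=40,
)
```

The design choice worth noticing is the priority ordering: durable instructions and grounding documents are protected, while conversational history is the sacrificial layer, which is one simple answer to “just the right information for the next step.”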
The empirical evidence validates this as more than a branding shift. A field study of professional developers (N=13 observations, N=99 surveys) found that experienced software engineers “retain their agency in software design and implementation out of insistence on fundamental software quality attributes,” employing deliberate control strategies rather than passively accepting AI suggestions (Huang et al., 2025). The METR randomized controlled trial reinforces this: experienced developers accepted fewer than 44% of AI generations and spent significant effort on review and rejection—the opposite of vibe coding’s “accept all” posture (Becker et al., 2025). A validated psychometric scale measuring prompt engineering competence found a statistically significant positive correlation (r = 0.605, p < 0.01) between structured AI interaction capability and sustainable AI use (Gibreel & Arpaci, 2025). In practical terms, structured tagged context prompts eliminate hallucinations with 98.88% effectiveness (Feldman, Foulds, & Pan, 2023), while the Agentic Context Engineering (ACE) framework achieves a 10.6% improvement on agent benchmarks by treating contexts as “evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation” (Zhang et al., 2025a). Meanwhile, undisciplined AI use extracts measurable costs: IBM’s 2025 Cost of Data Breach Report found that shadow AI breaches cost organizations USD670,000 more per incident than standard breaches, with 97% of compromised organizations lacking proper AI access controls (IBM, 2025). Vibe-coded applications are 40% more likely to contain critical security vulnerabilities and produce 8x more duplicate code blocks than human-written equivalents (Ge et al., 2025).
In the near term, vibe coding remains the dominant mode of interaction for most users—it is the path of least resistance and sufficient for personal, low-stakes applications. But the empirical evidence above indicates that a professional vanguard is already diverging: those who encounter the limitations of unstructured interaction—hallucination, architectural drift, security vulnerabilities, unmaintainable code—begin imposing structure on their AI interactions through system prompts, project scaffolding, and governance constraints. Context engineering, in other words, is emerging as a professional counteracting force against the fragmentation and unreliability that characterize the cottage phase of disaggregation described under Hypothesis 1. The full progression—from casual vibe coder to disciplined context engineer to system orchestrator—represents a skill ladder whose rungs materialize progressively through the medium and long term as orchestration tooling matures and foundation models grow more capable.
Specific adaptation requirements:
-
Developing “negative capability”: The ability to remain comfortable with uncertainty and incomplete information. Organizational research shows that leaders who frame challenges as hypotheses to test rather than problems to solve demonstrate significantly better adaptation outcomes. Microsoft’s cultural transformation under Nadella-from “know-it-all” to “learn-it-all”-is the enterprise-scale example (Innovative Human Capital, 2025a).
-
Embracing “provisional knowing”: Organizations that adopt language acknowledging the temporary nature of current understanding-“based on what we know now,” “our current best thinking suggests”-create cultures where updating beliefs in response to new evidence becomes normal rather than threatening (Innovative Human Capital, 2025a).
-
Building evaluation literacy: The critical near-term skill is not prompt engineering but output evaluation. Can you assess whether an agent’s output is accurate? Complete? Biased? Aligned with intent? Consistent with organizational data? This is a form of cognitive literacy that most knowledge workers do not currently possess.
-
Understanding the data substrate: Just as factory workers eventually needed to understand not just their machine but the production system, knowledge workers need to understand not just their agent but the data it operates on-its provenance, freshness, governance, and limitations.
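The evaluation-literacy point above can be expressed as explicit, checkable gates on an agent’s output rather than a gut-feel read. The checks, field names, and thresholds below are hypothetical examples for illustration, not a standard framework.

```python
# Illustrative sketch: output evaluation as explicit gates.
# Checks and names are hypothetical, not an established methodology.

def evaluate_output(output: str, source_facts: list[str],
                    required_topics: list[str]) -> dict:
    issues = []
    # Grounding: every numeric claim in the output should appear in the sources.
    numbers = [tok for tok in output.split() if tok.strip("%.,").isdigit()]
    source_blob = " ".join(source_facts)
    for n in numbers:
        if n.strip("%.,") not in source_blob:
            issues.append(f"unsupported figure: {n}")
    # Completeness: the output should address each required topic.
    for topic in required_topics:
        if topic.lower() not in output.lower():
            issues.append(f"missing topic: {topic}")
    return {"ok": not issues, "issues": issues}

report = evaluate_output(
    output="Revenue grew 12% last quarter; churn was not discussed.",
    source_facts=["Q3 revenue grew 12% year over year."],
    required_topics=["revenue", "churn"],
)
```

Even this toy version makes the cognitive shift visible: the evaluator’s skill lives in choosing which checks to encode, not in reading every output line by line.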
Mid-Term Cognitive Adaptation (2027-2030): From T-Shaped to V-Shaped Professionals
IBM Research has advocated for T-shaped professionals who “uniquely combine specialization (critical thinking and problem-solving depth) and flexibility (empathy, breadth of knowledge, skills, experience, and complex communication abilities) and who also use smart machines as assistants” (IBM Research, 2018). But the agentic era may require something further: V-shaped professionals who develop “graduated depth across adjacent domains, creating a more fluid transition between specialization and generalization” (Innovative Human Capital, 2025b).
The difference matters. T-shaped professionals have deep expertise in one area and broad awareness across others. V-shaped professionals develop working depth in multiple adjacent areas-enough to evaluate agent outputs across domains, identify when agents are crossing domain boundaries poorly, and synthesize insights that require understanding of multiple fields simultaneously (Innovative Human Capital, 2025b).
Adobe’s “T to V” initiative systematically reshaped its engineering culture: expertise expansion opportunities, cross-functional problem-solving forums, redesigned performance evaluations rewarding knowledge versatility (Innovative Human Capital, 2025b). IBM redesigned its technical learning ecosystem around “knowledge constellations” rather than isolated competencies (Innovative Human Capital, 2025b).
Organizational sensemaking becomes critical:
The law of requisite variety (Ashby) states that an organization’s internal diversity must match the complexity of its environment. As agents proliferate and generate diverse, potentially conflicting outputs, organizations will need distributed sensemaking capabilities-regular forums for collective interpretation across hierarchical levels that develop “richer, more nuanced understanding that informs better adaptation” (David, 2025; Innovative Human Capital, 2025a). As we can all see in our own rollouts of AI adoption and development of fluency, this is easier said than done, and its success will likely be hard fought, learned through organizational trial and error. All of this is to say: not all organizations and their leaders will survive this transition.
The orchestration tooling inflection:
As the vibe-coding-to-context-engineering arc described in the near-term section matures, the medium term brings a critical development: natural language orchestration tooling lowers the barrier to context engineering from a specialized professional skill to an accessible organizational competency. Anthropic’s Agent Skills framework exemplifies the pattern: rather than “building fragmented, custom-designed agents for each use case,” practitioners “specialize their agents with composable capabilities by capturing and sharing their procedural knowledge” through portable, natural language instruction sets that any agent can dynamically load when relevant (Zhang, Lazuka, & Murag, 2025; Anthropic, 2025). Skills employ progressive disclosure—metadata loads first with minimal token usage, full procedural instructions expand only when contextually appropriate—allowing numerous specialized capabilities to remain available without overwhelming the context window. This architecture reflects a broader engineering insight: “Building effective AI agents is less about finding the right words and more about determining what configuration of context is most likely to generate our model’s desired behavior” (Anthropic, 2025). Combined with natural language project documents that encode organizational knowledge, compliance requirements, and domain-specific procedures, these tools transform context engineering from an individual craft into an organizational practice.
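The progressive-disclosure pattern can be sketched in miniature. The skill names, descriptions, and the naive keyword relevance test below are invented for illustration; in Agent Skills itself, the model judges relevance from the metadata rather than matching keywords.

```python
# Sketch of progressive disclosure: metadata always loads, full procedural
# instructions expand only when relevant. Skills and matching are hypothetical.

SKILLS = {
    "legal-review": {
        "description": "review contracts for non-standard clauses",
        "body": "Step 1: extract clauses. Step 2: compare to playbook.",
    },
    "expense-audit": {
        "description": "audit expense reports against policy",
        "body": "Step 1: load policy. Step 2: flag out-of-policy lines.",
    },
}

def build_context(task: str) -> str:
    # Metadata always loads: cheap in tokens, keeps every capability visible.
    lines = [f"- {name}: {s['description']}" for name, s in SKILLS.items()]
    context = "Available skills:\n" + "\n".join(lines)
    # Full instructions expand only for skills the task appears to need.
    for name, skill in SKILLS.items():
        if any(word in task.lower() for word in skill["description"].split()[:2]):
            context += f"\n\n[{name}]\n{skill['body']}"
    return context

ctx = build_context("please review this contract for risky clauses")
```

The point of the pattern is the asymmetry: every skill costs a one-line description, but only the skill the task activates pays its full token price.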
A complementary dynamic accelerates this transition: as foundation models grow more capable, the marginal benefit of highly specialized agent workflows diminishes. Research across 180 multi-agent configurations finds that coordination yields diminishing or even negative returns once single-agent baselines exceed approximately 45% accuracy, and that for sequential reasoning tasks, every multi-agent variant degraded performance by 39–70% compared to a single capable agent (Kim et al., 2025). A separate study confirms that “the benefits of multi-agent systems over single-agent systems diminish as LLM capabilities improve,” with a hybrid approach achieving 1.1–12% accuracy improvement at up to 20% cost reduction by routing simple tasks to single agents and reserving multi-agent configurations for genuinely complex parallel operations (Gao et al., 2025). Anthropic’s own engineering trajectory reflects this convergence: their December 2024 guidance advised developers to start simple and “consider adding complexity only when it demonstrably improves outcomes” (Schluntz & Zhang, 2024), while their October 2025 Agent Skills framework formalizes the architectural response—general-purpose models layered with composable specialization modules, rather than proliferating bespoke agents. For V-shaped professionals, this means that orchestration competency increasingly centers on knowing when complexity is warranted rather than defaulting to elaborate multi-agent architectures.
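The hybrid routing idea can be sketched as follows. The dependency heuristic and the function itself are illustrative inventions, not logic taken from the cited papers: the intuition is simply that sequential chains stay with one capable agent while independent subtasks may justify fan-out.

```python
# Sketch of hybrid routing: single agent by default, multi-agent only when
# subtasks are genuinely independent. Heuristic and names are hypothetical.

def route(subtasks: list[str], dependencies: list[tuple[int, int]]) -> str:
    """dependencies holds (i, j) pairs: subtask j needs subtask i's output."""
    if len(subtasks) <= 1:
        return "single-agent"
    dependent = {j for _, j in dependencies}
    independent = [i for i in range(len(subtasks)) if i not in dependent]
    # A sequential chain gains nothing from coordination overhead; parallel
    # fan-out across independent subtasks is where multi-agent can pay off.
    return "multi-agent" if len(independent) >= 2 else "single-agent"

# Three independent lookups fan out; a chained reasoning task would not.
decision = route(["search filings", "search news", "search patents"], [])
```

This is the shape of the orchestration judgment the paragraph describes: the competency is the routing decision, not the elaborateness of the architecture.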
Context window integrity as the architectural discipline of orchestration:
As context engineering matures from an individual skill into an organizational practice, the separation of agents that persists in well-designed architectures serves a specific and empirically grounded purpose: logical separation of concerns as an architectural discipline of orchestration. The foundational “Lost in the Middle” research demonstrates that LLM performance follows a U-shaped curve, degrading significantly when relevant information appears in the middle of long contexts, even for models explicitly designed for extended windows (Liu et al., 2024). Subsequent work establishes that this is not merely a retrieval problem: even when models can perfectly retrieve all relevant information, performance still degrades 13.9–85% as input length increases (Du et al., 2025). A clinical-scale evaluation provides the most direct evidence for separation as a mitigation strategy: orchestrated multi-agent systems maintained 90.6% accuracy across 80 concurrent tasks while single-agent accuracy collapsed to 16.6%, with the mechanism explicitly identified as “preventing context interference” and token usage reduced up to 65-fold (Klang et al., 2025).
These context window limitations are, however, an active frontier of research rather than a permanent constraint. The Lizard framework demonstrates constant-memory infinite-context generation through a hybrid architecture combining sliding window attention with gated linear attention, achieving near-lossless performance recovery while maintaining stable throughput at sequence lengths where conventional approaches fail (Nguyen et al., 2025). This rolling context window approach is architecturally compatible with progressive disclosure patterns already employed in agent skill frameworks. Anthropic’s own context engineering for long-horizon agents employs compaction—a self-summarization protocol that distills critical details into compressed form when conversations approach context limits—alongside structured note-taking to external memory and multi-agent delegation where sub-agents return condensed summaries of roughly 1,000–2,000 tokens to a primary orchestrator (Rajasekaran et al., 2025). Claude Opus 4.6 now supports one million tokens of context in beta, enabling processing of up to 1,500 pages in a single prompt (Maruf, 2026).
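Compaction can be sketched as follows. The summarizer here is a trivial stand-in for the model call Anthropic describes, and the word-count limit and turn structure are illustrative assumptions.

```python
# Sketch of a compaction protocol: when the transcript nears the window limit,
# older turns are distilled into a compact note and dropped. The summarizer is
# a stand-in; a real system would ask the model to distill decisions and state.

def summarize(turns: list[str]) -> str:
    # Stand-in: keep the first clause of each turn.
    return "COMPACTED: " + " | ".join(t.split(".")[0] for t in turns)

def compact(history: list[str], limit: int, keep_recent: int = 2) -> list[str]:
    total = sum(len(t.split()) for t in history)
    if total <= limit:
        return history
    # Preserve the most recent turns verbatim; compress everything older.
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [
    "We chose Postgres for the ledger. Long rationale follows " + "x " * 40,
    "Schema draft v2 agreed. More detail " + "y " * 40,
    "User: now add audit logging",
    "Agent: drafting the audit table",
]
compacted = compact(history, limit=30)
```

The structural choice mirrors the description above: recency is preserved losslessly while older material survives only as a distilled note, trading fidelity for continuity.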
Yet even as these technical constraints relax, the discipline of separating concerns—decomposing problems into discrete units, working them independently, and collaborating on solutions in a controlled manner—persists as a fundamentally human organizational practice. The same principle that makes modular software architecture superior to monolithic design applies to cognitive work: clean boundaries reduce interference, clarify accountability, and make outputs verifiable. As agent architectures evolve, this separation of concerns is likely to manifest as a secondary judge or critic layer—agents that evaluate, validate, and reconcile the outputs of other agents. But that critic layer does not operate in a vacuum. It requires human-generated data to calibrate its judgments: domain expertise encoded as evaluation criteria, organizational priorities expressed as governance constraints, and professional experience distilled into the heuristics that distinguish adequate output from genuinely useful work. The architectural discipline, in other words, matures from managing context window limitations into managing the alignment between machine-generated outputs and human intent—a discipline that remains irreducibly human even as the mechanisms become increasingly automated.
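The critic-layer pattern, with human expertise encoded as executable criteria, might look like this in miniature. The criteria names and rule format are hypothetical; the point is that the checks are human-authored even though their application is automated.

```python
# Sketch of a judge/critic layer: machine outputs are checked against
# human-authored criteria before acceptance. Criteria are hypothetical.

# Human-encoded evaluation criteria: domain expertise as executable checks.
CRITERIA = {
    "cites_source": lambda draft: "[source:" in draft,
    "within_scope": lambda draft: "pricing" not in draft.lower(),
    "non_empty": lambda draft: len(draft.strip()) > 0,
}

def critic(draft: str) -> tuple[bool, list[str]]:
    failures = [name for name, check in CRITERIA.items() if not check(draft)]
    return (not failures, failures)

ok, failures = critic("Clause 7 limits liability [source: MSA 7.2].")
```

Note where the human contribution sits: not in running the checks, but in deciding that citation, scope, and substance are what distinguish adequate output from useful work.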
The cognitive model shifts from:
- Pre-agentic: “I am skilled at using specific tools” — the application-user posture, where software is a finished product consumed as designed
- Early agentic (vibe coding): “I am skilled at prompting agents and accepting their outputs” — speed of generation dominates, but evaluation remains shallow and confidence is often miscalibrated
- Intermediate agentic (context engineering): “I am skilled at structuring what agents see, evaluating what they produce, and imposing governance on how they operate” — the professional counterforce against fragmentation, where orchestration tooling and context window discipline replace ad hoc prompting
- Mature agentic (system orchestration): “I am skilled at understanding systems of agents, data, and humans, navigating their interactions, and maintaining the architectural hygiene that keeps them aligned with intent”
Macro-Evolutionary Trajectory: Three Concurrent Arcs
The original essay described the New Information Architecture as: semantic databases, agent transport, LLM aggregation, conversational/immersive delivery (Li, 2025). The evidence gathered for this follow-up suggests the macro trajectory can now be refined into three concurrent evolutionary arcs:
Arc 1: The Infrastructure Arc (Data and Standards)
The enduring substrate is the semantic data layer. SaaS applications are being reduced to their essential function-governed data repositories. The competitive moat moves from application features to data quality, semantic richness, and governance rigor. The standardization pathway runs from MCP/A2A (transport protocols, already adopted) through semantic vocabularies (industry-specific ontologies, emerging) to comprehensive governance frameworks (composable, auditable, version-controlled, still nascent) (AtScale, 2025a; AtScale, 2025b; Bain & Company, 2025).
Arc 2: The Interaction Arc (Agents and Interfaces)
The agent layer replaces the application layer as the primary locus of business logic and user interaction. This is the “collapse” Nadella described-CRUD databases persist, but the logic, the interface, and the value migrate to agents (Nadella, 2024; Windows Central, 2024). The interaction modality shifts from visual/click-based to conversational/ambient, exactly as the original essay predicted (Li, 2025). By 2029, Gartner expects 50% of knowledge workers to create AI agents on demand (Business 2.0 Channel, 2026).
Arc 3: The Cognitive Arc (Human Adaptation)
This is the arc missing from most analyses and represents this essay’s distinctive contribution. As software functionality disaggregates to the individual level, the cognitive demands on individuals increase even as the mechanical demands decrease. The paradox of cognitive offloading-AI reduces effort on individual tasks while increasing the systemic complexity the individual must navigate-creates a new form of professional capability requirement (“AI and Cognitive Decision-Making,” 2025; “AI as Catalyst,” 2025; “Cognitive Offloading or Cognitive Overload,” 2025).
Conclusion: The Integration Point Is Human
The original thesis described the shift from “search and browse” to “ask and receive” (Li, 2025). This follow-up extends that to a deeper claim: the shift from “use applications” to “orchestrate intelligence” is not merely a technological transition but a cognitive one, and the primary bottleneck to realizing the agentic web’s potential is not infrastructure or standards but the human capacity to operate effectively in a world where the individual is the integration point.
Four findings emerge from the evidence. First, disaggregation is real and accelerating-USD300 billion in SaaS market capitalization has already evaporated (Muir, 2026), and both Bain and Deloitte project structural disruption to the application layer within three to five years (Bain & Company, 2025; Deloitte Insights, 2026). Second, the infrastructure required for disaggregation to function at scale-semantic layers, governed ontologies, protocol standards-remains immature, and the governance landscape is fragmenting along geopolitical lines rather than converging toward a single coherent framework (Equinix, 2026; Gualdi & Cordella, 2025). Third, the cottage industry analogy describes the current phase well-decentralized, bespoke, ungoverned-but the martech and low-code precedents suggest that consolidation, if it comes, may be partial and prolonged rather than clean and decisive (Brinker, 2025). Fourth, and most critically, the cognitive demands of an agent-mediated world are qualitatively different from those of an application-mediated one. The evidence from software engineering already shows the pattern: AI shifts effort from construction to verification, introduces miscalibrated confidence, and increases downstream maintenance burdens even as it accelerates initial output (Becker et al., 2025; Perry et al., 2023; GitClear, 2024).
These findings carry different implications for different stakeholders. For enterprise leaders, the priority is not selecting which AI tools to adopt but investing in the semantic and governance infrastructure that makes any agentic deployment trustworthy-while recognizing that the governance gap between policy and practice is large and unlikely to close quickly (Forte Group, 2025; Strategy Software, 2025). For individual professionals, the imperative is developing what this essay terms negative capability: the capacity to operate productively amid ambiguity, evaluate AI-generated outputs critically, and maintain coherent judgment across fragmented information environments (Ron Labs AI, 2025; David, 2025). For policymakers and educators, the challenge is designing competency frameworks and institutional structures that protect individuals from cognitive overload while enabling the productivity gains the technology promises-in a regulatory landscape that is diverging, not converging, across jurisdictions (Gualdi & Cordella, 2025; History of OSH, n.d.; School History, 2024).
Importantly, historical transitions are messier than retrospective narratives suggest, and the eventual industrial structure of the agent economy may look nothing like what any existing framework predicts. The evidence base for cognitive impacts remains early-stage-longitudinal studies of how professionals adapt to agent-mediated workflows do not yet exist. The productivity data from software engineering offers a useful preview but may not generalize to other domains.
What can be said with confidence is this: the transition from cottage industry to factory system was not primarily about machines. It was about reorganizing human work, cognition, and coordination around a new technological capability. The machines came first; the standards, skills, and organizational forms that made them safe and productive took decades to follow. We are in the cottage industry phase of a cognitive transition, and the semantic layers, governance frameworks, and professional competency models are only beginning to be written. And while this transition will likely unfold far faster than the industrial revolution’s shift to the factory system, there is an irony in our collective fretting about the removal of the human from the means of cognitive production: for the foreseeable future, at least, we may need more human experience and discernment, not less.
References
Adapt Consulting. (2025). Shadow IT v citizen developers. https://www.adaptconsultingcompany.com/2025/02/25/shadow-it-v-citizen-developers/
AI and cognitive decision-making. (2025). PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC12367725/
AI as catalyst for cognitive overload. (2025). arXiv. https://arxiv.org/html/2504.19990v1
Anthropic. (2025). Effective context engineering for AI agents. Anthropic Engineering. https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
Anthropic. (2025). Skills explained. Claude Blog. https://claude.com/blog/skills-explained
AtScale. (2025a). The state of the semantic layer: 2025 in review. AtScale Blog. https://www.atscale.com/blog/semantic-layer-2025-in-review/
AtScale. (2025b). Why 2025 redefined the semantic layer. AtScale Blog. https://www.atscale.com/blog/why-ai-redefined-the-semantic-layer/
Bain & Company. (2025). Will agentic AI disrupt SaaS? Bain Technology Report 2025. https://www.bain.com/insights/will-agentic-ai-disrupt-saas-technology-report-2025/
Bain & Company. (2026). How soon will agentic AI redefine enterprise resource planning? https://www.bain.com/insights/how-soon-will-agentic-ai-redefine-enterprise-resource-planning-snap-chart/
Becker, J., et al. (2025). Measuring the impact of early-2025 AI on experienced open-source developers. arXiv. https://arxiv.org/abs/2507.09089
Bez Kabli. (2026). Micro-app boom: AI “vibe coding” lets non-programmers build apps instead of buying them. https://www.bez-kabli.pl/micro-app-boom-ai-vibe-coding-lets-non-programmers-build-apps-instead-of-buying-them/
Brinker, S. (2017). Has the martech landscape consolidated? Not really. Chief Martec. https://chiefmartec.com/2017/10/early-marketing-technology-landscape-consolidated/
Brinker, S. (2025). 2025 marketing technology landscape supergraphic: 100x growth since 2011, but now with AI. Chief Martec. https://chiefmartec.com/2025/05/2025-marketing-technology-landscape-supergraphic-100x-growth-since-2011-but-now-with-ai/
Business 2.0 Channel. (2026). How agentic AI will disrupt the SaaS industry from 2026 to 2030. https://business20channel.tv/how-agentic-ai-will-disrupt-saas-industry-2026-2030-31-january-2026
Cloud Wars. (2025). Bill McDermott channels Satya Nadella: AI agents will turn apps into CRUD. https://cloudwars.com/cloud-wars-minute/bill-mcdermott-channels-satya-nadella-ai-agents-will-turn-apps-into-crud/
CoderCops. (2026). Anthropic’s AI legal tool triggers USD285 billion ‘SaaSpocalypse.’ https://www.codercops.com/blog/anthropic-legal-ai-saaspocalypse-2026/
Cognitive offloading or cognitive overload? How AI alters the decision-making landscape. (2025). PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC12678390/
Cohan, P. (2026, February 6). SaaSpocalypse now? AI is disrupting SaaS-but not all software is doomed. Forbes. https://www.forbes.com/sites/petercohan/2026/02/06/saaspocalypse-now-ai-is-disrupting-saas-but-not-all-software-is-doomed/
ComplianceHub. (2025). Global AI law snapshot: A comparative overview of AI regulations in the EU, China, and the USA. https://www.compliancehub.wiki/global-ai-law-snapshot-a-comparative-overview-of-ai-regulations-in-the-eu-china-and-the-usa/
Corporate Finance Institute. (n.d.). Cottage industry. https://corporatefinanceinstitute.com/resources/economics/cottage-industry/
CX Today. (2024). Microsoft CEO: AI agents will transform SaaS as we know it. https://www.cxtoday.com/customer-analytics-intelligence/microsoft-ceo-ai-agents-will-transform-saas-as-we-know-it/
DailyAIWorld. (2026). Disposable software & vibe coding 2026: The end of SaaS, the rise of software on demand. https://dailyaiworld.com/post/disposable-software-vibe-coding-2026-the-end-of-saas-the-rise-of-software-on-demand
David. (2025). Advanced AI-enhanced knowledge management strategy: Embracing sensemaking, uncertainty, requisite variety, and AI technologies. LinkedIn. https://www.linkedin.com/pulse/advanced-ai-enhanced-knowledge-management-strategy-ai-david
Du, Y., Tian, M., Ronanki, S., Rongali, S., Bodapati, S., Galstyan, A., Wells, A., Schwartz, R., Huerta, E. A., & Peng, H. (2025). Context length alone hurts LLM performance despite perfect retrieval. Findings of EMNLP 2025. https://arxiv.org/abs/2510.05381
Deloitte Insights. (2026). SaaS meets AI agents. Deloitte TMT Predictions 2026. https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2026/saas-ai-agents.html
Equinix. (2026, January 29). The data problem behind agentic AI, and what you can do about it. Equinix Blog. https://blog.equinix.com/blog/2026/01/29/the-data-problem-behind-agentic-ai-and-what-you-can-do-about-it/
Feldman, P., Foulds, J. R., & Pan, S. (2023). Trapping LLM hallucinations using tagged context prompts. arXiv. https://arxiv.org/abs/2306.06085
Forbes Tech Council. (2025, February 19). How AI is disrupting the SaaS landscape and reshaping the future. Forbes. https://www.forbes.com/councils/forbestechcouncil/2025/02/19/how-ai-is-disrupting-the-saas-landscape-and-reshaping-the-future/
Forte Group. (2025). AI governance in practice: Critical gaps in implementation and strategy. https://fortegrp.com/insights/ai-governance-in-practice-critical-gaps-in-implementation-and-strategy
Gao, M., Li, Y., Liu, B., Yu, Y., Wang, P., Lin, C.-Y., & Lai, F. (2025). Single-agent or multi-agent systems? Why not both? arXiv. https://arxiv.org/abs/2505.18286
Gartner. (2025). Context engineering: Why it’s replacing prompt engineering for enterprise AI success. https://www.gartner.com/en/articles/context-engineering
Ge et al. (2025). A survey of vibe coding with large language models. arXiv. https://arxiv.org/abs/2510.12399
Gibreel, O., & Arpaci, I. (2025). Development and validation of the prompt engineering competence scale (PECS). Information Development. https://journals.sagepub.com/doi/10.1177/02666669251336455
GitClear. (2024). Coding on Copilot: 2023 data suggests downward pressure on code quality. https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality
Glean. (2025). Will AI agents replace SaaS? Key insights for 2025. https://www.glean.com/perspectives/will-ai-agents-replace-saas-tools-as-the-new-operating-layer-of-work
Gualdi, F., & Cordella, A. (2025). Artificial intelligence and decision-making: A comparative analysis of regulatory approaches in the EU, US, and China. arXiv. https://arxiv.org/abs/2410.21279
History Crunch. (2022). Cottage industry vs. factory system. https://www.historycrunch.com/cottage-industry-vs-factory-system.html
History of OSH. (n.d.). Timeline: History of occupational safety and health. https://www.historyofosh.org.uk/timeline.html
Huang, R., Reyna, A., Lerner, S., Xia, H., & Hempel, B. (2025). Professional software developers don’t vibe, they control: AI agent use for coding in 2025. arXiv. https://arxiv.org/abs/2512.14012
IBM. (2025). Cost of a data breach report 2025. IBM Security. https://www.ibm.com/reports/data-breach
IBM Research. (2018). Cultivating T-shaped professionals in the era of digital transformation. Service Science. https://research.ibm.com/publications/cultivating-t-shaped-professionals-in-the-era-of-digital-transformation
Impact of LLM-assistants on software developer productivity: A systematic literature review. (2025). arXiv. https://arxiv.org/abs/2507.03156
Industrial Revolution. (n.d.). In Wikipedia. https://en.wikipedia.org/wiki/Industrial_Revolution
InformationWeek. (2026). 2026 enterprise AI predictions: Fragmentation, commodification, and the agent push facing CIOs. https://www.informationweek.com/machine-learning-ai/2026-enterprise-ai-predictions-fragmentation-commodification-and-the-agent-push-facing-cios
Innovative Human Capital. (2025a). Organizational change fatigue: Building adaptive capacity in an era of permanent disruption. https://www.innovativehumancapital.com/article/organizational-change-fatigue-building-adaptive-capacity-in-an-era-of-permanent-disruption
Innovative Human Capital. (2025b). The evolution of professional versatility: From T-shaped to V-shaped talent. https://www.innovativehumancapital.com/article/the-evolution-of-professional-versatility-from-t-shaped-to-v-shaped-talent
Invicti. (2025). Shadow AI: Risks, challenges, and solutions in 2025. https://www.invicti.com/blog/web-security/shadow-ai-risks-challenges-solutions-for-2025
ISACA. (2025a). The rise of shadow AI: Auditing unauthorized AI tools in the enterprise. https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-rise-of-shadow-ai-auditing-unauthorized-ai-tools-in-the-enterprise
ISACA. (2025b). AI use is outpacing policy and governance. https://www.isaca.org/about-us/newsroom/press-releases/2025/ai-use-is-outpacing-policy-and-governance-isaca-finds
Karpathy, A. (2025a, February 6). Vibe coding [Post]. X. https://x.com/karpathy/status/1886192184808149383
Karpathy, A. (2025b, June). Context engineering [Post]. X. https://x.com/karpathy/status/1937902205765607626
Kim, Y., Gu, K., Park, C., et al. (2025). Towards a science of scaling agent systems. arXiv. https://arxiv.org/abs/2512.08296
Klang, E., Omar, M., Raut, G., Agbareia, R., Timsina, P., Freeman, R., Gavin, N., Stump, L., Charney, A. W., Glicksberg, B. S., & Nadkarni, G. N. (2025). Orchestrated multi agents sustain accuracy under clinical-scale workloads compared to a single agent. medRxiv. https://www.medrxiv.org/content/10.1101/2025.08.22.25334049v1
Li, R. (2025, June 9). The evolution of human-internet interaction: Toward a new information architecture. drli.blog. https://drli.blog/posts/agentic-web-evolution/
Lindenbauer, T., Slinko, I., Felder, L., Bogomolov, E., & Zharov, Y. (2025). The complexity trap: Simple observation masking is as efficient as LLM summarization for agent context management. NeurIPS 2025 DL4Code Workshop. https://arxiv.org/abs/2508.21433
Liu, H., Li, R., Xiong, W., Zhou, Z., & Peng, W. (2025). WorkTeam: Constructing workflows from natural language with multi-agents. In Proceedings of NAACL 2025 Industry Track (pp. 20-35). https://aclanthology.org/2025.naacl-industry.3/
Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2024). Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12, 157-173. https://aclanthology.org/2024.tacl-1.9/
Maruf, R. (2026, February 5). Anthropic rolls out Claude Opus 4.6 with 1 million token context support. SiliconANGLE. https://siliconangle.com/2026/02/05/anthropic-rolls-claude-opus-4-6-1-million-token-context-support/
McDowell, S. (2026, January 27). How Oracle AI Database 26ai addresses the enterprise AI data paradox. Forbes. https://www.forbes.com/sites/stevemcdowell/2026/01/27/how-oracle-database-26ai-addresses-enterprise-ai-data-paradox/
McKinsey. (2026). Bridging the great AI agent and ERP divide to unlock value at scale. https://www.mckinsey.com/capabilities/mckinsey-technology/our-insights/bridging-the-great-ai-agent-and-erp-divide-to-unlock-value-at-scale
METR. (2025, July 10). Measuring the impact of early-2025 AI on experienced open-source developers [Blog post]. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Monkedo. (2024). No-code platforms history. Monkedo Blog. https://monkedo.com/blog/no-code-platforms-history
Mugrage, K. (2025, November 5). From vibe coding to context engineering: 2025 in software development. Thoughtworks. https://www.thoughtworks.com/insights/blog/machine-learning-and-ai/vibe-coding-context-engineering-2025-software-development
Muir, D. (2026, February 4). USD300 billion evaporated. The SaaS-pocalypse has begun. Forbes. https://www.forbes.com/sites/donmuir/2026/02/04/300-billion-evaporated-the-saaspocalypse-has-begun/
Nadella, S. (2024). Business applications will collapse in the AI era [Transcript]. https://x.com/convequity/status/1868893301732262075
Navirego. (2025). AI regulations: EU, US, China comparison. https://www.navirego.com/blog/ai-regulations-eu-us-china-comparison
Nguyen, C. V., Zhang, R., Deilamsalehy, H., Mathur, P., Lai, V. D., Wang, H., Subramanian, J., Rossi, R. A., Bui, T., Vlassis, N., Dernoncourt, F., & Nguyen, T. H. (2025). Lizard: An efficient linearization framework for large language models. arXiv preprint arXiv:2507.09025. https://arxiv.org/abs/2507.09025
Perry, N., et al. (2023). Do users write more insecure code with AI assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23). ACM. https://dl.acm.org/doi/abs/10.1145/3576915.3623157
Rajasekaran, P., Dixon, E., Ryan, C., & Hadfield, J. (2025, September 29). Effective context engineering for AI agents. Anthropic Engineering. https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
Ron Labs AI. (2025). Cognitive load & decision making: Smarter choices with AI. https://www.ronlabs.ai/blog/cognitive-load-on-delayed-decision-making/
Schluntz, E., & Zhang, B. (2024, December 19). Building effective agents. Anthropic Research. https://www.anthropic.com/research/building-effective-agents
School History. (2024). Factory Acts: Timeline, features, impact. https://schoolhistory.co.uk/industrial/factory-acts/
Strategy Software. (2025). How enterprises scale secure, governed AI with a universal semantic layer. https://www.strategysoftware.com/blog/how-enterprises-scale-secure-governed-ai-with-a-universal-semantic-layer
Tech101 for Marketers. (2022). Low-code and no-code: Separating citizen developers from shadow IT. https://tech101formarketers.com/2022/11/13/low-code-and-no-code-separating-citizen-developers-from-shadow-it/
The Cyber Institute. (2025). Shadow AI and the governance vacuum: Confronting the next phase of digital trust risk. https://www.cyber-institute.org/post/shadow-ai-and-the-governance-vacuum-confronting-the-next-phase-of-digital-trust-risk
Windows Central. (2024). Is SaaS dead? Microsoft CEO makes bold agentic AI prediction. https://www.windowscentral.com/microsoft/hey-why-do-i-need-excel-microsoft-ceo-satya-nadella-foresees-a-disruptive-agentic-ai-era-that-could-aggressively-collapse-saas-apps
Zhang, B., Lazuka, K., & Murag, M. (2025, October 16). Equipping agents for the real world with agent skills. Anthropic Engineering. https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills
Zhang, Q., Hu, C., Upasani, S., Ma, B., Hong, F., Kamanuru, V., Rainton, J., Wu, C., Ji, M., Li, H., Thakker, U., Zou, J., & Olukotun, K. (2025). Agentic context engineering: Evolving contexts for self-improving language models. arXiv. https://arxiv.org/abs/2510.04618