Productivity Paradox Redux? Incentives In GenAI Adoption



A new study from the Massachusetts Institute of Technology suggests that most corporate generative AI pilots are failing to generate meaningful financial returns, despite widespread investment. The report, The GenAI Divide: State of AI in Business 2025, published by MIT's NANDA initiative, found that 95% of pilots stall at early stages and never progress to scaled adoption. This points to a gap in understanding the role of incentives in GenAI adoption.

As a graduate student, I remember learning about the IT productivity paradox. The IT productivity paradox refers to the observation, famously noted by Robert Solow ("you can see the computer age everywhere but in the productivity statistics"), that despite heavy investments in information technology, measurable productivity gains often lag or remain elusive. This paradox arises because returns on IT depend not only on the technology itself but on complementary investments in organizational change, skills, and process redesign, without which IT can even increase costs or inefficiencies. The same dynamic applies to GenAI: while firms are rapidly adopting these systems, many struggle to capture measurable productivity returns because the technology often substitutes for visible, easy-to-quantify tasks (like drafting text) but requires substantial, harder-to-measure organizational adaptation, such as new workflows, governance, integration with IT systems, and employee reskilling, to unlock true gains. Without these complementary changes, enterprises risk experiencing a GenAI productivity paradox in which the promise of transformation is high but realized value remains limited.

Unlocking the complementarities between generative AI and organizational change requires that firms not only adopt the technology but also invest in complementary assets such as new workflows, governance structures, employee training, and integration with IT and compliance systems. However, these complementary investments are costly, long-term, and often difficult to measure, whereas the short-term gains from deploying generative AI (automation, efficiency, cost savings) are more visible and easier to capture in performance metrics. Without proper incentive alignment, leaders and employees will rationally overemphasize the measurable, short-term benefits and underinvest in the intangible, long-term enablers — reproducing the IT productivity paradox.

Incentive alignment is critical because it ensures that agents across the enterprise (business units, IT, compliance, employees) are rewarded not just for quick wins, but also for contributing to the slower, complementary changes (data governance, integration, reskilling, process redesign) that ultimately determine whether generative AI delivers sustainable productivity or profitability gains.

Enterprise Integration and Incentives in GenAI

The MIT report highlighted that GenAI investments are not yielding results largely because of a lack of enterprise integration. While GenAI tools offer technological benefits, they may not be easily adapted to enterprise workflows.

A crucial reason for the difficulty in interoperability is task interdependence, something I studied in my dissertation. Task interdependence creates complex agency problems, giving rise to what economists call the multitask principal-agent problem.

Disaggregating IT services is not just a matter of automating a piece of technology; it requires redesigning people and processes and understanding the organizational side of AI adoption (what economists term the "theory of the firm"). When firms adopt generative AI, leaders face multiple competing objectives. What is easy to measure and implement are short-term gains such as efficiency, cost-cutting, and faster processes. Bonuses, KPIs, and evaluations are often tied to easily measurable outputs, which take precedence over less measurable but critical tasks. Generative AI introduces new agency costs (risk-shifting, monitoring costs, opportunistic adoption) and amplifies asymmetric information (between vendors, managers, shareholders, employees, and regulators). These dynamics make AI governance, auditing, and transparency mechanisms central to aligning incentives. Multitask agency costs (a concept developed by economists Bengt Holmström and Paul Milgrom) arise when agents (e.g., managers, employees) must allocate effort across multiple tasks that differ in how easily they can be measured or monitored.
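The distortion Holmström and Milgrom describe can be made concrete with a toy numerical sketch (my own illustration, not from the MIT report): an agent splits a fixed budget of effort between a measurable task and a hard-to-measure one, and a pay scheme that rewards only the measurable task pulls effort toward it.

```python
# Toy illustration (not from the article) of the multitask incentive
# intuition: an agent divides one unit of effort between a measurable
# task (e.g., deployment speed) and a hard-to-measure task (e.g., data
# governance), each with a quadratic effort cost. Pay rewards each task
# at a fixed bonus rate, so an unrewarded task attracts less effort.

def best_effort_split(bonus_measurable: float, bonus_hidden: float,
                      steps: int = 1000) -> float:
    """Return the effort share the agent puts on the measurable task.

    Utility = bonus-weighted pay minus quadratic effort costs.
    A simple grid search over the agent's best response (illustrative).
    """
    best_share, best_utility = 0.0, float("-inf")
    for i in range(steps + 1):
        e1 = i / steps          # effort on the measurable task
        e2 = 1.0 - e1           # effort on the hidden task
        pay = bonus_measurable * e1 + bonus_hidden * e2
        cost = e1 ** 2 + e2 ** 2
        utility = pay - cost
        if utility > best_utility:
            best_share, best_utility = e1, utility
    return best_share

# Reward only the measurable task and effort tilts toward it:
print(best_effort_split(bonus_measurable=1.0, bonus_hidden=0.0))  # → 0.75
# Reward both tasks equally and effort is balanced:
print(best_effort_split(bonus_measurable=1.0, bonus_hidden=1.0))  # → 0.5
```

The bonus rates and quadratic cost are stylized assumptions, but they capture the core point: the distortion comes from what is rewarded, not from any preference of the agent.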

Generative AI adoption in enterprises introduces exactly this type of multitask incentive problem, especially as it requires organizational interfaces with IT, compliance, and business units. Leaders may then prioritize quick AI-driven productivity gains (e.g., marketing content, customer support automation) rather than investing in robust data governance and explainability mechanisms, which are harder to quantify. Employees using generative AI may focus on task completion speed (visible to supervisors) but neglect judgmental oversight or error-checking (which is costly to monitor). IT teams may emphasize deploying functional AI interfaces quickly but underinvest in security, interoperability, and responsible-AI guardrails, because these tasks are less observable.

Enterprise adoption of generative AI almost always requires coordination across multiple organizational boundaries, across multiple technological systems within the organization, and with external partners. Incentive conflicts may arise when IT prioritizes scalability and technical feasibility while compliance and legal teams focus on data protection, bias audits, and regulatory interfaces. This creates multitask agency costs: IT teams may devote more effort to visible deliverables (e.g., a working chatbot) than to invisible safeguards (e.g., audit logging).

Such multitask incentives exacerbate agency costs, with two important consequences. First, effort allocation is distorted: observable, short-term tasks (deploying AI pilots, reducing call-center headcount) get prioritized, while less measurable, long-term tasks (model fairness audits, employee reskilling, data governance) are underprovided. Second, coordination failures emerge as multiple interfaces raise monitoring costs, creating more opportunities for blame shifting. Ultimately, such skewed incentives derail potential returns from GenAI initiatives, and it is not always easy to establish mechanisms for algorithmic accountability.

Creating Aligned Incentives in GenAI Adoption

Generative AI adoption is not just a tech integration problem; it creates multitask incentive conflicts because leaders, employees, IT, and compliance must balance measurable gains against hard-to-monitor long-term safeguards. In generative AI investments, task interdependence magnifies agency costs by making responsibility diffuse, information asymmetric, and incentives misaligned. The very fact that generative AI spans multiple organizational boundaries (business, IT, compliance, vendors) means governance and incentive design are as critical as the technology itself. The organizational interface with IT is where many of these incentive misalignments crystallize.

Leaders need to safeguard against incentive conflicts in GenAI adoption by putting the right processes, frameworks, and audit structures in place. Mitigating mechanisms include the following:

  • Balanced scorecards: Leaders need to tie evaluations to both measurable outcomes and compliance/ethical safeguards.
  • Joint KPIs across IT and business units: Leaders should enable coordination by linking business outcomes with IT and compliance outcomes (e.g., deployment speed and adherence to responsible AI frameworks).
  • Cross-functional AI governance boards: Leaders need to take steps to reduce asymmetric information between units by making deliberations transparent.
  • Audit trails and explainability dashboards: Leaders should adopt mechanisms to make otherwise “hidden” tasks more observable.
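To make the balanced-scorecard idea concrete, here is a hypothetical sketch (my own design choice, not a mechanism prescribed by the report) that blends a delivery score with a safeguards score and caps the total so that a weak safeguards score drags the evaluation down:

```python
# Hypothetical balanced-scorecard evaluation: both scores are
# normalized to the 0-1 range. The cap at twice the weaker score
# means a team cannot earn a high rating by excelling on visible
# delivery metrics while neglecting governance safeguards.

def scorecard(delivery: float, safeguards: float,
              w_delivery: float = 0.5, w_safeguards: float = 0.5) -> float:
    """Blend delivery and safeguard scores, floored by the weaker one."""
    blended = w_delivery * delivery + w_safeguards * safeguards
    return min(blended, 2 * min(delivery, safeguards))

# Strong delivery cannot offset weak safeguards:
print(scorecard(delivery=0.9, safeguards=0.2))  # → 0.4
# Balanced performance scores at its blended value:
print(scorecard(delivery=0.8, safeguards=0.8))  # → 0.8
```

The specific weights and cap are illustrative; the design point is simply that the evaluation function should be non-separable, so neither dimension can be maximized at the other's expense.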



Forbes
