Organizational Barriers to Implementing Artificial Intelligence Technologies in Engineering Team Workflows

Aneesha Sharma

Citation: Aneesha Sharma, "Organizational Barriers to Implementing Artificial Intelligence Technologies in Engineering Team Workflows", Universal Library of Business and Economics, Volume 03, Issue 01.

Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Artificial intelligence is increasingly embedded in engineering team workflows, yet sustained use often weakens after initial uptake, producing patterns of superficial compliance, concealed workarounds, or outright discontinuance. This article explains post-adoption trajectories by developing a conceptual model of the organizational barriers that shape long-term use of artificial intelligence tools in engineering work, with software development and coding assistants as an illustrative domain. Drawing on a structured synthesis of recent scholarship on technology adoption, trust in artificial intelligence, cognitive load, and employee resistance to digital transformation, the study derives an integrative framework that links individual evaluations to organizational conditions. The model specifies five constructs: usefulness, trust, challenges (technical, cognitive, emotional, and process-related), organizational influence, and disengagement triggers. It articulates how their interaction shifts behavior from sustained engagement to minimal or avoided use. The article (i) proposes a barrier taxonomy relevant to engineering teams, distinguishing infrastructure and integration constraints, cognitive–emotional strain, procedural and regulatory misalignment, cultural and power dynamics, and managerial–strategic inconsistency; (ii) maps these mechanisms onto established adoption logics to show where intention-focused explanations fail to capture withdrawal dynamics; and (iii) formulates research propositions describing how organizational practices moderate the link between perceived value and actual use, and how threats to status, identity, autonomy, and perceived fairness catalyze disengagement. To support empirical testing, the paper outlines a planned mixed-methods design that combines large-scale secondary survey evidence on developer tool use with primary qualitative data from engineers and engineering leaders to reconstruct post-adoption pathways and identify disengagement triggers in situ. The article is intended for researchers studying technology adoption and organizational behavior, as well as for engineering managers, transformation leaders, and governance functions seeking to design workflows, policies, and incentives that support durable, auditable, and trusted use of artificial intelligence.


Keywords: Artificial Intelligence Adoption, Disengagement Triggers, Engineering Workflows, Organizational Barriers, Trust in Artificial Intelligence.

DOI: https://doi.org/10.70315/uloap.ulbec.2026.0301002