In this thesis, we analyze how Large Language Models (LLMs) such as ChatGPT, Copilot, Gemini, and Llama are reshaping software development work processes (coding, testing, debugging, and documentation) and, by extension, work output among software development teams in a product-based software development company. Although LLMs are being integrated into software development teams, how they influence existing coding practices, collaboration, and output at the team level remains an open question. The extant literature tends to concentrate on technical capabilities and says less about how LLMs affect developers’ day-to-day work, team relations, or perceptions of the tools’ integration into software development.
To address this gap, this thesis studies how LLMs are used within software development teams at product-based software development companies. Drawing on Adaptive Structuration Theory (AST), we conducted a qualitative case study using semi-structured interviews and workplace observations. The study investigated how developers integrate LLMs into their daily work processes (coding, testing, debugging, and documentation) and, by extension, their work output; which key attributes influence LLM integration and utilization; and how development teams perceive the impact of LLMs, particularly on team collaboration and work output. The aim is to contribute to the socio-technical literature by demonstrating, through the lens of AST, how LLMs reshape collaborative software development practices, particularly in product-based software development contexts.
The findings show that LLMs enhance coding effectiveness, accelerate problem solving, facilitate team collaboration, and increase productivity; however, participants also reported issues of trust, hallucination, and tool reliability. Developers were first exposed to LLMs passively, through pre-integrated tools such as Microsoft Copilot, and over time adopted them more actively via experimentation, informal sharing, and team dialogue. Tool use was shaped by usability, speed, accessibility, reliability, trust, and contextual fit. Trust in the tools and their perceived reliability changed over time, depending on factors such as accuracy and emotional response. Perceptions were mixed: while many valued the cognitive relief, stress reduction, and task acceleration LLMs offered, others were wary of over-reliance, skill erosion, or redundancy.
This study provides empirical evidence on the integration of intelligent assistant tools into software development environments. It captures developers’ perspectives on LLM use, highlights key opportunities for LLM adoption, and identifies the main barriers faced in practice. It also offers actionable recommendations for teams and organizations integrating LLMs into their software development work processes. Guided by key AST concepts, including time processes, group decision processes, advanced information technologies (AITs) as social structures, other sources of structure, and the factors that influence the appropriation of these structures, the study contributes to the body of knowledge with practical insights into human-AI collaboration, trust in the tools, and job security among software development teams.