Google Study Reveals A.I.’s Growing Role in Software Development, But Trust Remains a Major Hurdle
Despite 90% adoption, most developers still see A.I. as an assistant, not a partner.

According to Google's latest annual DORA: State of A.I.-assisted Software Development report, released on September 23, A.I. adoption in software development has increased by 14% from last year, with 90% of technology professionals now using A.I. in their workflows.
A.I.’s Impact on Software Development
The survey of over 5,000 software professionals and IT specialists found that developers rely on A.I. for tasks ranging from writing code snippets to running tests and reviewing security. Trust, however, lags well behind adoption: only 24% of respondents say they trust the technology "a lot" or "a great deal," while nearly a third admit they trust it "a little" or not at all. The report calls this the "trust paradox" of A.I. adoption.
Nathen Harvey, the study's lead researcher and a developer advocate at Google Cloud, notes that "boardroom-level prioritization shows that this change is likely here to stay." He emphasizes that A.I. has captured the human imagination and inspired developers to find ways to drive both top- and bottom-line improvements for businesses. The study found that 85% of professionals say A.I. has made them more productive, though 41% call the improvement only "slight." Fewer than 10% reported any decline in productivity.
Code Quality and A.I.’s Limitations
A.I.'s impact is most evident in code quality: much of the software it helps create ends up running in production systems far longer than developers ever anticipated. Harvey cautions, however, that while A.I. speeds up development, it can also make software delivery less stable. "Even with the help of A.I., teams will still need ways to get fast feedback on the code changes that are being made," he said.
Developers are hesitant to give up control, with only a quarter of those surveyed saying they have high trust in A.I.'s coding abilities. Harvey notes that developers treat A.I. output with the same healthy skepticism they apply to other go-to resources, such as coding solutions found on Stack Overflow: useful, but never blindly trusted. "A.I. is only as good as the data it has access to," he said. "If your company's internal data is messy, siloed, or hard to reach, your A.I. tools will give generic, unhelpful answers, holding you back instead of helping."
Addressing the Trust Gap
To address this gap, Google introduced the DORA A.I. Capabilities Model, a framework of seven technical and cultural practices aimed at amplifying A.I.'s impact. The model emphasizes user focus, clear communication, and small-batch workflows, underscoring that success requires more than new tools. Harvey stresses that "culture and mindset continue to be huge influences on helping teams achieve and sustain top performance." A climate for learning, fast flow, fast feedback, and a practice of continuous improvement are what drive sustainable success, and A.I. makes all of them more necessary, not less.
Ultimately, Google's 2025 report argues that the biggest barrier is no longer adoption but trust. Until developers have stronger confidence in A.I.'s reliability, the future of software development will depend as much on winning their faith as on improving the technology itself.


