Top industry executives discuss five strategic approaches to harness AI’s transformative potential in software creation, emphasising flexibility, communication, security, and cultural change amidst rising adoption and associated risks.
Industry leaders in financial services are showcasing how artificial intelligence (AI) is revolutionising software development, driving both efficiency and innovation while highlighting the importance of cultural and procedural shifts to fully realise AI’s benefits. At a recent conference hosted by technology specialist Harness in London, executives outlined five strategic ways their firms are maximising AI’s impact, underscoring the evolving role of developers in an AI-augmented landscape.
One key strategy is fostering flexibility within clear guidelines. At Allianz Global Investors, AI technical lead Dill Bath described using the Open Policy Agent (OPA) engine to codify policies that act as a “copilot” for developers, nudging them towards compliance rather than blocking them. This tech-first approach anticipates regulatory changes and aims for agile delivery without compromising standards. Bath emphasised the cultural shift towards platform engineering and the importance of granting developers autonomy while maintaining security and audit requirements.
Communication is equally critical in large enterprises. Tony Phillips of Lloyds Banking Group explained the bank’s Platform 3.0 initiative, which modernises infrastructure to enable broader AI adoption beyond coding assistance. Phillips admitted managing change across thousands of developers is challenging, stressing the need to “hammer home the changes” so that scepticism transforms into belief through tangible successes. Learning from hands-on experience and iterative feedback is vital to integrating AI effectively.
Driving innovation within risk-managed environments is a focus at Hargreaves Lansdown. Senior software engineering manager Bettina Topali highlighted automation’s role in embedding guardrails—automated testing, security scanning, and code coverage—that enable faster innovation safely. She urged digital leaders to move beyond buzzwords and visibly demonstrate AI’s value to shift organisational mindsets and keep pace with emerging fintech competitors.
Providing regular feedback to developers about AI-generated code quality is another essential element. Daniel Terry at SEB described how his team equips developers with tools like GitHub Copilot while preparing them for agentic AI, where humans oversee AI agents generating large volumes of code. Terry cautioned novices against “vibe coding,” where blind reliance on AI can introduce errors, stressing the importance of testing and governance to ensure secure, compliant software development.
Finally, enterprises must “fight fire with fire” by empowering IT and security teams with AI tools to counter increasingly sophisticated cyber threats. Aaron Gallimore of Global Payments emphasised scalable, secure platforms that reduce developers’ overhead in tooling transitions and help audit and security professionals keep pace with AI-driven development. He described educational initiatives aimed at sparking widespread AI adoption and cultivating a culture of ongoing learning.
These practitioner insights align with broader industry data signalling AI’s transformative potential but also its limitations and risks. Surveys indicate nearly 90% of developers regularly use AI tools, mainly for routine coding tasks, which frees them to focus on problem-solving and oversight. AI has been shown to increase productivity and code quality for many, yet trust in AI remains tentative, with less than a quarter of developers strongly confident in AI outputs. Many still prefer peer review and worry about the significant time lost debugging AI-generated code.
Further complicating the picture, recent research reveals that experienced developers working on familiar codebases may actually slow down when using AI tools, as they spend considerable effort reviewing and correcting AI suggestions. However, such findings might not apply to junior developers or new projects, where AI’s support can be more impactful.
Security vulnerabilities in AI-generated code present a critical challenge. Independent studies find nearly half of AI-produced code contains exploitable security flaws, often due to insufficient specification of security requirements during code generation. This risk is exacerbated by “vibe coding,” a practice increasingly common but fraught with danger if not properly managed. Experts urge integrating security checks directly into AI workflows, leveraging AI-powered remediation tools, and training developers in secure coding practices to mitigate these risks.
Despite these challenges, AI is reshaping nearly every phase of software development. Automation now extends from coding and refactoring to code review, testing, and debugging—enhancing efficiency, improving error detection, and enabling developers to focus on higher-level creative and problem-solving tasks. Industry commentators advocate establishing new frameworks to ensure responsible AI use that balances innovation with ethical standards and security.
Moreover, certain sectors stand to benefit enormously. The Indian IT industry, for example, anticipates productivity improvements of up to 45% attributable to generative AI over the next five years. Software development roles, in particular, are projected to see productivity boosts around 60%, underlining AI’s strong potential despite ongoing challenges.
In summary, AI’s integration into software development is unmistakably profound, accelerating productivity and innovation while demanding cultural change, strong governance, and a renewed emphasis on security and trust. The future for developers is increasingly collaborative, with AI acting less as a replacement and more as an enhancer—amplifying human expertise, automating routine tasks, and prompting organisations to evolve rapidly to keep pace with technological advances.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score:
8
Notes:
The narrative appears to be original, with no evidence of prior publication and no earlier versions carrying different figures, dates, or quotes. It incorporates recent data on AI’s impact in software development, which supports a reasonably high freshness score; however, some older survey material is recycled, which should be flagged.
Quotes check
Score:
9
Notes:
The direct quotes from industry leaders are unique to this narrative, with no identical matches found in earlier material. This suggests the content is original or exclusive.
Source reliability
Score:
7
Notes:
The narrative originates from ZDNet, a reputable technology news outlet. However, the inability to access the article directly raises some uncertainty about the source’s reliability.
Plausibility check
Score:
8
Notes:
The claims made in the narrative align with current industry trends and are supported by references to reputable sources. The language and tone are consistent with professional reporting. However, the inability to access the original article directly leaves some claims unverified.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
While the narrative appears to be original and is supported by references to reputable sources, the inability to access the article directly raises some questions about its reliability and plausibility. Therefore, further verification is needed to confirm the accuracy and credibility of the content.
