
AI Lays the Foundations for the Future of Infrastructure — Expert Insights from the Field

AI is reshaping the built environment: insights from Pinsent Masons, Turner & Townsend, and Mott MacDonald. (Courtesy of Bentley Systems / YII / October 2025)

From risk management to data governance, AI is quietly becoming a cornerstone of modern infrastructure. While early use cases often target quick wins, the real transformation will depend on structure, trust, and collaboration among stakeholders. We attended a panel during Bentley Systems’ Year in Infrastructure (YII) conference in Amsterdam a couple of weeks ago where three industry experts discussed what it takes to make AI truly work in the built environment.

A Sector at a Turning Point

Infrastructure has long symbolized stability and endurance. Yet today, it stands on shifting ground as artificial intelligence reshapes how projects are designed, managed, and delivered. The infrastructure of the future won’t just be built from concrete and steel. It will be constructed on data, algorithms, and collaboration.

A recent white paper from Bentley Systems, The Impact of Artificial Intelligence on the Built Environment, sets the stage. Seventy-eight percent of respondents said their organizations are still in the early stages of AI adoption, while 68% already have AI policies in place. Moreover, 38% expect that within three years, more than half of their projects will incorporate AI tools.

This is corroborated by another study published last week by Bluebeam, a leading provider of solutions and services for architecture, engineering, and construction (AEC) professionals. According to their global study, “Building the Future: 2026 Construction Technology Maturity Report,” only 27% of companies in the construction sector currently use AI to automate certain tasks, optimize decision-making, or anticipate potential issues. They are held back primarily by the cost, complexity, and risks associated with its integration.

The momentum is clear, but so is the challenge. Adoption remains fragmented, and organizations are still figuring out how to translate experimentation into measurable value.

To explore what’s working — and what isn’t — a panel at Bentley’s Year in Infrastructure (YII) conference in Amsterdam brought together three experts: Anne-Marie Friel, Partner at global law firm Pinsent Masons; Guy Beaumont, Digital Lead for Infrastructure at Turner & Townsend; and YJ Kim, AI Technical Lead at Mott MacDonald.

Their message was clear: the technology itself is no longer the barrier. The real work lies in aligning people, governance, and business models around it. For AI to deliver on its promise, it must be useful, responsible, and explainable. The challenge ahead is not purely technological but profoundly human.

If there’s one area where AI is already delivering tangible results, it’s in document automation — a space that combines low risk with high value. (Courtesy of Bentley Systems / YII / October 2025)

The First Wins: Useful AI Before Spectacular AI

If there’s one area where AI is already delivering tangible results, it’s in document automation — a space that combines low risk with high value.

“When we define AI broadly — from predictive analytics to statistical learning — it’s clear that many forms of AI have been around for a while,” says YJ Kim of Mott MacDonald. “But with the recent boom in natural language processing, the most likely wins come from document automation. This is where the technology is relatively mature and easy to integrate.”

From predictive maintenance to demand forecasting, AI’s presence in infrastructure isn’t new. But tools that process and classify information — summarizing reports, extracting insights, and tagging assets — have become the gateway use case for many engineering firms.

“Document automation offers quick wins,” Kim explains. “It’s relatively low risk, technically well understood, and delivers immediate productivity gains. We’re seeing an increasing number of case studies providing practical value.”

Just as important, these systems are auditable. Unlike opaque black-box models, language-based tools allow engineers to trace how and why a system produced a specific result. 

“If an AI summarizes a report, we can ask it to explain its reasoning,” Kim adds. “That transparency is crucial for trust.”
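To make that “summarize, then explain” pattern a little more concrete, here is a minimal illustrative sketch using the OpenAI Python SDK. The model name, prompts, and the summarize_with_rationale helper are assumptions of ours, not tooling described by the panel; the point is simply that the rationale can be requested and stored alongside the summary.

```python
from openai import OpenAI  # assumes the openai>=1.0 SDK and an API key in the environment

client = OpenAI()

def summarize_with_rationale(report_text: str, model: str = "gpt-4o-mini") -> dict:
    """Summarize an engineering report, then ask the model to justify each point."""
    summary = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You summarize engineering reports for project reviewers."},
            {"role": "user", "content": f"Summarize the key findings and risks:\n\n{report_text}"},
        ],
    ).choices[0].message.content

    rationale = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": (
                "Explain which parts of the source report support each point in this summary, "
                f"quoting the relevant passages.\n\nReport:\n{report_text}\n\nSummary:\n{summary}"
            )},
        ],
    ).choices[0].message.content

    # Keeping both outputs together makes the result reviewable and auditable.
    return {"summary": summary, "rationale": rationale}
```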

Opportunity and Risk: Value Depends on Trust

For Anne-Marie Friel, from Pinsent Masons, the potential of AI in infrastructure is enormous. But so are the risks.

“The sector has been a slow adopter compared to others,” she notes. “That means the opportunity is huge. If we close the gap on cost, quality, sustainability, and productivity, the impact could be transformative.”

However, she warns that many projects still fail for reasons unrelated to technology. 

“The main risk is failing to deliver value because the purpose hasn’t been properly defined. Too often, teams chase technical novelty instead of business value.”

This lack of clarity can erode stakeholder engagement and public trust — two assets as vital as the physical structures themselves. 

“You can’t rely on decisions you don’t understand,” she says. “Outputs must be reliable, compliant, and explainable.”

Auditing AI to Build Trust

This is why, in a high-risk industry like construction, auditability is non-negotiable. According to Guy Beaumont, from Turner & Townsend, a good AI audit trail starts with user accountability. 

“Keeping a log of prompts, actions, and versions of prompts — and linking them to specific user accounts — ensures every AI decision has a traceable origin. That responsibility is often overlooked.”
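In practice, such an audit trail can be as simple as an append-only log that ties each prompt and response to a user account, a prompt version, and the model that produced the output. The Python sketch below is purely illustrative; the log_ai_interaction helper and its field names are our own assumptions, not a description of any Turner & Townsend system.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_ai_interaction(log_path, user_id, prompt, prompt_version, model, response):
    """Append one AI interaction to an audit log, keyed to a user account."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,                # who issued the prompt
        "prompt_version": prompt_version,  # which revision of the prompt template was used
        "model": model,                    # which model produced the output
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "response": response,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one JSON line per AI decision
    return record

# Example: record a contract-summarization request (all values are placeholders)
log_ai_interaction(
    log_path="ai_audit.jsonl",
    user_id="a.engineer@example.com",
    prompt="Summarise clause 14.2 of the contract and list key obligations.",
    prompt_version="v3",
    model="gpt-4o",
    response="Clause 14.2 requires the contractor to ...",
)
```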

The next step is validation and assurance: continuous monitoring and testing of outputs. 

“We need rules about when AI hands control back to human experts,” he says. “It’s not full automation — it’s collaboration.”

YJ Kim agrees that transparency is key to maintaining trust. 

“If we can’t show clients how an AI came to a conclusion, we lose credibility. Clear audit trails are essential in the built environment.”

Anne-Marie Friel adds the legal and ethical dimension. 

“The built industry is a high-risk sector. The decisions we make are crucial. They must be explainable and verifiable. Continuous testing isn’t optional,” warns Friel. “Too often, projects fail because partners don’t see the value in sharing data. We need operating models that make collaboration worthwhile for everyone involved.”

AI can make infrastructure smarter, greener, and more efficient, but only if stakeholders are willing to share data, align incentives, and embrace transparency. In this sense, trust becomes the invisible infrastructure underpinning the digital transformation of the built environment.

Anne-Marie Friel, Partner at global law firm Pinsent Masons; Guy Beaumont, Digital Lead for Infrastructure at Turner & Townsend; and YJ Kim, AI Technical Lead at Mott MacDonald. (Courtesy of Bentley Systems / YII / October 2025)

From Pilot to Production: Escaping the “Pilot Purgatory”

AI in infrastructure has no shortage of pilot projects — what’s missing is scale. Guy Beaumont says the key is to move from proof of concept to proof of value through structured, methodical steps. 

“We see many organizations stuck in pilot mode,” he says. “Scaling AI isn’t about a leap of faith — it’s a step-by-step process.”

The first enabler of success is data readiness. Without clean, structured, and compliant data, AI cannot perform effectively.

“Investing in the data asset of the organization is fundamental,” Beaumont explains. “That means getting data organized, conforming to standards, and ensuring quality.” 

This includes making data retrievable, controlled, and connected to the right endpoints — a challenging task across large enterprises.

The second is infrastructure readiness. AI pilots often exist in functional silos, but scaling requires shared digital platforms and a DevOps mindset that turns prototypes into products.

The third enabler is governance. Effective governance must balance control and agility.

“You need the right level of oversight without hindering progress,” says Beaumont. “Governance should shorten time to value, not extend it.”

Finally, user experience is essential. AI adoption succeeds only when it improves how people work. 

“It’s not just about user interfaces,” he stresses. “It’s about redesigning how people interact with software every day.”

This human-centered approach helps prevent resistance and ensures that technology serves its purpose — not the other way around. Beaumont also advocates for strong internal sponsorship. Organizations, he believes, need a corporate leader with both the mandate and the budget to drive adoption forward.

Case Study: Heathrow’s Leap Toward Automation

A powerful example of AI in action comes from Heathrow Airport, where Turner & Townsend has been a strategic partner for over three decades. The firm is developing a commercial intelligence platform to automate and streamline contract administration.

“We’re cutting down the time our commercial experts spend on repetitive, admin-heavy tasks,” Beaumont says. “By automating these processes, we let them focus on value creation instead of paperwork.”

More than a technology upgrade, this initiative signals a shift in business model. Instead of traditional time-based contracts, Turner & Townsend and Heathrow are exploring subscription-based services centered on outcomes — a clear sign of how AI is redefining not only workflows but also how value itself is measured.

“Clients are increasingly asking us for integrated bundled solutions — managed services that combine our people, our expertise, our frameworks, and our processes, all enabled by seamlessly integrated technology,” Beaumont says. “That’s becoming the new norm.”

AI is therefore redefining how firms create and capture value. This shift is forcing them to rethink pricing, delivery, and partnership structures, and it’s creating demand for new forms of economic analysis.

Back to the Basics: Data, Standards, and Openness

After an initial wave of enthusiasm, many organizations are realizing that sustainable AI success requires a return to fundamentals. 

“When the AI boom began, everyone jumped straight into experimentation,” recalls Kim. “Now they’re realizing that without strong foundations, AI can actually amplify inefficiency rather than deliver value.”

We recently published an article on an MIT study showing that 95% of companies and organizations that implemented AI pilots, across all sectors and not just infrastructure, were dissatisfied because the productivity gains promised by AI vendors failed to materialize.

According to Kim, the next phase of investment should focus on three priorities: data standardization, open data models, and automated data quality control. Standardization involves shared taxonomies, common data schemas, and consistent tagging. Open data models enable collaboration across fragmented sectors like transport, energy, and water. Automated quality control ensures data integrity continuously, not just at project kickoff.

“If we can describe our data in a consistent and systematic way, we amplify its value,” Kim says. This emphasis on interoperability — the ability to share and understand data seamlessly — is what will unlock AI’s societal value beyond individual projects.
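As a concrete, purely illustrative sketch of what automated data quality control can look like, the snippet below checks asset records against a shared taxonomy and naming convention before they feed any AI pipeline. The required fields, taxonomy, and rules are hypothetical assumptions, not a standard Kim cited.

```python
# Minimal sketch: validate asset records against a shared taxonomy before AI ingestion.
REQUIRED_FIELDS = {"asset_id", "asset_type", "location", "last_inspection"}
ASSET_TAXONOMY = {"bridge", "tunnel", "pump_station", "substation"}  # illustrative shared vocabulary

def check_record(record: dict) -> list[str]:
    """Return a list of data-quality issues for one asset record (empty list = clean)."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if record.get("asset_type") not in ASSET_TAXONOMY:
        issues.append(f"asset_type '{record.get('asset_type')}' not in shared taxonomy")
    if not str(record.get("asset_id", "")).startswith("AST-"):
        issues.append("asset_id does not follow the agreed naming convention")
    return issues

# Run on every data drop, not just at project kickoff.
records = [
    {"asset_id": "AST-0042", "asset_type": "bridge", "location": "M4 J7", "last_inspection": "2025-06-01"},
    {"asset_id": "42", "asset_type": "flyover"},  # fails naming, taxonomy, and completeness checks
]
for r in records:
    problems = check_record(r)
    print(r.get("asset_id"), "OK" if not problems else problems)
```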

Upskilling the Workforce

Keeping people in the loop is also important. All three experts agree that AI will not replace people — it will redefine what they do. Beaumont argues that data literacy is becoming a core professional skill, requiring extensive reskilling and upskilling.

“We don’t expect project managers to become data scientists,” he says. “But they need to treat data as a valuable asset — label it, tag it correctly, and understand how it feeds downstream systems.”

Turner & Townsend is promoting this mindset through cross-functional working groups that mix technical and operational experts. The company also runs super-user programs to spread good practices internally and embeds data and AI professionals directly into project teams.

“It’s a two-way exchange,” Beaumont explains. “AI experts learn from project professionals, and vice versa. That’s how knowledge scales.”

He takes the argument further:

“Over the next couple of years, we’ll start seeing humans as the luxury rather than AI. Those who can manage change, align behaviors, and foster collaboration will be the most valuable assets in any AI-enabled organization.”

In other words, deep domain knowledge — being an excellent scheduler, planner, or engineer — will only become more important. At the same time, the technical digital landscape is evolving rapidly. 

“It’s now much easier to produce software and solutions with AI built in,” Beaumont notes. “But it’s still very difficult to do it well.”

As AI becomes operational, Kim identifies two emerging professions: AI Governance Officers, who ensure adoption remains ethical, transparent, and secure, and AI Technical Integrators, who bridge AI systems with legacy infrastructure to ensure compliance and robustness. 

“We’re moving from experimentation to deployment,” she says. “That requires technically sound, compliant, and well-integrated solutions.”

Anne-Marie Friel predicts another new role: the digital economist. 

“CEOs funding digital twin projects don’t care about governance frameworks,” she says. “They want to know where the value is — in numbers they can understand.” 

Being able to quantify digital ROI in terms of efficiency, resilience, and sustainability will soon become as critical as engineering precision.

For the CEO’s AI Report

As a closing exercise, the panelists were asked what they would write in their CEO’s six-month AI progress report. Their answers capture the essence of the transformation underway.

For Anne-Marie Friel, from Pinsent Masons,

“First, focus on purpose – define the problem to define the value. Then enable collaboration. Data gains power when it’s shared. And finally, build reliability and trust. Accountability is everything.”

For Guy Beaumont, from Turner & Townsend,

“Firstly, identify high-impact use cases that align with business goals. Secondly, strengthen data architecture and treat data as a strategic asset. And thirdly, invest in people and culture. Readiness is as much human as technical.”

Finally, for YJ Kim from Mott MacDonald,

“Start with the data that matters most, even if incomplete. Make governance an enabler, not a blocker. Respect human adoption pathways – not everyone moves at the same pace.”
