Generative AI is beginning to change the mechanics of third-party cyber risk management, but the shift is not automatically an improvement. What looks like a faster way to draft, send and assess vendor questionnaires can also weaken the quality of the answers organisations rely on to judge risk.
That tension sits at the centre of Gartner's latest thinking on third-party cyber risk management. In its Predicts 2026 research, the firm says GenAI will become deeply embedded in questionnaire workflows, with vendors using it to complete assessments and customers using it to review them. By 2028, Gartner expects the majority of organisations and suppliers to be using GenAI on both sides of the process. The promise is obvious: quicker onboarding, lighter admin and less pressure on already stretched security teams.
But Gartner's warning is equally clear. When AI-written responses are passed into AI-based review tools, inaccuracies can multiply rather than disappear. The result is not just the occasional wrong answer, but a gradual deterioration in the usefulness of the output itself. In practical terms, more automation can produce more noise, while obscuring the difference between a polished response and an accurate one.
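The compounding effect can be illustrated with a back-of-envelope model (a hypothetical sketch, not drawn from Gartner's research): if each AI stage in the chain independently preserves accuracy with some probability, then chaining a drafting stage and a review stage multiplies the chances of error rather than cancelling them out.

```python
# Illustrative sketch: how per-stage accuracy compounds when AI-drafted
# answers are passed into AI-based review. The figures are hypothetical,
# chosen only to show the multiplicative effect.

def end_to_end_accuracy(per_stage_accuracy: float, stages: int) -> float:
    """Probability an answer survives every stage unaltered, assuming
    independent stages with identical accuracy."""
    return per_stage_accuracy ** stages

# One 95%-accurate stage leaves 5% of answers wrong; chaining a second
# 95%-accurate stage pushes the error rate to roughly 9.75%.
single = end_to_end_accuracy(0.95, 1)
chained = end_to_end_accuracy(0.95, 2)
```

Under these assumed numbers, automation that looks individually reliable still nearly doubles the end-to-end error rate once two stages are chained, which is the "noise multiplies rather than disappears" dynamic in miniature.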
That matters because questionnaires were never a strong foundation for third-party assurance in the first place. They are snapshots, not live assessments. They capture what a supplier says about its controls at a particular moment, often in language shaped to satisfy policy or win business rather than expose genuine weaknesses. In fast-moving supply chains, where relationships, systems and dependencies change constantly, that static model has always left a gap between declared and actual risk.
GenAI risks widening that gap. If organisations can generate and process questionnaires at scale with far less human involvement, they may end up multiplying a weak control rather than replacing it with a better one. The appearance of efficiency can be misleading if the underlying method still depends on self-reported information that may already be out of date by the time it is reviewed.
Gartner argues that this is why chief information security officers will increasingly treat questionnaires as a compliance exercise rather than a serious security control. The real focus, the firm says, will move towards in-flight visibility: monitoring third-party relationships as they operate, spotting changes earlier and responding before issues become incidents.
That view fits a broader pattern in Gartner's 2026 cybersecurity agenda. In a separate briefing this year, the company highlighted agentic AI, AI-driven security operations and the need for stronger governance as key themes shaping the sector. It has also predicted that AI applications will drive a large share of incident response work by 2028, reinforcing the idea that security teams will use AI most effectively when it supports detection, prioritisation and escalation rather than simple administrative throughput.
The same logic is appearing in Gartner's guidance on third-party cyber resilience, where the emphasis is on direct monitoring, targeted controls and contingency planning. The message is that organisations need to understand not just what a vendor claims on paper, but how that supplier behaves in practice and how quickly the customer can react when conditions change.
None of this means GenAI has no role in third-party risk programmes. Used carefully, it can help teams handle repetitive tasks, surface patterns across large vendor populations and direct attention towards the most material issues. It can also reduce time spent on low-value work, freeing specialists to focus on judgement, investigation and response.
The danger lies in using AI to accelerate a process that was already limited. If organisations simply automate the questionnaire model, they may move faster without becoming safer. The stronger approach, Gartner suggests, is to combine AI with continuous monitoring and lifecycle-based risk management, so that technology supports resilience instead of just increasing volume.
In that sense, the real test for third-party cyber risk management in the AI era is not how many questionnaires can be completed, but how well an organisation can see risk as it emerges.
Source: Noah Wire Services