ISO 9001 Clause 6.1 Risk Management: What Auditors Actually Want, and the Failure Modes That Keep Showing Up
A practical guide to ISO 9001 Clause 6.1 risk and opportunity management — what the standard actually requires, what auditors look for, and how teams fail it.
A small electronics contract manufacturer in Ohio rolled into their first surveillance audit confident about their risk register. They had a 47-row spreadsheet — every department had contributed something, every entry had a likelihood and severity score, the colors were nicely coded green to red, and the file lived on the SharePoint site. The auditor opened it, scrolled for about thirty seconds, and asked one question: "Show me what changed in this register after last year's customer complaint about mislabeled cartons." The quality manager couldn't. The complaint wasn't in the register. Nothing in the register had been updated in fourteen months. The risk scores were the scores from the original workshop. There was no link between the risk register and the corrective action that had actually been opened on the labeling issue, and no evidence that the risk picture had shifted at all when a real risk had materialized.
The finding the auditor wrote wasn't about the format of the register. It was about the absence of risk-based thinking — the thing Clause 6.1 actually requires. The register existed; it just wasn't doing anything.
That pattern is the most common Clause 6.1 failure mode in ISO 9001 audits, and it shows up in organizations that have been certified for years. The problem isn't usually that there's no documentation. It's that the documentation is theatre — built to satisfy a checklist, not integrated into how decisions actually get made. This guide covers what Clause 6.1 actually requires, the gap between "risk-based thinking" and "a risk register," what auditors look for in practice, and the specific findings that keep showing up in surveillance audits.
What Clause 6.1 actually says
Clause 6.1 of ISO 9001:2015 is two short subclauses. 6.1.1 says the organization shall determine the risks and opportunities that need to be addressed to give assurance the QMS can achieve its intended results, prevent or reduce undesired effects, and achieve improvement. 6.1.2 says the organization shall plan actions to address those risks and opportunities, integrate the actions into the QMS, and evaluate their effectiveness.
That's it. Two subclauses, no formal methodology required, no specified template, no mandated documentation format. The standard does not require ISO 31000. It does not require a risk matrix. It does not require a risk register. It does not require quantitative scoring. The phrase "risk-based thinking" in the introduction makes clear that the intent is to embed risk consideration into the operation of the QMS rather than create a parallel risk management system that runs alongside it.
This is also where most of the trouble starts. The standard's flexibility is read by some implementers as "we don't have to do anything formal" and by others as "we should build the most elaborate risk framework we can find a template for." Both readings produce findings.
The flexibility is real, but it is constrained. Elsewhere the standard requires retained documented information — evidence of the fitness for purpose of monitoring and measurement resources, of competence, of operational planning and control, of conformity of products and services, and so on — and Clause 6.1.2 requires the actions taken on risks and opportunities to be integrated into those same QMS processes. So while there is no requirement for a standalone documented risk register, there is an effective requirement to show that risk thinking influenced specific decisions. If you have nothing to point to when an auditor asks how risks shaped your operational planning, you have a problem regardless of how clean your register looks.
The two failure modes: theatre and absence
Audit findings against Clause 6.1 cluster into two patterns.
Theatre. The organization has produced a risk register, often a spreadsheet with dozens of rows, scored on some likelihood-times-severity convention. The register was built once during certification or recertification preparation, has not changed materially since, and isn't referenced in any other QMS process. Risks identified in the register don't show up in operational planning. Mitigations listed in the register aren't tracked or verified. Actual problems that surface through nonconformities, customer complaints, or internal audits don't trigger updates to the register. The register exists but it's a museum piece.
Absence. The organization has decided that since the standard doesn't require a register, it doesn't need any documented evidence of risk thinking at all. When the auditor asks how risks were considered in establishing process controls, planning the audit program, or determining competence requirements, the answer is some version of "we thought about it." Nothing in writing. No record of the analysis, no link to the decisions that came out of it.
Both patterns produce findings. The theatre version typically produces a finding about the effectiveness or integration of risk management — the system exists but isn't doing anything. The absence version produces a more direct finding about objective evidence — the organization asserts that risks were considered, but no evidence is available to demonstrate it. Of the two, theatre is actually harder to fix, because the organization has already committed to a framework that doesn't work and feels invested in defending it.
"Risk-based thinking" vs. a risk register
A useful frame is that the standard requires risk-based thinking and the register, if there is one, is one possible artifact of that thinking. The auditor is not auditing the register. The auditor is auditing whether risk-based thinking is happening across the QMS.
Concretely, that means the auditor will look for evidence of risk consideration in places like:
- Operational planning and control records — were risks identified during the planning of a new process, product, or change, and what was done about them?
- The internal audit program — does the program prioritize higher-risk processes, suppliers, or sites?
- Supplier evaluation and re-evaluation — is supplier risk being assessed and is it driving decisions about controls and oversight?
- Resources and competence — were risks considered in determining what competence is needed for which roles?
- Management review inputs — Clause 9.3.2 explicitly requires the effectiveness of actions taken to address risks and opportunities to be reviewed.
- Nonconformity and corrective action records — when a real problem occurred, was the risk picture revisited?
- Change management — when a significant change was made to a process, was risk re-evaluated before the change went in?
If those places show evidence that risks were considered and actions were taken, the auditor has what they need. The register, if it exists, is supporting evidence — not the primary evidence.
Most teams have this backwards. They optimize the register and ignore the integration. The register passes a desk review and the integration fails the floor walk. By the time the auditor gets to the second day of a Stage 2 audit, the gap is hard to hide.
What auditors actually ask
In an audit walkthrough, the questions that pull at Clause 6.1 sound like:
- "What is the most significant risk to your QMS achieving its intended results, and what are you doing about it?"
- "When did you last update your risk picture, and what triggered the update?"
- "Show me a decision in the last six months that was influenced by your risk analysis."
- "How does the internal audit program reflect risk?"
- "When this nonconformity opened in March, did anything change in how you evaluate risks for that process?"
- "Where in management review inputs do I see the effectiveness of actions taken on risks and opportunities?"
Notice that none of these questions ask "do you have a risk register" or "what is your scoring methodology." The questions ask whether risk thinking is operating. A team that has a beautiful register and no answer to any of these questions is going to get a finding. A team that has no register but can walk through specific decisions, point to records of those decisions, and show the link from risk consideration to action will pass.
The opportunity half that everyone forgets
Clause 6.1 doesn't say "risks." It says "risks and opportunities." The TC 176 working group put opportunity management in deliberately, and it is one of the most consistently neglected requirements in the standard.
The skepticism is fair — the line between "opportunity" and "improvement initiative" is fuzzy, and most teams treat anything positive as belonging to Clause 10 (Improvement) and ignore it under Clause 6.1. Auditors increasingly do not let this slide. The expectation is that the organization can demonstrate it has identified opportunities to improve customer satisfaction, expand market reach, reduce process variation, adopt new technology, or otherwise strengthen the QMS — and that it has planned actions on those opportunities the same way it has planned actions on risks.
The practical fix is to keep risks and opportunities in the same artifact, evaluated through the same review cycle, with both rolled into management review. A register that has zero opportunities listed is a flag that the team is treating Clause 6.1 as risk management only. Auditors who are paying attention will ask why.
The mistakes that produce findings
The pattern of Clause 6.1 nonconformities across surveillance audits is consistent enough to be predictable.
Risks identified once and never refreshed. The register was built before the original certification audit and has not been updated despite organizational changes, new customers, new products, new regulations, or actual incidents. Auditors look at the date column. If everything is from 2022, that's the finding.
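That date-column check is mechanical enough to run on the register yourself before an auditor does. A minimal sketch, assuming the register is held as rows with a `last_reviewed` field and an annual review policy — both the field name and the interval are assumptions for illustration, not anything the standard prescribes:

```python
from datetime import date, datetime

REVIEW_INTERVAL_DAYS = 365  # assumed policy: every risk reviewed at least annually

def stale_rows(rows, today=None):
    """Return IDs of register rows whose last review predates the interval."""
    today = today or date.today()
    stale = []
    for row in rows:
        last = datetime.strptime(row["last_reviewed"], "%Y-%m-%d").date()
        if (today - last).days > REVIEW_INTERVAL_DAYS:
            stale.append(row["risk_id"])
    return stale

# Illustrative rows only; real registers would come from an export.
register = [
    {"risk_id": "R-01", "last_reviewed": "2022-03-15"},
    {"risk_id": "R-02", "last_reviewed": "2025-01-10"},
]
print(stale_rows(register, today=date(2025, 6, 1)))  # → ['R-01']
```

Anything this returns is exactly what the auditor will find by scrolling the date column; running it quarterly turns the finding into a to-do list instead.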
No link between risks and operational decisions. The register exists but no operational record references it. The internal audit plan doesn't cite risk as the basis for sequencing or scope. Process controls don't tie back to identified risks. Resource planning is independent of the risk picture. The register is a parallel document that never intersects the QMS it is supposed to be informing.
Generic risks copied from a template. "Loss of skilled personnel," "supply chain disruption," "data breach" — listed against every organization, scored without reference to the specific context, with mitigations that read like the template's example text. The risks aren't wrong, but they aren't this organization's risks. Auditors who have seen the same template populated by twenty different companies recognize it on sight.
Mitigations that aren't actually being executed. A risk is listed with a mitigation of "annual training," but the training records show it hasn't happened. A mitigation of "monthly review" has no review records. The plan exists in the register and not in reality. This is one of the easier findings for an auditor to write because the trail is short.
No update after a nonconformity or complaint. A real problem occurred, a CAPA was opened, the corrective action was completed, and the risk register was untouched. The implication — that the organization didn't learn anything about its risk picture from a real failure — is exactly what Clause 6.1 is supposed to prevent. Auditors look for this specifically by cross-referencing the CAPA log against the risk register's revision history.
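The cross-reference the auditor performs can be sketched the same way. Assuming hypothetical records where each CAPA and each register row carries a process tag and a date — none of these field names come from the standard — the check is simply: was any related risk row revisited after the CAPA closed?

```python
from datetime import date

# Hypothetical records for illustration; field names are assumptions.
capa_log = [
    {"capa_id": "CA-23-07", "process": "labeling", "closed": date(2024, 4, 2)},
]
risk_register = [
    {"risk_id": "R-12", "process": "labeling", "last_reviewed": date(2023, 1, 20)},
]

def untouched_after_capa(capas, register):
    """Return CAPAs whose related risk rows were never revisited after closure."""
    findings = []
    for capa in capas:
        related = [r for r in register if r["process"] == capa["process"]]
        if related and all(r["last_reviewed"] < capa["closed"] for r in related):
            findings.append(capa["capa_id"])
    return findings

print(untouched_after_capa(capa_log, risk_register))  # → ['CA-23-07']
```

A CAPA with no related register row at all is arguably the worse gap — the real failure never even made it into the risk picture — and a fuller version of this check would flag that case too.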
Risks scored but not addressed. Every row has a score, the high-risk rows are highlighted in red, and there's no plan against any of the red ones. Either the scoring is wrong (in which case why are they red?) or the plan is missing (in which case Clause 6.1.2 is unmet). Either reading produces a finding.
Effectiveness never evaluated. Clause 6.1.2 requires evaluation of the effectiveness of actions taken. Most registers have a "mitigation" column and no "effectiveness verified" column. The action was planned, possibly executed, and never assessed for whether it actually changed the risk. Without that step, the loop isn't closed.
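The closure logic behind both of these findings reduces to one rule: any row that scores red needs a planned action and an effectiveness evaluation on record. A hypothetical sketch, assuming a 1–4 likelihood and severity scale and an assumed "red" threshold — the scale, threshold, and column names are all illustrative choices, since the standard mandates none of them:

```python
HIGH = 12  # assumed threshold: likelihood (1-4) x severity (1-4) of 12+ is "red"

def open_loops(register):
    """Return rows where the scoring says act but the action loop isn't closed."""
    gaps = []
    for row in register:
        score = row["likelihood"] * row["severity"]
        if score >= HIGH:
            if not row.get("action"):
                gaps.append((row["risk_id"], "no action planned"))
            elif not row.get("effectiveness_verified"):
                gaps.append((row["risk_id"], "effectiveness not evaluated"))
    return gaps

# Illustrative rows: one action never verified, one red row with no action.
register = [
    {"risk_id": "R-03", "likelihood": 4, "severity": 4,
     "action": "dual check at pack-out", "effectiveness_verified": False},
    {"risk_id": "R-04", "likelihood": 4, "severity": 3, "action": ""},
]
print(open_loops(register))
```

Either item this surfaces maps directly onto the two findings above: a missing plan against Clause 6.1.2's requirement to plan actions, or a planned action whose effectiveness was never evaluated.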
No integration with management review. Clause 9.3.2(e) requires the effectiveness of actions taken to address risks and opportunities to be a management review input. If the management review minutes don't show this discussion, the finding is mechanical to write.
Treating Clause 6.1 as separate from Clause 8 operational planning. Clause 8.1 requires operational planning and control to implement the actions determined under Clause 6.1. A team that has a Clause 6.1 register and Clause 8.1 planning records that don't reference each other is essentially running two parallel risk processes that never meet. Both will get audited, and the gap between them will show up.
Where the records have to live
The structural problem with Clause 6.1, when teams take it seriously, is that the evidence has to live across many places at once and stay connected. The risk identification has to be tied to the context analysis (Clause 4), the operational planning (Clause 8), the audit program (Clause 9.2), the management review inputs (Clause 9.3), and the corrective actions (Clause 10.2). When a nonconformity occurs, the trail has to update — and an auditor walking the trail twelve months later has to be able to follow it.
Most teams run this on a stack of disconnected documents. The risk register is a spreadsheet on SharePoint. The audit plan is a Word document. The management review minutes are in Outlook attachments. The CAPA log is in a different spreadsheet, possibly on a different drive. The context analysis is in a slide deck somebody made for the recertification audit. None of these documents reference each other by anything stronger than the shared memory of the people who built them, and the version history of any individual document is whatever the file storage system happens to retain.
When the auditor asks to see how the risk picture shifted after the labeling complaint, the answer requires pulling four different documents from four different places, comparing dates, and reconstructing the connection. If the answer takes more than a few minutes, the auditor has already noted the gap. If the version history of any of those documents has been overwritten or lost, the trail is broken in a way that no amount of explanation can fix.
This is the same structural pattern that shows up in CAPA management, supplier qualification, and document control — compliance activities that require linked, traceable, tamper-evident records of who decided what, when, and why, running on top of a stack of spreadsheets and folders that were never built to retain that kind of history. That's the gap SheetLckr is built to close: compliance-grade spreadsheets with built-in version history, approval workflows, and tamper-evident audit trail, so the risk register, the actions, the effectiveness evaluations, and the links to other QMS records all live in one place and survive the trail an auditor walks. Clause 6.1 isn't passed or failed at the register — it's passed or failed at whether the records can show the thinking actually happened and shaped the decisions that followed.
Clause 6.1 is one of the few places in ISO 9001 where the standard's flexibility actively works against teams that don't think carefully about what they're trying to demonstrate. There is no template that satisfies it. There is no scoring methodology that proves compliance on its own. What the auditor is looking for is evidence that risks and opportunities are being considered, that the consideration is changing decisions, and that the loop closes — actions get planned, actions get executed, effectiveness gets evaluated, the picture gets updated when reality changes. Teams that build the register first and worry about integration second produce theatre. Teams that focus on integration and let the documentation follow produce something that holds up at audit. The difference shows up the first time an auditor asks what changed.
Stop patching Excel. Run audits with confidence.
SheetLckr gives quality teams a spreadsheet with built-in audit trails, version locking, approvals, and CAPA tracking — so you're always audit-ready, not scrambling the week before.