AI Is Altering the Standard of Care - Issue #4
In professional service firms, “standard of care” is not a theoretical concept. It is the line between competent practice and malpractice liability. It reflects what a reasonably prudent professional would do under similar circumstances, given the knowledge and tools available at the time.
That final phrase, “given the knowledge and tools available at the time,” is where AI quietly enters the conversation.
Artificial intelligence is not merely accelerating workflow. It is expanding perspective. AI-assisted research tools surface cases, patterns, and interpretations that might not appear through traditional search habits. Drafting systems suggest structures and considerations that a practitioner might not otherwise raise. Analytical tools synthesize regulatory updates across jurisdictions faster than manual review ever could.
When some professionals begin using these capabilities, the baseline of what is “reasonably discoverable” starts to move.
Consider the implications. If AI-assisted research identifies relevant authority that conventional methods routinely miss, does failure to use those tools become harder to defend? If certain firms implement guardrails around AI use, defining when it enhances competence and when it must be restricted, while others allow unstructured experimentation, does that divergence affect how courts, clients, or insurers evaluate professional judgment?
The standard of care has always evolved with technology. Electronic research databases once raised similar questions. So did digital accounting systems. Over time, what was optional became customary, and what was customary became expected. AI may follow a similar path but at a much faster pace.
The risk is not simply that AI will produce errors. The more subtle risk is that AI will raise expectations. If one firm’s professionals routinely see broader analytical perspectives because they are AI-assisted, and another firm’s professionals do not, the definition of “reasonable diligence” begins to shift unevenly across the market.
This does not mean that every firm must adopt every AI tool. Nor does it mean that abstaining from AI use is automatically negligent. What it does mean is that leadership must understand how both the use of AI and decisions to restrict it affect the firm’s professional obligations.
Standard of care is not determined by marketing hype. It is shaped by evolving norms of competence. As AI becomes more embedded in research, drafting, and advisory functions, those norms may change.
The governance question, then, is not whether AI is efficient. It is whether the firm has deliberately defined how AI fits within its understanding of competent practice.
If AI expands perspective, the firm must decide when that expansion is required, when it is permitted, and when it is inappropriate. Without that decision, usage defaults to individual discretion. And individual discretion, applied inconsistently, is difficult to defend when outcomes are challenged.
The conversation about AI governance is ultimately a conversation about maintaining control over how the firm defines competence. When technology changes what professionals can reasonably know, it inevitably changes what they may be expected to know.
Leadership should not wait for that expectation to be defined externally.
Rex C. Anderson
AI Governance for Law Firms and Accounting Firms
__________________________________
Forwarded to you?
Subscribe for direct delivery of future issues.