A partner at the consultancy KPMG has been fined for using artificial intelligence to cheat on an internal training program about AI.
The unnamed partner was penalized A$10,000 (£5,200) for the misconduct, one of several employees reported to have used similar tactics.
Since July, more than two dozen staff members at KPMG Australia have been caught using AI tools to gain an unfair advantage on internal exams, raising concerns that advances in AI are making cheating easier in the accountancy profession.
The consultancy used its own AI detection tools to uncover the cheating, as first reported by the Australian Financial Review.
Major accountancy firms have dealt with cheating scandals before. In 2021, KPMG Australia was fined A$615,000 for “widespread” misconduct after it emerged that more than 1,100 partners and staff had engaged in “improper answer-sharing” on tests designed to evaluate their skill and integrity.
The emergence of AI tools has created new avenues for unethical practices. In December, the UK’s leading accounting body, the Association of Chartered Certified Accountants (ACCA), announced that accounting students would be required to take examinations in person, citing difficulties in preventing AI-based cheating.
According to ACCA’s chief executive, Helen Brand, AI tools have reached a “tipping point,” as their adoption has outpaced the safeguards the association has implemented to combat cheating.
Firms including KPMG and PricewaterhouseCoopers are also requiring employees to integrate AI into their work routines, likely as a strategy to boost profits and cut costs.
KPMG partners are expected to be evaluated on their proficiency with AI tools in their 2026 performance reviews. Niall Cleobury, the firm’s global AI workforce lead, stated, “We all have a responsibility to integrate AI into our work.”
Some LinkedIn commentators highlighted the irony of using AI to cheat on AI training. One user, Iwo Szapar, the founder of an organization that assesses companies’ “AI maturity,” remarked that KPMG is “fighting AI adoption instead of redesigning how they train people. This is not a cheating problem. This is a training problem.”
In response, KPMG has put measures in place to detect AI misuse among its staff and plans to track how many employees break the rules.
Andrew Yates, the chief executive of KPMG Australia, acknowledged that “like most organizations, we are working on the role and application of AI in our internal training and testing. This is a complex challenge given the rapid societal embrace of these technologies.
“Given the widespread use of these tools, some individuals may violate our policies. We take such breaches seriously and are exploring ways to strengthen our approach in the current self-reporting era.”