KPMG Partner Fined A$10,000 for AI Exam Cheating

KPMG Australia has fined a partner A$10,000 after an investigation found that 28 employees had used AI tools to cheat on internal training examinations. The incident has drawn heightened scrutiny from regulators and lawmakers.

KPMG Australia CEO Andrew Yates says the firm is grappling with the implications of AI in its training and testing processes.

Rahul Goreja, New Delhi

A KPMG Australia partner has been fined A$10,000 for using artificial intelligence (AI) tools while completing an internal training course about AI, the Australian Financial Review (AFR) reported.

The unnamed partner completed an AI training program in July that required downloading a reference manual. In breach of company policy, the partner uploaded the manual to an AI tool to help answer an exam question, according to the report.

KPMG disclosed that 28 employees, including the fined partner, used AI tools to cheat in training assessments during the current financial year. The others involved were at managerial level or below.

KPMG Australia CEO Andrew Yates acknowledged in an interview that the firm is struggling to manage the use of AI in training and examinations. “It’s quite a difficult issue to manage given the rapid adoption of it in society,” he commented.

In response, KPMG has deployed monitoring tools to detect AI-assisted cheating in training and intends to report fully on the issue at its annual results announcement.

“Upon initiating monitoring for AI in internal examinations in 2024, instances of policy violations emerged. We quickly conducted a comprehensive educational campaign across the firm and have continued to introduce new technologies to restrict AI access during assessments,” Yates added.

Detection of Cheating Incidents

The cheating first came to light when Australian Greens senator Barbara Pocock asked about a “misdemeanor” at the consultancy during an inquiry, according to the Financial Times.

Current Australian regulations do not require audit firms to report such misconduct to the Australian Securities and Investments Commission (ASIC), the nation’s corporate regulator; individual partners are responsible for self-reporting to their respective professional bodies. According to the AFR report, KPMG said it voluntarily notified ASIC as part of ongoing discussions with the regulator.

However, in response to questions from Pocock, ASIC said KPMG had not submitted a report about auditors using AI to cheat before the cases were highlighted in an AFR report in December 2025.

Following this, ASIC contacted KPMG, and the consultancy voluntarily supplied the regulator with updated information, the report said.

Senator Pocock has called for stricter reporting mechanisms, citing unethical behavior at major consultancy firms.

“Self-reporting unethical behavior is a farce. The existing reporting system is not only inadequate, it’s laughable. We need enhanced transparency and strengthened reporting protocols,” Pocock stated.

The incident is the latest in a series of unethical uses of AI by Big Four firms. According to the Financial Times, all four major accounting firms have been fined in recent years over cheating scandals in various countries. In one recent example, Deloitte Australia submitted an error-riddled report to the Australian government and later agreed to a partial refund after admitting that AI had been used to prepare the A$440,000 document.

KPMG Australia’s case highlights the challenges AI misuse poses in corporate environments and has spurred calls for stricter regulation and greater transparency in the consultancy sector. As firms navigate AI adoption, maintaining integrity in training and assessments will be crucial for preserving trust and accountability.
