Artificial intelligence has firmly embedded itself in the modern workplace, evolving into a true collaborative tool rather than a futuristic novelty. That’s the key takeaway from a global survey conducted by Melbourne Business School in partnership with KPMG, drawing insights from over 48,000 participants across 47 countries.
According to the report, nearly 60% of employees now use AI tools proactively, with one-third engaging with the technology at least weekly. The advantages are clear: users report saving time, accessing information more efficiently, and driving innovation. Almost half of the respondents even say that AI has helped boost revenue-generating activities within their organizations.
However, the rapid adoption of AI is not without complications. One of the primary challenges is the absence of uniform standards for training and policy across companies. This inconsistency has created a fragmented landscape, where many employees rely on self-learning or informal peer guidance. As a result, disparities in skill and understanding are emerging, leading to uneven quality in AI-assisted work.
Skepticism also lingers. Some workers question whether using AI tools still qualifies as “real” work, while others fear being judged by colleagues for embracing automation.
Ethical concerns are front and center as well. With AI increasingly involved in decision-making and content creation, issues of accountability and transparency are gaining prominence. Both employers and employees are struggling to ensure that AI-generated output adheres to legal and ethical standards, prompting growing calls for robust ethical frameworks and usage guidelines.
AI is not just reshaping workflows—it’s challenging employees to reconsider their roles, skills, and professional identity. In response, a significant trend of covert usage is emerging: 57% of respondents admit to presenting AI-generated content as their own without disclosing the involvement of such tools.
This hidden reliance is cause for alarm. By concealing AI's role, employees risk eroding trust and integrity within their teams. It undermines the value of human input and fosters a culture of dishonesty, which could have long-term consequences for workplace cohesion and credibility.
Compounding the issue is the lack of due diligence—66% of users report not verifying AI-generated content before using it. This is particularly troubling given that AI systems can produce errors or biased information. Without careful oversight, flawed outputs may lead to poor decisions and negative outcomes for organizations.
At the root of many of these issues is a shortfall in guidance. Fewer than half of those surveyed have received formal training on AI, and just 40% report that their employer has established a clear usage policy. Meanwhile, half of the respondents feel pressured to master these tools quickly or risk falling behind professionally.
To navigate these growing pains, experts recommend a more holistic approach to AI integration. This includes providing structured training, establishing clear and transparent policies, and fostering a workplace culture grounded in ethical practices and continuous learning. Organizations that invest in these areas are more likely to see improvements in employee engagement, performance, and innovation.
“AI in the workplace is clearly driving performance gains, but it’s also introducing risks related to careless and opaque use,” said Nicole Gillespie of the University of Melbourne.
The survey also reveals how unsupervised, and potentially risky, the use of these tools has become: nearly half of all employees admit to inputting sensitive data into public AI platforms like ChatGPT, and 44% confess to bypassing internal guidelines in favor of more accessible external tools.