(This column originally appeared in The Guardian)
Artificial intelligence sure has been taking a lot of flak lately.
Only 8.5% of the 48,000 people recently surveyed by accounting firm KPMG said that they “always” trust AI search results. Another report from Gartner found that more than half of consumers don’t trust AI searches, with most reporting “significant” mistakes.
A McKinsey study found that 80% of companies using generative AI have seen “no significant bottom-line impact”, with 42% of them abandoning their AI projects altogether. An MIT study found that 95% of the AI pilot projects at the big companies it surveyed “failed”.
And now there’s workslop!
A new study published in the Harvard Business Review says that more than 40% of US-based full-time employees reported receiving AI-generated content that “masquerades as good work but lacks the substance to meaningfully advance a given task”. This “workslop” is “destroying productivity”, according to the study’s researchers.
Who is really to blame for workslop? Sure, blame big tech companies for yet again releasing untested and unproven products before they’re ready for prime time. Or blame the media and tech community who, for the past three years, have been writing pieces like “Yahoo Japan wants all its 11,000 employees to use Gen AI to double their productivity by 2028” or “AI will replace doctors, teachers, and make humans ‘unnecessary for most things’”. All of this creates a lot of unnecessary hype and unfounded expectations.
But in the workplace, the buck always stops with the boss. The responsibility for AI’s “workslop” lies fully at the feet of the employer.
For more than 20 years, my company has implemented customer relationship and financial management applications at hundreds of small and mid-sized businesses across the country. We’ve worked with thousands of employees. We’ve had good projects and outright failures. As technology consultants, we’ve made our share of mistakes. But the most common root cause of technology disappointments, failures and letdowns can always be found with the people who are buying and implementing the product.
So before throwing shade at software companies rolling out AI, I think it’s fair to ask employers a few questions.
For example, did you invest in training for your employees? Do your employees truly understand how to create the right prompts to get the best answers? Has your company standardized on an AI assistant, or is it just a free-for-all mess of apps?
Do you have an AI policy that formalizes what AI can and cannot be used for and who can and cannot use it? Do you have a designated person in your company who is responsible for your AI-based applications? Has this person been trained and provided with the technical support needed to do the job? Are you working with a competent partner, consultant or developer to provide these kinds of services?
Most importantly, do you actually have a plan for using this technology effectively or are you just leaving it up to your employees to figure it all out? Do you have specific metrics for measuring AI’s effectiveness, or are you just relying on vague assumptions of “productivity”?
Unfortunately, many employers are duped by big tech into thinking that they can just press a button and their software will start doing magical things that spew out money for their business. But, so as not to scare people away, these same tech companies don’t warn their customers about all the other things that need to happen, and the money that needs to be spent, to get the most out of their product. In most cases, the software is not the problem. It’s the lack of investment in the people using it.
AI can be a powerful tool if deployed the right way and with the right expectations. But in the end it’s just that: a tool. And new tools require thought, training, processes and investment. In the end, AI doesn’t produce “workslop”. Employers do.