
Telling apart the work of AI and the work of human hands
You might have done it yourself: run a photo through a certain app that recently made headlines for recreating images in the famous style of Studio Ghibli, and proudly display the output on social media. And you might have seen some of the controversy that followed: the discovery that the app's creator blatantly faked a cease-and-desist order to whip up support; the warning from an award-winning photographer that such apps will facilitate the creation, dissemination, and concealment of child sexual abuse material; the overloading of ChatGPT's capacity in the sudden rush to try the app.
When technology can superficially duplicate the work of human hands, how do we differentiate the two, and how do we value them?
In the most recently released edition of People Matters Perspectives, written before the app became notorious, we asked the question:
Why pay a human when the AI can do it?
The answer we arrived at was one of judgement and critical thinking. The list below, focusing on written content, is our shortlist of criteria for identifying the lowest-value form of generative AI work: that done without any care or judgement exercised by the human using the tool.
Depth of thinking not reflective of the person’s actual capabilities: e.g. someone with 20 years of senior-level experience regurgitating the chapter summaries of a textbook.
Very broad scope and very shallow depth: e.g. 20 different sub-topics in 800 words, each one given only a few generic sentences, with no elaboration.
No contextualisation: motherhood statements lacking data, no specific issue addressed, no connections drawn to external events or the current environment.
No underlying thesis: no question posed or answered, no element of discovery or analysis, a general air of ‘words for the sake of having words’.
Our criteria can also apply to images created using the Ghibli-imitator app, or indeed any other tool that uses generative AI solely to duplicate existing styles: a superficiality of appearance, with little to no underlying story element, showcasing no human capabilities or even human thought. This is hardly surprising, given that the overwhelming majority of images fed into the app are stand-alone moments in time that hold meaning only for the person who took the photo - if at all - and are completely uncurated for any other purpose.
What generative AI tells us about the value of human work
The output of generative AI is worth only as much as the human work that went into it: how much contextualisation against real-world experience, how much critical thinking, how much refinement of scope and curation. The tool is only that: a tool.
On social media, for private use, generative AI presents its own scale of problems, ranging from ethics to personal data security to the impact on cognitive capability. One of the major inherent problems is that it devalues the work of human hands through sheer scale, if only because end users lack both the ability and the motivation to differentiate quality work from mediocre work, and mediocre work from the lowest-value kind described above.
Take this into the workplace, for professional use, and managers do have plenty of motivation to differentiate between good quality work and poor quality work. Harvard and MIT Sloan studies have found that generative AI can elevate the quality of people's work, especially for the less skilled or less experienced. But at the same time, reliance on generative AI can cause performance to plateau - or to hit a skill ceiling - if the worker derives all their quality from the tool and fails to improve their skills to surpass what the tool can do.
It is a little like the difference between using a dollar store ballpoint and a Copic or Pentel premium sketching pen to draw. The high-end tool will make a beginner's individual lines look good, but won't do much for the overall picture. The cheap ballpoint in an expert's hand will turn chicken scratches into part of a photorealistic landscape.
This is the deeper truth hidden beneath the flashy popular appeal of the promptist trend and the cheap, easy imitation of a style we like: generative AI's work, however hyped, is ultimately the work of humans. It may be trained on data that was obtained under false pretences (such as enticing people to upload a vast variety of facial images on the pretext of making them look like Studio Ghibli characters) or outright stolen from databases that were supposed to be paywalled and watermarked. But that data was also originally the work of humans.
And the output, derived as it is from human instructions, is also human work. When we place value on something that generative AI created - whether it is a professional-level essay curated and polished over the course of several hours or a secondary school word salad popped out by a five-second prompt - we are, consciously or not, placing value on the work that the human prompter put in.
Perhaps, sadly, what generative AI has been telling us all along is that we simply do not value the work of our fellow human beings enough to differentiate between thought and trend, skill and superficiality, literacy and laziness.
The author of this article does not support the use of generative AI to copy the style or content of creators for personal or professional dissemination.