Assistant Professor of Management, University of Washington, Tacoma
The consumer-facing release of OpenAI’s GPT large language model, ChatGPT, dramatically increased public attention to generative AI. Whether you want a poem about your dog or a fix for a broken section of code, ChatGPT is there not only to help but to create that text for you. Numerous other generative AI systems (e.g., Midjourney and Stable Diffusion) can create artwork from text instructions, and OpenAI’s new model, Sora, shows incredible initial promise to do the same with video. We are quickly entering a world where AI-powered models can perform many of the same work tasks as humans, including writing, reading, evaluating, and creating. Writ large, generative AI carries enormous potential to reshape the process and structure of work.
Policymakers and scholars have been quick to note the many ethical issues generative AI raises. Will generative AI take all our jobs, or make workers so efficient that unemployment skyrockets? How acceptable is it that these systems harvested enormous amounts of data from internet users without compensating them? Will these new technologies create “winner-take-all” economies, concentrating power in the few companies with the enormous financial resources necessary to train and create these models?
One interesting ethical concern generative AI raises is the question of who deserves credit for AI-augmented, or even AI-generated, work. In organizations, it can be difficult to parse who deserves credit for a particular work outcome, especially in cases of group work. How much credit should Mark get for a project he completed with Janice, Steven, and Enrique? As any university student who has completed a group project can attest, the answer is often more nuanced and complicated than “one-fourth of the credit because there were four people.”
In reviewing the literature relevant to this question, my coauthor (Glenn R. Carroll, a professor of organizational behavior at Stanford’s Graduate School of Business) and I quickly noticed that norms of credit vary widely across industries and organizations. For example, in glassblowing, it is extremely common for artists to use uncredited assistants when creating solo-authored work. Numerous Italian masters’ art studios followed a similar model, in which the credited artist developed a concept and apprentices executed the vision. Ghostwriting takes this process to the extreme: someone claims credit for a book written by an entirely uncredited author. Although these arrangements might seem strange to lay observers, all three are common and (relatively) accepted in their industries.
So, what happens when someone claims credit for work assisted by generative AI? In our research, we found that people were actually more likely to give credit to someone whose work was augmented by AI than to someone whose work was identically augmented by another person. For example, in one on-campus study conducted at the University of Washington Tacoma, we asked participants about a “Cat Cafe” business proposal developed by an entrepreneur, George. Crucially, the business wasn’t only George’s idea: we randomly assigned participants to read that either another person or an AI-driven algorithm helped with the proposal. Specifically, this person (or algorithm) recommended industries to operate in, locations to consider, and pricing and staffing strategies for a small business like this. Does George get credit for the idea, given that he had help?
Across a number of studies and a variety of contexts (e.g., artistic creation, work outputs, and entrepreneurship), we found that people gave “claiming creators” (like George) more credit for work when they were assisted by AI than when they were assisted by another person. One reason for this had to do with ethics and justice: when people conceal another person’s involvement in their work, it appears unfair and unethical toward the person being denied the social and economic benefits of authorship. A second reason concerns oversight. When a creator works with AI, people tend to assume the creator must actively manage it, checking inputs and outputs and systematically evaluating its work product. Conversely, when a creator works with a person, people assume the creator is simply “outsourcing” the work, potentially to get away with doing less of it. Notably, any sort of assistance, AI or human, lowers credit attributions compared to a creator making something entirely on their own.
Who (or what) gets credit when people work with AI is a thorny question. A recent court ruling held that AI-generated art cannot be copyrighted, a landmark decision that will undoubtedly be the first of many. If you use an AI art generator to illustrate a humorous comic strip you came up with the idea for, are you an “illustrator”? If you use a sophisticated text-to-video generator to make a romantic comedy based on your life, are you a “director”? AI is fundamentally reshaping how people create, which raises big questions about what “credit” and “authorship” mean in the age of intelligent machines.