Generative AI like ChatGPT reveals deep systemic issues beyond the tech industry

ChatGPT has cast long shadows over the media as the latest form of disruptive technology. For some, ChatGPT is a harbinger of the end of academic and scientific integrity, and a threat to white-collar jobs and our democratic institutions.

How much should we care about generative artificial intelligence (AI)? ChatGPT’s developers describe it as “a model… that interacts in a conversational way,” even as they call it a “terrible product” because of its inconsistent results.

It can write emails, summarize documents, review code and provide feedback, translate documents, create content, play games and, of course, chat. This is hardly the stuff of a dystopian future.



Learn more:
Unlike academics and journalists, you can’t verify when ChatGPT is telling the truth


We should not fear the introduction of technologies, but neither should we assume that they serve our interests. Societies are in a constant process of cultural evolution defined by the inertia of the past, temporary consensus and disruptive technologies that introduce new ideas and approaches.

We must understand and embrace the co-evolution of humans and technology by considering what a technology is designed to do, how it affects us, and how our lives will change.

Are ChatGPT and DALL-E really creators?

Along with intelligence, creativity is often considered a unique human ability. But creativity is not exclusive to humans – it is a property that has emerged across species as a product of convergent evolution.

Species as diverse as crows, octopuses, dolphins and chimpanzees can also improvise and use tools.

Despite the liberal use of the term, creativity is notoriously difficult to pin down. Its features include the quantity of output, the ability to identify connections between seemingly unrelated things (distant associations) and the ability to provide atypical solutions to problems.

Creativity does not just reside in the individual; our social networks and our values are also important. As the presence of cultural variants increases, we have a greater pool of ideas, products and processes to draw from.

Visitors view artist Refik Anadol’s Unsupervised exhibition at the Museum of Modern Art in New York in January 2023. The AI-generated art installation is meant to be a thought-provoking interpretation of the museum’s prestigious collection.
(AP Photo/John Minchillo)

Our cultural experiences are resources for creativity. The more diverse the ideas we are exposed to, the more new connections we can make. Studies have suggested that multicultural experience is positively associated with creativity: the greater the distance between cultures, the more creative products we can observe.

Creativity can also lead to convergence. Different people can arrive at similar ideas independently of one another, a process called scientific co-discovery. The invention of calculus and the theory of natural selection are among the most striking examples.

Artificial intelligence is defined by its ability to learn, identify patterns and use decision-making rules.

If linguistic and artistic products are patterns, then AI systems – especially generative ones like ChatGPT and DALL-E – should be capable of creativity by assimilating and combining divergent patterns from different artists. Microsoft’s Bing chatbot says creativity is one of its core values.

AI needs people

There is a fundamental problem with such programs: art is treated as mere data. By ingesting these products through a process of analysis and synthesis, such programs can ignore the cultural contributions and traditions of human creators. Without citing and crediting these sources, they may be considered high-tech plagiarism, appropriating artistic products that took generations to accumulate. Concerns about cultural appropriation must also apply to AI.

AI may one day evolve in unpredictable ways, but for now, it still relies on humans for its data, design and operation, and humans must still grapple with the social and ethical issues it presents.

Humans are always needed for quality control. Often hidden within the impenetrable AI black box, these operations are frequently outsourced to markets where labor is cheaper.

CNET’s recent high-profile “AI journalist” story provides another example of why skilled human intervention is necessary.

CNET quietly began using an AI bot to write articles in November 2022. After other news sites reported major errors, CNET ended up publishing lengthy corrections for the AI-written content and conducting a full audit of the tool.

A robotic hand and a human hand touch index fingers, mimicking Michelangelo’s famous painting The Creation of Adam.
AI may one day evolve in unpredictable ways, but for now, it still relies on humans.
(Shutterstock)

Currently, there are no rules for determining whether AI products are creative, consistent, or meaningful. These are decisions that have to be made by people.

As industries adopt AI, roles once occupied by humans will be lost. Research tells us that these losses will be felt most by those whose positions are already vulnerable. This pattern follows a general trend of adopting technologies before understanding – or caring about – their social and ethical implications.

Industries rarely consider how a displaced workforce will be retrained, leaving these people and their communities to deal with these disruptions.

Systemic issues go beyond AI

DALL-E has been described as a threat to artistic integrity because of its ability to automatically generate images of people, exotic worlds and fantastical scenes. Others claim that ChatGPT has killed the essay.

Rather than seeing AI as the cause of new problems, we might better understand the ethics of AI as bringing attention to old ones. Academic misconduct is a common problem driven by underlying issues including peer influence, perceived consensus and perceptions of penalties.

Programs like ChatGPT and DALL-E will only facilitate such behavior. Institutions need to recognize these vulnerabilities and develop new policies, procedures and ethical standards to address these problems.



Learn more:
ChatGPT: Students could use AI to cheat, but it’s a chance to completely rethink assessment


Questionable research practices are also not uncommon. Concerns about AI-authored research papers are merely an extension of improper authorship practices, such as ghost and gift authorship in the biomedical sciences. These practices hinge on disciplinary conventions, outdated academic reward systems and a lack of personal integrity.

As editors think through AI authorship issues, they must also deal with deeper problems, such as why the mass production of academic papers continues to be encouraged.

New solutions to new problems

Before handing over responsibility to institutions, we need to ask ourselves whether we are providing them with sufficient resources to meet these challenges. Teachers are already exhausted, and the peer-review system is overloaded.

One solution is to fight AI with AI using plagiarism detection tools. Other tools can be developed to attribute a work of art to its creators, or detect the use of AI in written documents.

The solutions to the problems AI raises are not simple, but they can be stated simply: the fault is not in our AI, but in ourselves. To paraphrase Nietzsche, if you gaze into the abyss of AI, it will gaze back into you.
