Five experts answer the question, 'What will the rise of ChatGPT and generative AI mean for artists and knowledge workers?'

AI and the future of work: 5 experts on what ChatGPT, DALL-E and other AI tools mean for artists and knowledge workers
https://theconversation.com/ai-and-the-future-of-work-5-experts-on-what-chatgpt-dall-e-and-other-ai-tools-mean-for-artists-and-knowledge-workers-196783

Lynne Parker, associate vice chancellor at the University of Tennessee in the United States, points out that 'large-scale language models make creative and knowledge-based work accessible to everyone.' Tasks like turning your thoughts into polished sentences, images, and illustrations, or accurately summarizing and paraphrasing text, previously required specific skills and knowledge. With tools like ChatGPT and DALL-E 2, however, people can express themselves and organize vast amounts of information simply by entering short commands (prompts).
Parker also notes business cases where novices can use generative AI to reach expert-level quality in just a few minutes, such as creating illustrations for business presentations or generating new program code that performs a desired function. However, because precise prompts are needed to produce the desired content, 'a simpler, entirely new kind of creativity is required,' Parker said.

While Parker acknowledges the significant benefits of making generative AI accessible to everyone, she also points out significant drawbacks to the development of AI tools. She describes writing as 'one of the most important human skills that will remain crucial for years to come,' and she worries that the widespread adoption of generative AI could accelerate the loss of that skill. She also acknowledges that AI tools raise questions about intellectual property protection, and she believes related litigation could shape the design and specifications of future LLMs.
Daniel Acuna, an associate professor of computer science at the University of Colorado Boulder, shares Parker's concerns about the potential for intellectual property infringement and plagiarism. He also cites inaccuracies, both large and small, and the amplification of bias as concerns about generative AI. Generative AI can produce content that suggests previously unseen ideas and new solutions, and it can be used creatively when its output is reviewed and its quality assessed. Without critical scrutiny of generated code and the logic of generated text, however, the output can contain errors, both large and small, such as inefficient code and clearly incorrect reasoning. 'For those who aren't critical of what their AI tools produce, these tools are potentially harmful,' Acuna says.
Additionally, language models can reinforce prejudice because they learn from biased data and then reproduce those biases. Conversational AI such as ChatGPT is designed to avoid answering inappropriate questions, but methods have been devised to circumvent such safeguards. Acuna points out that this is even more pronounced in image generation models, where images are generated from short prompts, which tends to reinforce stereotypes about race, gender, occupation, and so on. In fact, a paper on machine learning models published in November 2022 showed that 'text-to-image generation amplifies demographic stereotypes at large scale.' Research has also shown that retraining models on AI-generated content can lead to misinterpretation of data and the exclusion of minorities.

'These tools are still in their infancy, considering the possibilities,' Acuna said, referring to the inaccuracies, biases, and the possibility that training data contains plagiarized material. However, he expressed optimism that these are all technically solvable problems, and that the development of tools for fact-checking, bias removal, and plagiarism detection will make them better creative tools.
Kentaro Toyama, a professor at the University of Michigan School of Information, acknowledges that 'technology generalizes human abilities and helps everyone produce similar results,' but argues that technological advances are also undermining the claim that 'cognitive tasks require a human brain.' 'I believe we are approaching the singularity, the moment when computers meet and surpass human intelligence,' he says, predicting that human intelligence and creativity will be valued only in limited areas.
For example, IBM's Deep Blue chess-specific supercomputer beat the world chess champion in 1997, but that hasn't diminished the popularity of chess. Toyama suggests that because human players bring personalities and drama, there are cases where it matters that a human does something, even if a computer could do it just as well.

On the other hand, when it comes to illustrations and drawings, for example, many readers may not care whether they were drawn by a human or generated by a computer. Toyama commented on the future of AI, saying, 'Many fields will become a hybrid of human capabilities and AI, with only niche fields remaining where humans work solely, and most of the work will be done by computers. It's almost certain that advances in AI will result in the loss of many more jobs, the enrichment of a handful of creative professionals with uniquely human skills, and the creation of new super-rich people who own creative technology.'

Mark Finlayson, an associate professor of computer science at Florida International University, predicts that just as the spread of word processors led to the demise of typewriter jobs, LLMs will eliminate old jobs while creating new jobs and skills. No matter how advanced LLMs become, crafting prompts that produce the desired output may still require a certain level of intelligence. Furthermore, because LLMs lack an abstract, general understanding of right and wrong and of common sense, they can produce inappropriate or meaningless output without warning, so the ability to scrutinize their results remains necessary.
'The potential failure modes of LLMs present an opportunity for creators and knowledge workers who often believe their jobs will be taken over by AI,' Finlayson said. Furthermore, because LLMs and generative AI are designed for general use, businesses that generate specialized types of content to serve specific markets will still require deeper expertise. Overall, Finlayson suggested that while LLMs are undoubtedly a harbinger of disruption for creators and knowledge workers, valuable opportunities abound for those who make the effort to adapt.
Casey Greene, a professor of biomedical informatics at the University of Colorado Anschutz Medical Campus, echoes Finlayson's point that leaps in new technology can give rise to new skills. 'Just as the emergence of Google transformed the skill of searching for information on the internet, the skills needed to extract optimal output from language models will center on prompt and template creation,' Greene said, describing the shift in skills required. 'In an era where AI models are widely accessible, how people interact with the world will change. The question is, will society use this moment to improve equity or worsen inequality?' he said of the changes that will result from the spread of AI.
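The 'prompt and template creation' skill the experts describe can be illustrated with a minimal sketch: a reusable prompt template whose slots are filled with task-specific values before the text is sent to a language model. The template wording, field names, and function below are illustrative assumptions, not something from the article.

```python
# A minimal sketch of prompt templating: a reusable prompt with
# task-specific slots, filled in before being sent to an LLM.
# The template text and slot names here are illustrative assumptions.
from string import Template

SUMMARY_TEMPLATE = Template(
    "You are an expert $role.\n"
    "Summarize the following text in at most $max_sentences sentences, "
    "for an audience of $audience.\n\n"
    "Text:\n$text"
)

def build_prompt(role: str, audience: str, max_sentences: int, text: str) -> str:
    """Fill the template's slots; the result would be sent to a model API."""
    return SUMMARY_TEMPLATE.substitute(
        role=role,
        audience=audience,
        max_sentences=max_sentences,
        text=text,
    )

prompt = build_prompt(
    role="science editor",
    audience="non-specialists",
    max_sentences=2,
    text="Large language models predict the next token in a sequence...",
)
print(prompt.splitlines()[0])  # → You are an expert science editor.
```

Separating the fixed template from the variable slots is one simple way the 'new kind of creativity' Parker describes gets practiced: the craft shifts from writing each prompt from scratch to designing templates that reliably produce good output.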