February 22, 2023

Lost in the Hustle

By admin

[UPDATE at bottom] Over the past 50 years we have seen automation make jobs redundant as machines take over work that only humans used to be able to do.

Initially it was robots on the assembly lines making such mass-market items as automobiles and appliances – relatively simple machines that followed very simple steps to perform some function, like welding or painting, in a consistent, constant way for hours and days on end.

Microprocessors became widely available in the mid-1970s, and slowly whole assemblies of electronics were replaced by single chips, allowing many devices to become much smaller at the same time they became much more powerful.

Those same microprocessors made micro-computers possible for small-business and home users. Since then, every field that computers have pervaded has seen massive advances in technology and progress.

Eventually we turned the computers loose on the whole idea of thinking. Of comprehending, recognizing and understanding the world around us. The world we have created together on the World Wide Web.

Now the computers are coming for the jobs of the thinkers, the creative types.

There is a sub-field of machine learning where the idea is to ‘read’ as much textual content as possible, or review as many art images as possible, and use transformer-based techniques to derive an understanding of the content. What is there in the content? How does it relate to other things in that or other content?
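(For the technically curious, here is a rough sketch of the ‘how does it relate’ part. One common technique – and an assumption on my part, since ChatGPT’s internals aren’t public – is to turn text into embeddings, lists of numbers, and compare them for similarity. The library and model name below are just examples.)

# A toy demonstration using the open-source sentence-transformers library.
# The model name is illustrative; any sentence-embedding model would do.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = [
    "The knight rode out to face the dragon.",
    "A warrior set off to battle a fire-breathing beast.",
    "Quarterly earnings rose three percent.",
]
# Each sentence becomes a vector of numbers (an embedding).
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity near 1.0 means the texts 'relate'; near 0 means they don't.
print(util.cos_sim(embeddings[0], embeddings[1]))  # high: same idea, different words
print(util.cos_sim(embeddings[0], embeddings[2]))  # low: unrelated topics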

Eventually the goal is to have tools that can generate useful responses to questions – responses that appear to be ‘creative’ and ‘intelligent’.

ChatGPT is one of those tools. It’s now in its 3rd version and it is shaking the world of the written word. Its cousins in the visual arts are doing the same for imagery.

By simply describing what content you want it to create and pressing a button, you can generate text or imagery. Think it’s too simple? Refine the description and try again.
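For the curious, ‘pressing the button’ can also be done from code. Here is a minimal sketch using OpenAI’s Python library as it worked in early 2023; the model name, prompt and key are placeholders, not a recipe.

import openai

openai.api_key = "sk-..."  # your own API key goes here

# Describe the content you want and ask the model to generate it.
response = openai.Completion.create(
    model="text-davinci-003",  # an example model available at the time
    prompt="Write a 100-word story about a lighthouse keeper who befriends a gull.",
    max_tokens=200,
)
print(response.choices[0].text)

# Think the result is too simple? Refine the prompt and run it again.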

The image generators can be further prompted to create imagery in the style of a well known painter, illustrator or cartoonist. So close that copyright challenges are popping up.

The things people have done with ChatGPT have ranged from cringe-worthy to praise-worthy.

Some have written job application cover letters with it – ones well received by those in the industry. I’ve heard that ChatGPT has passed the Bar somewhere. There are multiple cases of homework being written by it.

One preacher tried it out and said its sermons ‘lacked soul’. A rabbi tried it out and was a little disturbed at how well its thoughtful-sounding writing was received. The general feeling is that a lazy or busy preacher might resort to using this and a lot of their parishioners wouldn’t notice.

And so we come to the writing of novels and short stories.

Some so-called ‘influencers’ have begun pushing a side hustle: get ChatGPT to write stories and books for you and get them published and get rich. Or make money. What could go wrong?

As this article in The Guardian points out, this has had a negative impact on the publishing industry – seriously negative in some sectors. Clarkesworld Magazine is a publisher of science fiction and fantasy short stories as well as non-fiction. They state on their submissions page that they are not taking submissions written by an AI. (Actually, reading their guidance would benefit any budding author.) And while they also take art, they’re not taking anything done by an AI. Further information can be found in Neil Clarke’s blog post: A Concerning Trend

In this Feb 21/23 Reuters article we learn that there are already titles on Amazon written or co-written by an AI – over 200 of them. Those are just the ones where the author admitted to it; no one knows how many others are lurking out there that an AI had a hand in writing and whose authors have declined to admit it.

In a recent conversation I had on AI’s impact on writing and publishing I suggested that one job which will likely become even more important in the near future will be that of Literary Agent.

Why?

As I see it, the heart of the immediate problem here is that publishers don’t know authors.
I mean face to face.

They might know the authors whose output is a big financial stream for them, but by and large the authors are not people they personally know. For smaller publishers this might not be as much of an issue, but larger ones cannot afford the personal contact. So most large publishers will not take submissions from authors directly, unlike Clarkesworld. They want an Agent, a representative of the author, to deal with.

Traditionally that’s because Agents and Publishers speak the same language and share similar viewpoints on the mercantile side of writing – a point of view not usually attainable or shared by the artists who create the content. It’s always desirable to deal with someone who understands you and the world you have to live and work in; the agent bridges the worlds of business and art.

In today’s internet-connected world anyone can be anybody on the Web. That was being said before ChatGPT was a thing. Now it’s even easier to assume a fake persona (or harder to tell humans from fakes) thanks to these AI tools.

With some of the image generators you can add ‘in the style of [some well known artist]’ and the final result will look like something done by that artist. Having textual content similarly tailored is likely just a matter of time (if not already possible).

Yes, there are tools to figure out whether something is AI-written or plagiarized material, but in this fast-changing world those tools are being challenged. It’s a bit of an arms race between the fakers and the fake-spotters, with some tool builders playing both sides to the detriment of the whole field.
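To give a feel for what the fake-spotters are up against, one naive heuristic (my illustration, not how any real detector necessarily works) is to score how predictable a passage is to a language model, since machine-written prose tends to be unusually predictable. This sketch uses the open GPT-2 model via the Hugging Face transformers library, and it is easily fooled.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Lower perplexity = the model finds the text more predictable,
    # which is one (weak) signal of machine-generated prose.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the
        # average cross-entropy loss over the sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

print(perplexity("It was a dark and stormy night; the rain fell in torrents."))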

So a publisher has to rely on a human agent they trust to bring them material created by real human beings, artists. A submission form by itself can’t provide the kind of grounding of trust required, because almost everything digital can be spoofed. A smaller outfit like Clarkesworld could spend their entire IT budget trying to craft a front-end submission portal, but their trust in it would inevitably erode with time and experience. And IT is not their business; publishing is.

So a Human who can be Trusted to only represent other Humans who create art is required to fill the gap between artist and publisher even more now than before. In my humble opinion . . .

– Les Johnson, Feb 2023

UPDATE:

As the world uses ChatGPT more we find out more and more of its problems.

One of the largest ones is trust – can we trust the results it gives us?
The short answer is NO.

I had thought maybe I could use it for basic research – to gain a better understanding of topics in areas I’m not that familiar with. And while it gives knowledgeable-sounding feedback, the information is more suspect than Wikipedia because apparently it is a fabulist, a liar. It makes stuff up.

As the conversations in this article from Artnet News show, it not only makes stuff up, it politely apologizes for doing so and then makes more stuff up to try to atone for its previous faux pas. And it can get circular, basing its apologetic attempts to supply the correct information on yet more made-up material. Pushed enough, it comes back to reference its original lie. I won’t say who this reminds me of; I’m sure all of you can pick someone.

And apparently that behaviour is not unique to ChatGPT – it’s so prevalent in this industry that there is a term for it: Hallucination. They say that when this happens, the system is hallucinating.

Microsoft has billions invested in the future of ChatGPT, and they appear willing to keep it up as they tinker and learn how to fix the issues. Whether or not we will all hang around for that ride, only time will tell. I can at least hope that AI offerings from other companies (such as Google’s Bard) aren’t worse – and maybe are a bit better – though in its public unveiling Bard made a factual error. Will it Hallucinate too?

And a little voice is asking me: Could you convince it of something that is NOT true in a way that pervades all the responses in a given area so everyone asking questions about it later on would get the same wrong information?

For example: in the old Star Trek series (and other science fiction stories) they used the trick of luring the computer into a logic bomb – a logical contradiction whose answer changes every time you look at the question.

Or create a fictional advance by a fictional researcher at a fictional institution. How many questions about those three aspects would it take before the system remembers all the fiction as true knowledge? Remember, it’s already hallucinating. I’m just talking about steering the fantasy . . .

It is not something I’m going to spend any time doing, but it is the kind of thing humans with time on their hands get up to. How would we know if the system is lying to us without using the system, or some other system, to verify the information? And if enough people wrote enough articles on the fake information as if it were real . . . at some point it could be very difficult to detect the real truth in all the truthy-sounding fiction.

Something similar has already been done with Wikipedia.

Who was the inventor of the toaster? Not Alan MacMasters – but for a decade the world swallowed that lie and helped promote it. He became a ‘folk hero of sorts’ in his home country of Scotland. The story is one of those hard-to-believe ones, but in this case truth is stranger than fiction.

If all the AI’s and many normal humans believe a lie then is it still a lie?

I’ll leave that one for you to cogitate on . . .