Monday, February 27, 2023

AI - Why Humans Will Need a Label

Also In This Edition:

  • Can ChatGPT write code?

Humans Need a Label

Since last month's diatribe, it seems AI has exploded onto the scene (at least in all the tech news outlets). Some of the coverage is positive ("ChatGPT can find case law faster and easier than conventional keyword searches!") (oh wait, that's a lie!), but most of it is negative ("Look at all of these factual errors it's confidently spitting out!!" or "This can be used to easily create malware!"). Then I read that some people have developed empathy for these things, and that others have been using chatbots to write books with little to no effort and self-publish them on Amazon, threatening to exponentially amplify the amount of crap out there. As chatbots proliferate, we'll need humans to sort through their output for what's factually correct in order for it to have any value. Back in my day, those people were called "editors".

It then occurred to me that there will soon be a need for certification, similar to what farmers' associations came up with to let consumers know whether crops were grown organically or whether a food product contains preservatives. People who don't want to double-check every fact spit out by a chatbot will need a way to know that what they're consuming has had some human quality control applied: weeding out demonstrable falsehoods (Flat Earth? 5G towers cause COVID?), labeling rumors appropriately ("Climate change is a hoax from China"), and providing proper historical context. Plus, it's been demonstrated that the tools that allegedly detect whether ChatGPT was used to create something are about as reliable as the chatbot output they're trying to authenticate.