Now that my experiment is complete, I have formed some ideas about potential solutions (or rather, mitigations) to the harmful problems that AI chatbots bring.
- Create a program that can identify AI-generated content
- Let AI programs be transparent about what they generated
- Add a human verification to content
Let me elaborate.
Create a program that can identify AI content
This idea is not new; it is already being applied to deep-fake videos. The creators of each new AI model know the characteristics of its output and can train a detector that estimates the likelihood that a given text was generated by their language model.
While this works well for images and video, text is much more difficult: real-world text is far more diverse than real-world footage.
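As an illustration of the idea only, a trivial "detector" could score text on a single surface statistic, such as the type-token ratio. Real detectors are trained classifiers over many features; the function names and the threshold below are made up for this sketch:

```python
def burstiness_score(text: str) -> float:
    """Toy heuristic: returns the type-token ratio (unique words / total
    words). The premise (illustrative, not validated) is that model
    output repeats words more uniformly than human text."""
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)


def looks_generated(text: str, threshold: float = 0.5) -> bool:
    # A real detector would be a classifier trained on the model's own
    # outputs; this threshold is arbitrary and for illustration only.
    return burstiness_score(text) < threshold
```

Even this toy example hints at the diversity problem: plenty of genuine human writing (lyrics, legal boilerplate) would score "generated" under any single surface statistic.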
AI programs should be transparent about what they generated
If ChatGPT saved all the content it generated, OpenAI could offer an API where you feed in text and check whether it was generated by them. Of course, people would try to trick it by slightly modifying the output. But in that case, perhaps the text is no longer completely generated anyway.
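To make lookup robust against slight modifications, such a service could fingerprint overlapping word n-grams instead of whole texts, so a lightly edited copy still partially matches. A minimal sketch of that idea, with entirely hypothetical class and method names (this is not an actual OpenAI API):

```python
import hashlib


class GenerationRegistry:
    """Sketch of a provenance registry: the provider records a
    fingerprint of every text it generates, and anyone can later ask
    what fraction of a given text matches recorded output."""

    def __init__(self, shingle_size: int = 5):
        self.shingle_size = shingle_size
        self.fingerprints: set[str] = set()

    def _shingles(self, text: str) -> set[str]:
        # Hash each window of `shingle_size` consecutive words.
        words = text.lower().split()
        n = self.shingle_size
        return {
            hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
            for i in range(max(len(words) - n + 1, 1))
        }

    def record(self, generated_text: str) -> None:
        # Called by the provider for every piece of generated output.
        self.fingerprints |= self._shingles(generated_text)

    def match_ratio(self, text: str) -> float:
        # Fraction of the query's shingles seen in recorded output:
        # 1.0 for a verbatim copy, between 0 and 1 for a light edit.
        shingles = self._shingles(text)
        if not shingles:
            return 0.0
        return len(shingles & self.fingerprints) / len(shingles)
```

Changing a single word only breaks the shingles that span it, so an edited copy still returns a high ratio, while the registry stores only hashes rather than the raw generated text.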
Add human verification to content
If we could be sure that a real person created content, most problems would be solved. However, this might be a (near) impossible task. How would you even verify that? Should someone film themselves writing, and publish that with the article? Should every computer contain a “non-AI mode”?
One of the most difficult challenges in computer software is verifying that a “user” is a human. Twitter, for example, still struggles with this after years of effort.