Most people love ChatGPT despite the app explicitly warning that:
"ChatGPT may produce inaccurate information about people, places, or facts." (https://chat.openai.com/)
We thought, why not talk about the ugly truth about ChatGPT and what we’ve learnt so far?
You probably know more than ChatGPT
We’ve been testing ChatGPT for several weeks now, trying to understand its capabilities and limitations. At first, we assumed that if you’re an expert at something, chances are ChatGPT won’t know anything you don’t already know. Testing soon confirmed it: the AI’s knowledge was last updated in September 2021, or earlier if you use the free version. A lot has happened since then that the generative AI simply doesn’t know about. So, you probably know more than ChatGPT.
The AI constantly gives us wrong information about many apps. It almost feels like it fabricates steps from a general knowledge of how apps are usually structured. This kind of inaccurate information is misleading, and it’s dangerous to the digital transformation movement, which worries us. Here is an example:
We asked ChatGPT to write us a step-by-step “how-to” for using the audio recording feature in Noteful (for the free Noteful course we’re creating on Paperless Humans). We know ChatGPT doesn’t know Noteful because the app launched on the App Store after September 2021, the AI’s knowledge cutoff. Instead of doing the decent thing and admitting it doesn’t know the app, the generative AI confidently gives us instructions for something it has no information on.
ChatGPT tells us to “Tap on the microphone icon located at the bottom of the screen.” The microphone in Noteful is not at the bottom of the screen. It also says, “As you record, you will see a waveform appear on the screen to indicate that the app is picking up sound.” That is not how audio recording works in this note-taking app. Clearly, the generative AI is borrowing from other audio recorders without knowing that none of it applies to this specific app. And this is just one example we could check; imagine the nonsense it tells us about things we don’t know and can’t verify.
Conclusion: don’t use ChatGPT to learn new things you don’t know, or those you can’t verify for yourself. If you must verify it, why not just skip the generative AI altogether?
ChatGPT is not passing the plagiarism test
Most people claim that you can use ChatGPT to create new content. What are we missing? We asked the AI to write a non-plagiarised post on random topics in our niche. Both Grammarly and Quillbot flagged plagiarised sentences in almost every article ChatGPT produced. Could this be a battle of AIs? Seriously, if they are picking something up, what is Google picking up?
We suspect ChatGPT is recycling and paraphrasing the same information from a limited pool of sources. Soon enough, there will be nothing new to recycle; it’s probably just a question of when. How many times can the AI rework the same content before it starts repeating itself? Already, from the few requests we’ve made, we have started seeing patterns. The AI’s favourite word is probably “revolutionise”, because it uses it in almost every response we’ve had so far. Its idea of a brilliant topic is bolting that word onto any subject, and these are some of the suggestions we have got:
- Revolutionise your morning routine with Apple Reminders
- Revolutionise your digital note-taking skills
- 5 ways to revolutionise how you manage your team projects
Conclusion: always rewrite and test what you get from ChatGPT; not all of it is original. We have also stopped reading articles with “revolutionise” in the title, because they’re probably AI-generated.
What ChatGPT is actually good at
So, is ChatGPT completely useless? No, of course not. Humans have refined subjects like mathematics and the pure sciences over many centuries, and any subject that has been tried and tested for that long plays to the AI’s strengths. Mathematics especially is quite impressive for anyone with average Maths skills like ours. We asked it to calculate compound interest with percentage withdrawals, to see how long it would take us to reach a certain amount of money. A small part of that calculation took us about an hour by hand (we couldn’t even finish it); ChatGPT did the whole thing in seconds.
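For anyone curious, the kind of calculation we’re describing can be sketched in a few lines of Python. The balance, rate, and withdrawal figures below are illustrative assumptions, not the numbers from our actual session:

```python
# Sketch: compound interest with a percentage withdrawal each year.
# All figures here are made-up examples, not our real numbers.

def years_to_target(balance, annual_rate, withdrawal_pct, target):
    """Count the years until the balance reaches the target,
    compounding interest annually and then withdrawing a
    percentage of the balance at the end of each year."""
    years = 0
    while balance < target:
        balance *= 1 + annual_rate      # compound the interest
        balance *= 1 - withdrawal_pct   # withdraw a share of the balance
        years += 1
        if years > 1000:                # guard: target may be unreachable
            return None
    return years

# e.g. £10,000 at 8% interest, withdrawing 2% of the balance yearly,
# aiming for £20,000:
print(years_to_target(10_000, 0.08, 0.02, 20_000))
```

The net growth factor each year is (1 + rate) × (1 − withdrawal), so the loop is just repeated multiplication until the target is crossed; tedious by hand, instant for a machine.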
Take this with a pinch of salt, because only a Maths professor can determine just how good the AI is at Maths. It goes back to our idea of verifying everything the AI gives you. We’ve also tried a few medical science questions, and ChatGPT did alright. The AI could certainly pass a lot of medical school exams. That’s not to say it would make a good doctor, though.
We have also found the AI to be brilliant at brainstorming. When you have an idea, ChatGPT can help broaden it a little. Its output will contain a lot of fluff that you’ll need to cut, but brainstorming is probably its best use.
Without doubt, ChatGPT is learning from you. It’s gathering more information from us than it is giving. Each time you like or dislike a response or provide feedback, you’re training the AI and adding information to its database. ChatGPT remembers our conversations in their billions; imagine what it has learnt from us in just the last few months.
Should we be training AI or do we have a moral obligation to fight it, like all the movies have taught us? Humans against machines? We’d love to hear about your experiences with ChatGPT. Perhaps we’re missing something.