OpenAI avoided two security risks that could have had serious consequences for the company’s future.
To say this has not been a good week for OpenAI would be an understatement. The American company has its hands full with two security risks: one involving the new ChatGPT app for macOS, the other a hacker who stole company information.
It is common knowledge that Apple enforces strict security rules across the App Store; as a developer, you must comply with them if you want to release anything there. However, the ChatGPT app for macOS is not available in the App Store, only on OpenAI’s website. And that is exactly where the problem lies.
Problems with the ChatGPT app for Mac
Because the ChatGPT app can only be downloaded from OpenAI’s website, it does not have to comply with the App Store’s security rules, which increases the risk of security issues. Developer Pedro José Pereira Vieito discovered that the conversations users had with the app were stored locally and unencrypted.
The combination of an unsandboxed app and unencrypted storage makes it easy for sensitive data to end up on the street, and thus in the wrong hands. OpenAI has since released an update for ChatGPT that appears to fix the problem.
The OpenAI ChatGPT app on macOS is not sandboxed and stores all the conversations in **plain-text** in a non-protected location:
~/Library/Application Support/com.openai.chat/conversations-{uuid}/
So basically any other app / malware can read all your ChatGPT conversations: pic.twitter.com/IqtNUOSql7
– Pedro José Pereira Vieito (@pvieito) July 2, 2024
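To illustrate why this matters: on macOS, any process running as the same user can read files in another app’s unsandboxed Application Support folder. The Swift sketch below shows how trivially a bystander app could have scooped up those conversation files. The directory name comes from the tweet above; the exact file layout inside it is an assumption made for illustration.

```swift
import Foundation

// Minimal sketch, not the actual exploit: any unsandboxed process running as
// the same user could enumerate and read the unencrypted conversation files
// described in the tweet. The directory name comes from the tweet; the file
// layout inside it is an assumption made for illustration.
let fm = FileManager.default
let base = fm.homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/com.openai.chat")

if let relativePaths = try? fm.subpathsOfDirectory(atPath: base.path) {
    for relative in relativePaths where relative.contains("conversations") {
        let url = base.appendingPathComponent(relative)
        // Plain-text storage means a straight read suffices: no keychain,
        // no decryption, no permission prompt for the user to notice.
        if let contents = try? String(contentsOf: url, encoding: .utf8) {
            print("Readable without any protection: \(relative)")
            print(contents.prefix(200))
        }
    }
}
```

Sandboxing would confine other apps to their own containers, and encrypting the files at rest would make a simple read like this useless, which is presumably the kind of protection OpenAI’s update adds.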
Hacker steals AI technology from OpenAI
There was more troubling news about OpenAI this week. In 2023, a hacker gained access to an internal forum where OpenAI employees discussed, among other things, trade secrets about ChatGPT.
The hacker did not have access to the company’s systems and therefore could not reach servers or user data. He did, however, manage to find out what OpenAI was developing besides ChatGPT at the time, presumably because he could monitor internal communications.
OpenAI’s management decided not to make the incident public, judging it to be an internal matter, and shared it only with company employees.
Danger dodged, but stay sharp
OpenAI appears to have fixed both security problems, but that does not change the fact that these incidents were serious. Encrypting user data is standard practice in most apps, and letting a hacker get into your internal communications that easily does not look good either.
It hardly needs explaining that incidents like these can have major consequences. The danger seems averted for now, but it would not hurt if OpenAI checked all the locks on ChatGPT’s door one more time.
Listen to Freakin’ Nerds
At Apple Park, WANT editor-in-chief Mark Hofman celebrates not only Niels van Straaten (AppleDsign) reaching 4 million followers, but also the arrival of Apple Intelligence. Is that something to be excited about, how does it work, and is it actually safe?