Hackers are selling a service that bypasses ChatGPT restrictions on malware
Hackers have devised a way to bypass ChatGPT’s restrictions and are using it to sell services that allow people to create malware and phishing emails, researchers said on Wednesday.
ChatGPT is a chatbot that uses artificial intelligence to answer questions and perform tasks in a way that mimics human output. People can use it to draft documents, write basic computer code, and more. The service actively blocks requests to generate potentially illegal content: ask it to write code for stealing data from a hacked device or to craft a phishing email, and it will refuse, replying that such content is “illegal, unethical, and harmful.”
Opening Pandora’s Box
Hackers have found a simple way to bypass those restrictions and are using it to sell illicit services in an underground crime forum, researchers from security firm Check Point Research reported. The technique works by using the ChatGPT application programming interface rather than the web-based interface. ChatGPT makes the API available to developers so they can integrate the AI bot into their own applications. It turns out the API version doesn’t enforce the same restrictions on malicious content.
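The distinction the researchers describe is one of access path: the web interface wraps the model in moderation, while developers can call the model directly over HTTP. A minimal sketch of what such a direct API call looks like is below; the endpoint and payload shape follow OpenAI’s public REST API, but the model name and prompt are illustrative assumptions, not the researchers’ actual traffic.

```python
# Sketch of a direct call to OpenAI's chat completions HTTP endpoint,
# the kind of API access developers use to embed the model in their apps.
# Model name and prompt are illustrative; sending requires a real API key.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble (but do not send) an API request for one user prompt."""
    payload = {
        "model": "gpt-3.5-turbo",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# To actually send it (needs a valid key):
# response = urllib.request.urlopen(build_request("sk-...", "Hello"))
```

Nothing in the request itself distinguishes benign from malicious use; whatever filtering happens must be applied server-side, which is why weaker enforcement on the API path matters.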