The US-based cyber threat intelligence research team Check Point Research (CPR) found that cybercriminals have been using the artificial intelligence-based tool ChatGPT for malicious purposes. The team described three examples of such misuse:
- Recreating malware strains and techniques described in research publications and write-ups about common malware.
- Creating encryption tools: the tool described in the second forum thread combined different signing, encryption, and decryption functions (a generic, illustrative sketch follows this list).
- Creating dark web marketplaces.
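To make the second example concrete, below is a minimal, benign sketch of what a script combining signing, encryption, and decryption functions can look like, written here in Python with the widely used cryptography library. This is an assumed illustration of the general technique, not the code from the forum thread, which is not reproduced in the CPR write-up cited here.

```python
# Illustrative only: standard use of the "cryptography" library to combine
# symmetric encryption/decryption with a digital signature in one script.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Symmetric encryption and decryption of a payload.
key = Fernet.generate_key()
fernet = Fernet(key)
ciphertext = fernet.encrypt(b"example payload")
assert fernet.decrypt(ciphertext) == b"example payload"

# Signing the ciphertext and verifying the signature.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(ciphertext)
signing_key.public_key().verify(signature, ciphertext)  # raises InvalidSignature if tampered
```

Each of these operations is routine on its own; the concern CPR raises is how easily such building blocks can be assembled into working tools with AI assistance.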
As CPR notes, although the examples given in the report are relatively basic, ‘it is only a matter of time until more sophisticated actors enhance the way they use AI-based tools for bad’.