Plenty of people had fun with ChatGPT when it was released, but I’m not sure any industry had more fun than cybersecurity. Soon after launch, users discovered that ChatGPT could write code, convert code from one programming language to another, and write malware. Sure, the coherent-nonsense problem persisted, but overall it produced solid output.

A recent update now rejects requests for malware, or triggers a safety prompt, when someone attempts to use the API to develop malicious code. Of course, after that was announced, the arms race continued: savvy individuals identified ways to “jailbreak” ChatGPT so that it could continue to enable evil.

If ChatGPT can pass medical exams and portions of the bar exam, it can help attackers and defenders without ever writing a line of code. We’ve compiled some of our thinking below on how it will help people, regardless of their intent.

Attacker Scenarios

  1. Phishing: Expect phishing emails to improve. There’s an art and a science to writing a great phishing email, and ChatGPT adds a third dimension: the programmatic. ChatGPT excels at generating text from a prompted action. Good phishing emails used to require good-enough communication skills, which could challenge adversaries. Now adversaries, and phishing simulation vendors, can use ChatGPT’s text generation to refine and expand their library of emails, composing them at scale and then A/B-testing them the way marketing teams do to find the ones that perform best.
  2. Phishing sites: Plan for impostor websites to fool more users. Between Stable Diffusion and ChatGPT, it’s easier than ever for attackers to create compelling images, logos, and website copy. Think of all the phishing sites adversaries stand up behind the links in those emails; someone had to build them. ChatGPT gives adversaries an avenue to produce websites and website copy that appear more believable to the user. As with phishing emails, generative AI’s image and language capabilities lower the barrier to entry and raise the quality ceiling for adversaries.
  3. API attacks: While we haven’t seen specific tests of this yet, one interesting avenue is asking ChatGPT to review API documentation and suss out potential avenues of attack. API attacks are distinctive in that they often rely on unauthorized or unintended use of legitimate functionality rather than on malware. Using ChatGPT to craft queries, aggregate information about an API, and potentially assist in building valid API requests that can be used maliciously becomes an enticing avenue of exploitation.
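The selection step in the phishing scenario above, choosing the best-performing email variant from campaign results, is ordinary A/B-test logic. A minimal sketch, where the variant names and click counts are hypothetical:

```python
# Minimal sketch of the A/B-test selection loop a phishing-simulation
# vendor might run over generated email variants. Variant names and
# click data are illustrative assumptions, not real campaign numbers.

def best_variant(results):
    """Return the variant with the highest click-through rate.

    results maps variant name -> (clicks, emails_sent).
    """
    return max(results, key=lambda v: results[v][0] / results[v][1])

campaign = {
    "variant_a": (12, 500),   # 2.4% click rate
    "variant_b": (41, 500),   # 8.2% click rate
    "variant_c": (27, 500),   # 5.4% click rate
}

print(best_variant(campaign))  # -> variant_b
```

The model generates the variants; the selection itself is a one-liner. That asymmetry is the point: the hard part of the workflow is now cheap.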

Security Practitioner Scenarios

Those are three of our favorite attacker scenarios, but let’s talk about a couple of easy ways that ChatGPT can help enterprise defenders scale:

Reporting: One painstaking task for penetration testers, incident responders, and security operations center (SOC) analysts is compiling reports on tests, attacks, and incidents. ChatGPT could dramatically reduce the time those reports take to produce. Turning them around faster means more time for everything else: testing, assessing, investigating, and responding, all of which helps security teams scale. That this could happen in real time, even mid-investigation, makes it more compelling still.
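One way this could work is to assemble the structured facts of an incident into a prompt and let the model draft the narrative. A sketch, where the field names and report sections are our own illustrative assumptions rather than any specific product’s schema:

```python
# Hypothetical sketch: turning structured incident data into a prompt
# for a large language model to draft an incident report. The fields,
# section names, and sample incident are illustrative assumptions.

def build_report_prompt(incident):
    findings = "\n".join(f"- {f}" for f in incident["findings"])
    return (
        "Draft a concise incident report with sections for Summary, "
        "Timeline, and Recommended Actions.\n"
        f"Incident type: {incident['type']}\n"
        f"Affected systems: {', '.join(incident['systems'])}\n"
        f"Key findings:\n{findings}"
    )

prompt = build_report_prompt({
    "type": "credential phishing",
    "systems": ["mail gateway", "identity provider"],
    "findings": [
        "3 users submitted credentials to a lookalike domain",
        "MFA blocked all follow-on login attempts",
    ],
})
# The prompt would then be sent to the model via its API; the analyst
# reviews and edits the draft rather than writing it from scratch.
```

The analyst still owns accuracy; the model only removes the blank-page cost.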

Recommendations: A bot that can suggest the next potential response actions, or the optimal one, feels like something that should already exist, but most machine-learning effort has gone toward optimizing detection, not response. While standard-issue ML can power a recommendation engine, ChatGPT creates a situation in which a SOC analyst is guided through the recommended actions along with context on why each is the next best action given the available data. If security orchestration, automation, and response (SOAR) tooling is set up to accelerate the retrieval of artifacts, this could speed detection and response and help SOC analysts make better decisions. That is a huge potential boon to the analyst experience.
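To picture the shape of that “action plus rationale” output, here is a deliberately static sketch. A real system would derive both fields from an LLM fed with SOAR-retrieved artifacts; this hard-coded playbook lookup, with alert types and actions we made up for illustration, only shows what the analyst-facing guidance might look like:

```python
# Illustrative sketch of "next best action plus rationale" guidance a
# SOC assistant might surface. The playbook entries are invented; a
# real system would generate action and context dynamically from an
# LLM plus artifacts retrieved through SOAR integrations.

PLAYBOOK = {
    "phishing": (
        "Quarantine the message and reset exposed credentials",
        "Credential theft is the most common follow-on to phishing",
    ),
    "malware": (
        "Isolate the host from the network",
        "Containment limits lateral movement while analysis continues",
    ),
}

def recommend(alert_type):
    action, why = PLAYBOOK.get(
        alert_type, ("Escalate to a senior analyst", "No playbook entry")
    )
    return f"Recommended action: {action}. Rationale: {why}."

print(recommend("phishing"))
```

The rationale field is what distinguishes this from plain automation: the analyst sees not just what to do next but why, which is where a language model adds value over a lookup table like this one.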