2023 saw a big focus on innovation, spurred in large part by the attention on generative AI (genAI) and the impact it may have on business in both the short and long term. In 2024, as more organizations embrace rapid experimentation and launch new genAI initiatives (along with their interconnected risks), they will need to balance that speed of innovation with governance and greater accountability.

With this environment as a backdrop, in 2024, Forrester predicts that:

  • At least three data breaches will be publicly blamed on AI-generated code. As developers embrace AI development assistants known as TuringBots to generate code and boost productivity, the most conscientious organizations will scan that code for security flaws. Unfortunately, some overconfident dev teams will trust that AI-generated code is secure. At the same time, many technology leaders wonder about the generated code’s security — which is understandable given the significant API misuse rates in large language models’ responses to Stack Overflow questions. Without proper guardrails around TuringBot-generated code (see the sketch after this list), Forrester predicts that in 2024 at least three data breaches will be publicly blamed on insecure AI-generated code — either due to security flaws in the generated code itself or vulnerabilities in AI-suggested dependencies.
  • An app using ChatGPT will be fined for its handling of PII. Regulators have been busy with genAI for most of 2023, and OpenAI continues to receive regulatory scrutiny in various global regions. For example, in Europe, the European Data Protection Board has launched a task force to coordinate enforcement actions against OpenAI’s ChatGPT. And in the US, the FTC is also investigating OpenAI. While OpenAI has the technical and financial resources to defend itself against these regulators, other third-party apps running on ChatGPT likely do not. In fact, some apps inherit risks from their third-party tech providers but lack the resources and expertise to mitigate them appropriately. In 2024, companies must identify the apps that could increase their risk exposure and double down on third-party risk management.
  • Ninety percent of data breaches will include a human element. Breach publications and industry sources estimate that up to 74% of breaches already include a human element, whether through error, misuse, stolen credentials, or social engineering. Even technically focused industry groups now acknowledge the role of humans in exploiting tech. That percentage will climb even further in 2024 due to the impact of genAI and the prevalence of communication channels that make social engineering attacks simpler and faster. This increase will expose one of the touted silver bullets for mitigating human-driven breaches: security awareness and training. As a result, more CISOs will shift their focus to an adaptive human protection approach in 2024 as NIST updates its guidance on awareness and training and as more human quantification vendors emerge.
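
For teams putting guardrails around TuringBot-generated code, the gist is to treat generated changes like any other untrusted contribution: scan the code itself for flaws and audit the dependencies it suggests before anything merges. Below is a minimal, hypothetical sketch of such a pre-merge check in Python; the directory and requirements-file paths are assumptions, and the open-source scanners bandit and pip-audit stand in for whatever tooling an organization actually uses.

```python
#!/usr/bin/env python3
"""Minimal sketch of a pre-merge guardrail for AI-generated code.

Assumptions (not from the Forrester report): the generated changes live in a
local directory, and the open-source tools `bandit` (Python static analysis)
and `pip-audit` (known-vulnerability check for dependencies) are installed.
"""

import subprocess
import sys

GENERATED_CODE_DIR = "generated/"        # hypothetical path to TuringBot output
REQUIREMENTS_FILE = "requirements.txt"   # hypothetical AI-suggested dependencies


def run(cmd: list[str]) -> int:
    """Run a scanner, echo the command, and return its exit code."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode


def main() -> None:
    failures = 0

    # 1. Static analysis of the generated code itself (security flaws).
    failures += run(["bandit", "-r", GENERATED_CODE_DIR, "-ll"]) != 0

    # 2. Audit AI-suggested dependencies for known vulnerabilities.
    failures += run(["pip-audit", "-r", REQUIREMENTS_FILE]) != 0

    if failures:
        print("Guardrail failed: review findings before merging AI-generated code.")
        sys.exit(1)
    print("Guardrail passed: no findings from the configured scanners.")


if __name__ == "__main__":
    main()
```

A real pipeline would add language-appropriate scanners, policy thresholds, and human review, but even this level of automation targets the two failure modes the prediction describes: flaws in the generated code and vulnerable AI-suggested dependencies.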

Read our full Predictions 2024: Cybersecurity, Risk, And Privacy report to get more detail about each of these predictions, plus two more bonus predictions. Set up a Forrester guidance session to discuss these predictions or plan out your 2024 security, risk, and privacy strategy.

If you aren’t yet a Forrester client, you can download our complimentary Predictions guide, which covers our top predictions for 2024. Get additional complimentary resources, including webinars, on the Predictions 2024 hub.