ComputerWorld columnist Roger Grimes recently blogged about “Security Rule No. 1: Assume You’re Hacked.” Roger, in turn, was reacting to a Forbes magazine article written by Richard Stiennon that made the same point. Both posts describe steps IT security and risk professionals should take, assuming their company computers have already been compromised.
These are well-written articles, and I recommend you read them. Here is Forrester's take on this important issue. In short, I view accepting the inevitability of compromise as the first step in a broader risk management journey. It might seem a little odd to suggest that compromises (risks that have materialized as successfully executed threats) have some relationship to risk management, but allow me to explain.
First, some background. In Roger’s column, he notes that every company he works with these days is compromised. The advice he gives on how to prevent compromise is generally very good:
- Lock down workstations and servers by limiting user privileges and eliminating unneeded applications.
- Monitor network traffic to look for potential intrusions (note: Richard Bejtlich owns this topic).
- Use honeypots to attract prospective hackers into networks and hosts where their activities can be monitored.
- Sprinkle around some “red herring” data to identify leakers of confidential information.
Richard, for his part, advocates a forensics-based approach to identify machines that have already been compromised. In a whitepaper sponsored by Guidance Software, he argues that when IT security takes the position that the enterprise is already infected, it will shift resources to develop skills for more quickly detecting and recovering from infections. This is smart. For what it is worth, I made a similar point two years ago in my SOURCE Boston presentation in 2008. In that presentation, I argued that security is like American football: it has three phases to the game. But instead of offense, defense, and special teams, security is about prevention, detection, and response. IT security departments have been putting too much of their investment in the prevention area, and not enough in the detection and response areas. It is time to rebalance.
Beyond the points that both Roger and Richard made, though, it is worth considering one more. Assume that most companies have been compromised in various ways, and for several years. The important question is: does it matter? After all, the US Government continues to function, as do Google, Intel, Symantec, and other companies that have acknowledged being compromised.
That companies willingly tolerate compromised PCs and servers is not a surprise. No company can afford to spend the amount of time and money that would be required to eliminate every infected host, compromised user account, and insecure web application. But what is missing from the dialogue is a frank acknowledgement from vendors and enterprises alike that perfection is not achievable. What is missing are tools and processes that help quantify the degree and type of compromise. And most important, what is missing is an explicit bargain between management and IT security teams about the level of compromise the enterprise can accept given a specified level of security investment in people, processes and technology. For example, if business users won’t give up administrator access on desktop PCs and laptops, can they also agree that 10% of their PCs will probably be infected at any one time? Is it acceptable to only fund audits and inspections of the biggest third-party suppliers, knowing that risks from smaller ones might slip through the net? These are the kinds of decisions that enterprises make every day; they just don’t always make them consciously.
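To make the bargain concrete, the 10% example above could be tracked as a simple, explicit metric rather than a tacit assumption. Here is a minimal Python sketch of that idea; the function name, host counts, and the 10% threshold are illustrative assumptions on my part, not figures from Roger's or Richard's articles:

```python
# Hypothetical sketch: turn a tacit compromise tolerance into an explicit,
# monitorable agreement between management and IT security. All numbers
# below are illustrative, not real telemetry.

def compromise_budget_status(infected_hosts: int, total_hosts: int,
                             agreed_tolerance: float = 0.10) -> str:
    """Compare the observed infection rate against the tolerance that
    management and IT security explicitly agreed to accept."""
    observed = infected_hosts / total_hosts
    verdict = "within budget" if observed <= agreed_tolerance else "over budget"
    return (f"{verdict}: {observed:.1%} of hosts infected "
            f"(agreed tolerance {agreed_tolerance:.0%})")

print(compromise_budget_status(80, 1000))   # 8% infected: inside the agreed 10%
print(compromise_budget_status(150, 1000))  # 15% infected: triggers a conversation
```

The point is not the arithmetic, which is trivial, but that the threshold is written down and agreed to in advance, so exceeding it triggers a funded response rather than a surprise.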
Making conscious choices about what degree and type of compromises are acceptable is Risk Management, pure and simple. That is the broader issue raised by the articles by Richard and Roger: that it is time to turn the tacit risk management choices enterprises make every day into explicit ones.