Recently I’ve been reading the excellent work by Jamais Cascio and thinking about the concept of "openness." Much of Jamais’ work is focused on geoengineering, but the concept of openness has profound implications for many fields, including computer security.

For those of you who have been following the unfolding story of HBGary Federal and Anonymous, this is the stuff Hollywood movies are made of. In fact, I don’t think a scriptwriter could have penned it any better than the real-life version. If you haven’t been following the minute details of this story, this Tech Herald article is an excellent read on how the whole thing started.

A condensed version of the events is as follows:

  1. A week before RSA 2011, the CEO of HBGary Federal, Aaron Barr, said in a Financial Times interview that his firm had infiltrated the well-known Internet hacktivism group Anonymous and discovered the identities of its high-level operatives, and that he planned to discuss his findings publicly at the RSA conference.
  2. Anonymous responded in force and compromised the entire infrastructure of HBGary and HBGary Federal (HGF). They obtained confidential data, erased files, and defaced both companies’ websites.
  3. Anonymous subsequently released 4 TB of confidential company emails. In the emails disclosed to date, Barr was seen discussing with a major US bank (believed to be Bank of America) the use of HGF’s offensive attack tactics to launch a cyber attack against WikiLeaks. The rumor mill at RSA had it that said US bank was going to pay HBGary $600,000 a month to carry out this attack campaign.

Whoa, what seemed like a classic white hat-vs-black hat story just turned interesting. What’s more interesting is that prior to this whole incident, WikiLeaks had been making noise that they were about to publish data from a major US financial institution. (What? Interesting, you say?) What was also apparently discussed in those emails was that Barr would use, among other techniques, exclusive zero-days for the attack against WikiLeaks. This would make the attack extremely dangerous.

No one came out of this looking pretty. Not only was HBGary, a company that claims malware analysis as its business, unable to properly secure its own infrastructure; it turns out the “victim” was plotting a cyber war itself. HBGary is now claiming that the leaked data was tampered with, implying that the discussion between BofA and Barr isn’t authentic. Anonymous (and other security researchers), meanwhile, say that Barr’s initial research (which you can read here in PDF) was flawed: some of the individuals he identified as members of Anonymous had nothing to do with the group. Anonymous argued that if Barr’s research had been allowed to continue, it could have put innocent people in jail (as Barr was supposedly working with the FBI).

At RSA last week, HBGary was noticeably absent from the conference; their booth instead displayed a sign that read: “A group of aggressive hackers known as 'Anonymous' illegally broke into computer systems and stole proprietary and confidential information from HBGary, Inc. … In addition to the data theft, HBGary individuals have received numerous threats of violence including threats at our tradeshow booth…”  

This event ignited an Internet debate storm: is it ethical for security companies to engage in offensive tactics? Traditionally, security’s role is to defend, not offend. But as modern warfare migrates from physical battlefields to the digital frontier, more and more nation-states and companies engage in offensive campaigns. People with deep security expertise are hot commodities in this game, and it can be an extremely lucrative undertaking. But as you go down this road, is there really a difference between black hats and white hats anymore?

This is where the link to openness (or the lack of it) comes in: As we all know, and the execs at BofA and HGF reinforce, zero-days can be powerful weapons. Exclusive knowledge of zero-days gives the possessor incredible power, and in cases such as these, almost always leads to corruption and misuse. It can be argued that we are better off as an industry if openness is employed as a means of elevating collective knowledge and also as a way to enforce checks and balances, so that no one company or individual is significantly more powerful in its knowledge and expertise than others. In such an industry, cyber offense is only a distant possibility, as you will be on a level playing field with your adversaries.

Creating such an open culture for the security community requires a shift in thinking, because this is an industry that thrives on secrecy and obscurity. It requires that we recognize that secrecy, obscurity, and the act of restricting information can ultimately do more harm than good. It requires that we promote open research and build an ecosystem that rewards openness.

How to achieve this open culture is the question on the table. Let’s discuss one specific example of how some form of openness is achieved — a bug bounty program. I was initially skeptical of the merits of such bounty programs, but I have come around. Indeed, I’ve come to realize that economic incentives may be one way we can achieve openness. In a bug bounty program, the researcher is encouraged, through economic incentives, to share his or her findings with the software vendor and ultimately with the entire community.

Economic incentives alone don’t always work, as that is one card the dark side can play as well. Other means, such as increased collaboration and technological transparency, must be explored as well. But the steps we take today to promote an open culture will shape the course of the industry and help determine whether we head toward a scenario of digital apocalypse (as Eddie Schwartz of NetWitness called it on a recent RSA panel) or a more responsible, democratic, and open model for computer security.

Other sources of note: