Andrew Jaquith

Today, I received an e-mail from eWeek's Brian Prince asking how Google might make good on its promise to make the upcoming Chrome OS more secure than the operating systems we know and love. Here's my long reply to him, lightly edited.

Google is starting from a clean sheet of paper, so they have enormous freedom to design the OS the way they want. From a security perspective, their options range from the evolutionary to the radical.
 
On the more evolutionary side of things, Google could choose to build an OS that looks and acts a lot like today's operating systems, with a windowing system, local file storage, multi-threaded processes, a Web browser, and locally installed applications written in native code. Windows, GNU/Linux, and OS X are all like that. The difference is that Google would seek to implement some or all of these pieces more securely. For example, they could adopt something like SELinux, a kernel security module that adds mandatory access control to Linux. This would allow the OS to confine processes so that a compromised one can't infect the rest of the system. The browser, too, could be done better. A safe bet is that Chrome OS will include a browser with multi-process tab support (like the Chrome browser), which isolates each tab in its own process and is therefore much harder to compromise wholesale than a single-process browser.
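To make the mandatory access control idea concrete, here is a minimal sketch in C using Linux's seccomp "strict" mode. It's a far blunter instrument than SELinux's policy engine, but it illustrates the same principle: the kernel, not the process, decides what the process may do. This is my own illustration, not anything Google has announced:

```c
/* Minimal sandbox sketch using Linux seccomp "strict" mode.
 * Once enabled, the kernel permits only read(), write(), _exit(),
 * and sigreturn(); any other syscall kills the process.
 * Build: gcc -o sandbox sandbox.c
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
    /* Enter the sandbox. From here on, the kernel enforces the
     * whitelist regardless of what this code tries to do. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    const char msg[] = "running inside the sandbox\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    /* fopen() issues an open syscall, which is not on the
     * whitelist: the kernel kills us with SIGKILL right here,
     * so the line after it never prints. */
    FILE *f = fopen("/etc/passwd", "r");
    (void)f;
    write(STDOUT_FILENO, "never reached\n", 14);
    return 0;
}
```

A compromised process inside such a sandbox can still misbehave within its allowance, but it cannot open files, spawn processes, or talk to the network, which is exactly the "can't infect the rest of the system" property.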
 
At the extreme end of what Google might do is the iPhone model, circa version 1.0 of the OS. That is, the OS is a totally sealed box with no third-party app support. All "apps" are Web apps, and a trusted bootloader verifies the integrity of core OS files at boot time. The first iPhone OS was really like a toaster: there wasn't anything to mess with, so users couldn't get themselves in trouble. Apple later added native app support in iPhone OS 2.0, which required all apps to be digitally signed. Google could do the same if third-party native apps were ever supported.
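For flavor, here is roughly what that kind of install-time gatekeeping looks like in code. This is a hedged sketch of the general technique, not Apple's or Google's actual mechanism: it assumes OpenSSL, and the file names (app.pkg, app.sig, vendor_pub.pem) are hypothetical. The idea is simply to hash the package and refuse installation unless the vendor's signature over that hash checks out:

```c
/* Sketch: verify a package's detached RSA signature before
 * allowing installation. Build: gcc -o verify verify.c -lcrypto
 */
#include <stdio.h>
#include <stdlib.h>
#include <openssl/pem.h>
#include <openssl/rsa.h>
#include <openssl/sha.h>
#include <openssl/objects.h>

/* Read an entire file into memory; returns NULL on failure. */
static unsigned char *slurp(const char *path, long *len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    *len = ftell(f);
    rewind(f);
    unsigned char *buf = malloc(*len);
    if (buf && fread(buf, 1, *len, f) != (size_t)*len) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    return buf;
}

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s app.pkg app.sig vendor_pub.pem\n", argv[0]);
        return 2;
    }

    long pkg_len, sig_len;
    unsigned char *pkg = slurp(argv[1], &pkg_len);
    unsigned char *sig = slurp(argv[2], &sig_len);
    FILE *keyf = fopen(argv[3], "r");
    if (!pkg || !sig || !keyf) {
        fprintf(stderr, "could not read inputs\n");
        return 2;
    }

    /* The vendor's public key would ship inside the OS image. */
    RSA *pub = PEM_read_RSA_PUBKEY(keyf, NULL, NULL, NULL);
    fclose(keyf);
    if (!pub) {
        fprintf(stderr, "bad public key\n");
        return 2;
    }

    /* Hash the package, then check the vendor's signature over
     * the hash. Install only if the signature verifies. */
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(pkg, pkg_len, digest);

    int ok = RSA_verify(NID_sha256, digest, sizeof(digest),
                        sig, (unsigned int)sig_len, pub);
    printf(ok ? "signature OK: install allowed\n"
              : "signature BAD: install refused\n");
    RSA_free(pub);
    return ok ? 0 : 1;
}
```

A real gatekeeper would also pin the signing key and check revocation, but the install-or-refuse decision boils down to this one verification call.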
 
My personal hunch is that Chrome OS will be closer to a toaster. The applications it runs will be primarily, if not exclusively, Web applications like Gmail, Google Docs, and Picasa. That means the primary application Google will need to secure is the Chrome browser itself, something they have already demonstrated they can do. For those cases where the browser needs to run a native code plugin, they will use the Google Native Client (NaCl) APIs. The research I've read on NaCl is quite encouraging; it runs native code, but verifies the code beforehand so that it can't do naughty things, or at least is far less likely to. For an overview of the goals and security posture of NaCl, see this excellent, prescient Matasano post: http://www.matasano.com/log/1674/the-security-implications-of-google-native-client/
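The core NaCl trick is static validation: disassemble untrusted code before it ever runs, and reject anything that isn't provably on the safe list. Real NaCl does this for x86 machine code; the toy below substitutes a made-up two-byte bytecode of my own invention so the whole idea fits on one screen:

```c
/* Toy illustration of NaCl-style static validation: a verifier
 * walks the untrusted code and refuses to run it if any
 * instruction falls outside a whitelist.
 */
#include <stdio.h>
#include <stdint.h>

/* Made-up instruction set: each instruction is two bytes,
 * [opcode][operand]. */
enum { OP_LOAD = 0x01, OP_ADD = 0x02, OP_PRINT = 0x03,
       OP_HALT = 0x04, OP_SYSCALL = 0x7f /* forbidden */ };

/* Return 1 if every instruction is on the whitelist, else 0. */
static int verify(const uint8_t *code, size_t len)
{
    if (len % 2 != 0) return 0;         /* must decode cleanly */
    for (size_t i = 0; i < len; i += 2) {
        switch (code[i]) {
        case OP_LOAD: case OP_ADD: case OP_PRINT: case OP_HALT:
            break;                      /* allowed */
        default:
            fprintf(stderr, "rejected: opcode 0x%02x at %zu\n",
                    code[i], i);
            return 0;                   /* anything else: refuse */
        }
    }
    return 1;
}

/* Execute code that has already passed verification. */
static void run(const uint8_t *code, size_t len)
{
    int acc = 0;
    for (size_t i = 0; i < len; i += 2) {
        switch (code[i]) {
        case OP_LOAD:  acc = code[i + 1];         break;
        case OP_ADD:   acc += code[i + 1];        break;
        case OP_PRINT: printf("acc = %d\n", acc); break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    uint8_t good[] = { OP_LOAD, 40, OP_ADD, 2, OP_PRINT, 0, OP_HALT, 0 };
    uint8_t evil[] = { OP_LOAD, 1, OP_SYSCALL, 0, OP_HALT, 0 };

    if (verify(good, sizeof(good))) run(good, sizeof(good)); /* acc = 42 */
    if (verify(evil, sizeof(evil))) run(evil, sizeof(evil)); /* refused  */
    return 0;
}
```

The hard part NaCl solves, which this toy waves away, is doing reliable disassembly on variable-length x86 instructions so an attacker can't hide a forbidden instruction by jumping into the middle of an allowed one.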
 
Next, if we assume that Google wants everything to live in the cloud, we can probably assume there won't be a user-accessible local file system. That, plus the desire to lock down the operating system, suggests that all user file storage will be in the cloud. As for the core OS, I consider it likely that it will ship with a trusted bootloader that verifies the integrity of the OS at boot time. The trusted bootloader, combined with a MokaFive-style auto-wiper, could ensure that the OS is always "clean" at bootup, and that user files stay separate from the operating system. For those of you with long memories, the idea of a trusted OS isn't new. Indeed, Microsoft Research implemented process isolation and trusted bootloading in its experimental Singularity OS in 2005. What's new is that Google will actually try to commercialize it.
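A trusted bootloader reduces to a simple check: hash the OS image, compare against a known-good value anchored somewhere the attacker can't write, and refuse to boot on a mismatch. Here is a sketch of that check, with the caveat that real verified boot verifies a signature over the digest rather than taking it on the command line, and the file names are mine:

```c
/* Sketch of a trusted-boot integrity check: hash the OS image and
 * refuse to "boot" unless it matches a known-good SHA-256 digest.
 * In a real design the digest (or a signature over it) lives in
 * read-only firmware. Build: gcc -o tboot tboot.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s os_image.bin <expected-sha256-hex>\n",
                argv[0]);
        return 2;
    }

    /* Hash the OS image in chunks so large images stream through. */
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }

    SHA256_CTX ctx;
    SHA256_Init(&ctx);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256_Final(digest, &ctx);

    /* Render the digest as lowercase hex and compare with the
     * trusted value. */
    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", digest[i]);

    if (strcmp(hex, argv[2]) == 0) {
        printf("image verified: booting\n");
        return 0;   /* hand off to the now-trusted kernel */
    }
    printf("image tampered: refusing to boot\n");
    return 1;
}
```

Because the user's files live in the cloud (or on a separate partition), failing this check can safely trigger a MokaFive-style wipe-and-reimage without losing any user data.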

In summary, if I were king, I'd make the "OS" as we know it a lot thinner and more compact, with no support for third-party native applications and with storage in the cloud. Combine that with process isolation, mandatory access control, a trusted bootloader, and a primarily browser-based user interaction model, and you'd have an OS that is indeed a lot more secure.