UnWired! Rik Farrow, Metasploit, and My iPhone Security Interview
November 20th, 2007
Daniel Eran Dilger
In response to my criticism of Wired’s panicky article claiming that “the iPhone is as insecure as Windows!”, Adam Penenberg of Fast Company asked me some questions about iPhone security and smartphones in general. His article presented a very scary-sounding iPhone exploit performed by consultant Rik Farrow, and cited a comment from me at the end. Here’s the rest of the story on iPhone security.
Kim Zetter took exception with my criticism of her Wired article, and unfortunately has since repeatedly portrayed my comments as a personal attack against her. They were not.
I did award Zetter a Zoon Award for the article, but the emphasis was clearly upon the details and context Wired left out and the faulty supporting evidence left in. The Wired article conveyed a false impression that the iPhone is unique in being inherently insecure because of its software architecture. This is simply not true, as I detailed using real world examples.
Zetter blamed the sensationalist tone and the specious comparisons to Windows in the article on her publisher Kevin Poulsen, who subsequently launched an attack on my name that shoehorned in heated references to religion and Ron Paul. That nonsense was taken apart in a followup article, which also clarified exactly what Wired was getting wrong.
Fast Company: Hacking the iPhone.
In an article entitled “Hacking the iPhone,” Fast Company’s Penenberg examined iPhone security issues under the teaser line “Just how vulnerable is your iPhone if someone wants to intercept your email or record your conversations? Pretty vulnerable.”
Penenberg presented claims by H D Moore of BreakingPoint Systems that the iPhone’s architecture made it possible to “intercept a target’s voice mail and e-mail, hijack its Safari browser, and even surreptitiously record conversations, all without the owner’s knowledge,” and contrasted this with Apple’s advertising about Macs being more secure than Windows.
After Penenberg published his article, Zetter pointed to it as proof that I was wrong all along and that iPhones were indeed vulnerable to spy attacks. It didn’t even matter that such a fantasy was impractical or financially senseless, because people like US vice president Dick Cheney have an iPhone, and there’s no doubt such high profile iPhones should concern the free world, considering how easy it is to hack them.
What Wasn’t Said.
Penenberg consulted with Rik Farrow to demonstrate the Metasploit tool. Farrow noted that “Physical access to an iPhone is not required,” but what he didn’t say in the article is even more interesting.
First, the exploit he demonstrated, a flaw in libtiff (an open source software library used to render TIFF graphics), isn’t unique to the iPhone. The same flaw also allowed users of the Sony PlayStation Portable to break into that device.
The result? Not a plague of malware and viruses for the PSP, but rather a selection of homebrew games. Sony has since patched the flaw, which had the side effect of killing third party game potential. That is one of the points I had outlined: security is the opposite of convenience. Security is also an enemy of openness, but we like having things secured from potential malware.
The threat of too much security is a total lack of freedom. Bill Gates’ vision for Palladium was a new generation of locked down PC hardware that would only run software approved by Microsoft. Fortunately, the rest of the industry refused to support the idea, and it was eventually dropped along with other features of Windows Vista.
Frankly, I’m Concerned!
The second point missing from Farrow’s scary iPhone video comes just after the introduction, where he notes, “when I learned how quickly people were learning how to build and install applications on the iPhone, I became concerned!”
Farrow showed how he could use the Metasploit tool to leverage the libtiff exploit and install malicious software on the iPhone. Remember, that’s the same issue “plaguing” Sony’s PSP with homebrew games. The problem with Farrow’s grave concern is that Apple has already patched the libtiff flaw in the iPhone’s OS X 1.1.2, so his demonstration no longer works.
But there’s something else too: Farrow notes in passing that he has installed SSH (a remote access shell) on the iPhone, which enables him to upload software to it. That means his demonstration only works on an iPhone that has already been hacked to include an attack vector not present on the iPhone you buy from Apple. That vector can only be put there by leading the user to a webpage that enables software installation, and then targeting their phone afterward. It also requires the hole to be unpatched.
In other words, Farrow demonstrated a corner case that was obsolete before he could even publish his concerns. If theoretical iPhone exploits are getting patched before they can be publicized, let alone actually used to develop malicious software, then the statements from all the security experts lined up to describe how dangerous and fearsome the iPhone is must involve a good deal of overblown grandstanding.
Malware is Software Gone Bad.
There’s no viral distribution or worm-like replication going on, just the manual installation of software on a device that requires some degree of expertise to install any software at all. Doing so also requires the existence of exploitable holes that have not been patched.
Farrow isn’t demonstrating the flawed architecture of the iPhone, he’s only showing that software can be malicious, and explaining why Apple isn’t allowing users to install their own software willy-nilly, as Windows users can and do. Rather than explaining what he’s doing in accessible language, Farrow and other experts are sensationalizing the mundane in order to suggest that the iPhone is somehow uniquely dangerous.
There is no magical gate that can reliably filter out malicious software while letting through good software, nor is there a security practice that magically “solves” the effort required to maintain the security of a software platform. Instead, you either have to allow everything and simply try to be careful (as Mac and Windows users must), or allow nothing to be installed and therefore reduce risks dramatically. Apple is working to allow nothing on the iPhone until it can offer a system for securely installing third party apps via iTunes.
In the case of an exploit like the now patched libtiff, users could gain access to install their own software. This is a good thing for people who want third party software, and potentially a bad thing for users who don’t want to incur any risk of downloading something malicious. Interestingly, while there were a variety of third party apps designed for the iPhone, the only “malware” affecting the iPhone was hypothesized by researchers applying useful software tools to perform potentially undesirable things.
Can you imagine panicked “researchers” publishing claims that PC users were at risk of losing all their data if they were to delete their files? Even more shocking, none of these experts will ever admit that the iPhone is inherently the most secure of any smartphone, simply because there is no easy way to install suspicious software that could perform potentially undesirable tasks.
The situation also represents a no-win scenario for Apple. On one hand, the company has homebrew software advocates imagining a world of useful mobile freeware with no impact on the iPhone’s usability; on the other, security experts who treat any possibility of threat as an actual problem. There is no way to make both sides happy, but in the current circumstances, both really should be content with the status quo, were they not so intent on getting airtime for their imagined grievances.
The Penenberg Letters.
In writing his article, Penenberg asked me some background questions about realistic mobile security. His article included a quote noting that attacks on the iPhone lack a compelling business model. Here’s the other information I presented in response to his questions:
1. How do you want to be referred to in the article? Researcher? If so, what affiliation, etc.? I see you write about Apple a lot. Can I assume you are not affiliated with Apple in any way? (I have to ask.)
I’m a technical consultant and writer. I’m a contributing editor to AppleInsider, which Apple occasionally sues. I’ve never worked for Apple, but a lot of my clients use Macs. I also have significant experience in MIS/IT in Windows and Solaris shops in medium to large enterprises in the private sector, in Internet startups, and in large university and city projects here in SF.
2. Let’s place the security threat in context. Are all smart phones vulnerable to hacking–ie security vulnerabilities? (I would imagine anything running Msft code would be vulnerable.) Are some smart phones better protected than others? Are regular old cell phones vulnerable? If all smart phones are vulnerable, why hasn’t someone come up with a fix? What makes a fix so difficult?
That’s a good question. Any phone that can run software is vulnerable to malicious software. The reason we don’t have outbreaks of malware on phones to the degree we do on PCs is that there is little business model for it: no practical way to roll out spambots or popup advertising schemes. Part of the reason is that there is no common platform at all; it’s hard enough to design a small Java game that will reliably run on a variety of phones, let alone devise a way to roll out viral adware, particularly since it’s tough to install software on phones, intentionally or nefariously.
So everything is vulnerable in theory, but there are few actual exploits happening because a) there’s no money in it, b) it’s not as easy as infecting PCs, and c) infections could be easily cleaned up, because they’d only affect a specific group of phones due to the lack of portability. Some Symbian phones were infected with a Bluetooth virus that could spread itself. The infection was mostly a proof of concept design, but the result was an immediate fix that shut down that entire angle of attack.
By comparison, Microsoft allowed spyware and adware to get out of control on Windows PCs because it had a financial interest in data mining itself. It bundled Alexa software on Windows starting with the first versions of IE, and was in talks to acquire Claria, the maker of the notorious Gator spyware. Microsoft didn’t act to stop the problems when they were beginning to take off, because it hoped to own the market. Once the problem became entrenched, it was much more difficult to attack.
With the iPhone, Apple faces the opposite circumstances: it wants to market its own product in a tightly controlled way, not use software to data mine its customers. That gives Apple a financial motivation to stop any outbreaks before they become serious.
The problem is that security is the opposite of convenience. It is very likely that when Apple launches its SDK (software development kit) for the iPhone, planned for February, third party software will only be made available through a secure downloading mechanism in iTunes, just like today’s iPod games, and that Apple will keep the web available as its more open API for shareware developers who don’t want to deal through Apple. This will allow the company to tightly manage software and prevent malware from gaining any business model or distribution mechanism.
PCs, by comparison, can download software from the web without the user even being aware that software is being installed.
3. Have you heard of any substantiated smartphone hacks? Not the issuance of vulnerabilities, etc. but real hacks where a hacker or other third party was able to take control of someone else’s phone? Or is this science fiction?
The value of “taking over” a smartphone isn’t very great. Taking over a PC allows you to install spam distribution servers that shoot out ads. Malware on a mobile phone is more likely to just crash the phone. There’s no real business model behind the kind of spy surveillance imagined by many writers. As the technology develops to allow video conferencing and similar features, those will need to be secured to ensure they can’t be intercepted or spoofed, but today’s smartphones simply don’t offer much of an alluring target. If somebody wants the data on your phone and has tens of thousands of dollars at their disposal, there are far more cost effective ways to get it, including mugging you, breaking into your house, or using surveillance to spy on your PC, WiFi, or Internet traffic. Mounting a development effort to infect your smartphone with spy bugs is a silly thing to worry about, not because it’s impossible, but because it’s grossly impractical.
4. As you read in Kim Zetter’s piece, some say that because iPhone apps run at root, that implies if you exploit one app, you can exploit them all (correct me if I’m wrong). So is the iPhone vulnerable via HD Moore’s Metasploit program/techniques? Why do you think Apple would have the apps run as root (assuming this info is correct)? Is there a good reason for it that you can think of?
The main idea of multiuser systems with levels of permissions is to segregate untrusted users from trusted ones. In a work environment, you might share a system with other employees and want to keep logins secured from each other, preventing low level employees from accessing financial records, for example. As you scale the idea down to smaller businesses, there is less compelling need to keep users segregated. In a small partnership, you’d put more focus on sharing information than on preventing it.
The scaled down version of OS X running on the iPhone appears to use the same Unix security model as the full version, it’s just that Apple didn’t set up a complex system of multiple users. The iPhone only ever has one user account, and a password is optional (and somewhat inconvenient) to use. You typically “login” by simply swiping the screen. The user supplies much of the security by controlling access to the phone. If you were worried about misplacing your phone, you could set up a password for additional security, but the fact that Apple doesn’t make this mandatory is an example of a security vs convenience tradeoff.
In the same way, a system is certainly theoretically more secure if internal processes are all running with limited user accounts of their own. However, deploying this on the iPhone would not necessarily solve any real security problems. Properly architected systems that isolate root permissions can still be attacked, often by exploiting flaws in the security model that allow the attacker to bypass the model or simply give themselves root permission. Additionally, many functions need root permission. On a PC, you frequently give applications root permissions when installing them, in order to copy files to system locations. As long as root permissions can be gained, they will be taken.
That means the breathless posturing about the iPhone “running all apps as root” is taken out of context and isn’t a useful basis for spreading panic. The Wired article kept repeating the idea to associate it with Windows, but the problems it identified on Windows weren’t examples of root escalation attacks; they were user attacks. Having a more complex security architecture wouldn’t really make the iPhone safer in practice. Having it the way it is helps make the system more reliable and faster, which consumers care about more than theoretical features that don’t impact real world concerns.
5. Does Apple’s upgraded OS mitigate some of these potential security problems? Is the iPhone still vulnerable after the recent patch Apple issued? Was the iPhone ever vulnerable to third party attacks?
The current 1.1.1 software can be “attacked” via the web to allow third party software. This could theoretically also be used to install bad software. Any time you can install good software, you can install malicious software. There is a business model and consumer demand behind third party software; there really isn’t one behind iPhone malware yet. Apple keeps patching these exploits, and people who want third party apps will keep trying to find ways to break in. However, the malware market isn’t poised to take off because the moment anyone develops a spyware/adware trojan, the infected party will only need to reset their phone from iTunes and the infection will be neutralized. There is no easy way to do this on a Windows PC, so infected machines quietly generate spam and money.
6. Have I missed anything? If so, please enlighten me.
The only other thing I’d point out is that the same people who are playing up the iPhone’s supposed security problems are also complaining that you can’t install whatever random software you want on it. Those two things are two sides of the same coin. It’s like saying you don’t want to deal with having to lock the doors on your car, but you also don’t want people sleeping in it or rummaging through your CDs. One is the solution to the other, so complaining about both seems transparently disingenuous.
And one other thing about context: nobody points out that every other smartphone is inherently more insecure than the iPhone because they nearly all offer much more liberal ways to install software. The complaints about the iPhone seem to suspiciously lack any comment about the known open flaws in Palm OS, Windows Mobile, and Symbian, making it sound a lot like Greenpeace reporting that scraping the inside parts of the iPhone might harvest microscopic traces of chemicals banned for use in children’s teething toys.