Amazon Web Services S3 Outage – An Own Goal or a User Wake-up Call?

The outage of Amazon Web Services’ S3 service in its US-EAST-1 region at the end of February not only had a massive impact on its clients but also sparked a storm on Twitter, in blogs and in the press about the implications of placing all of your data in the S3 cloud model.

These implications were further fuelled by Amazon’s own release of information detailing what went wrong, which clearly pointed to gaps in their operational processes and, by their own admission, a lack of understanding of their own infrastructure. That is a staggering admission, and perhaps one that could only be weathered by a company with the power and money of AWS. Their full explanation can be read here:

https://aws.amazon.com/message/41926/

However you read it, it’s not a great endorsement of a full public cloud model. It highlights that unless you are prepared to spend the same sort of money on resilience as you would in your own private cloud, you need to expect some disruption to your business when these outages occur.

Cameron McKenzie of TechTarget called it a “Fukushima moment for cloud computing”, comparing the Amazon Web Services S3 failure with the meltdown at the Fukushima nuclear plant in Japan and the subsequent decisions by many countries to announce the immediate decommissioning of their nuclear power programmes regardless of the cost.

McKenzie goes on to say: “Before the S3 outage, people invested in Amazon cloud because they were confident in both the technology used and the manner in which it was managed. With the S3 outage, what was once confidence has been replaced with faith – and faith isn’t a compelling attribute when it comes to IT risk assessment.” He’s right: many users of S3 will “hope” Amazon puts this issue right, while crossing their fingers that there aren’t other parts of the service that have been equally poorly managed. The TechTarget article can be found here:

http://www.theserverside.com/opinion/Amazon-S3-outage-a-Fukushima-moment-for-cloud-computing

But before we sling some well-earned mud in Amazon’s direction, let’s also look at why so many clients were affected by a single person typing in a routine instruction intended to carry out what was billed as basic service maintenance.

Could these users have switched their services over to a more reliable region within the AWS network? Of course they could. But one of the key reasons these companies convinced their boards and shareholders to use the Amazon Web Services S3 service is that it’s in the cloud, right? We don’t have to worry about infrastructure ourselves – it’s all taken care of. And it will cost less than a private cloud or running it ourselves.

And Amazon? They’re BIG! They also have data centres split into regions across the globe, so we have access to infrastructure all over the world that we don’t have to buy and manage, and it’s immediately available and on demand – all we need is a credit card.

So, Amazon Web Services S3 – what went wrong?

Ever heard of a silver bullet? Well, many of those who were affected, and had no option other than to wait for Amazon to correct the problem, thought they were getting one of those.

The real issue for the companies using this service is, surprisingly, cost. Yes, putting your data into Amazon Web Services S3 probably attracts a smaller per-MB charge than a smaller MSP would offer, but unless a company puts the right architecture in place on the S3 platform to switch to another region when an outage occurs, it remains exposed – and spinning up that second data instance with replication isn’t going to be cheap.
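For those wondering what that second-region architecture can look like in practice, below is a minimal sketch using S3’s cross-region replication feature via boto3. The bucket names, region and IAM role ARN are hypothetical placeholders, and a real deployment would also need the destination bucket created (with versioning enabled) in another region, the role’s permissions set up, and a failover path for the applications that read the data.

```python
# Minimal sketch: replicate objects out of US-EAST-1 so a second copy exists
# in another region. Bucket names and the IAM role ARN are hypothetical.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

SOURCE_BUCKET = "example-primary-bucket"                                 # hypothetical, in us-east-1
DEST_BUCKET_ARN = "arn:aws:s3:::example-replica-bucket"                  # hypothetical, in another region
REPLICATION_ROLE_ARN = "arn:aws:iam::123456789012:role/s3-replication"   # hypothetical

# Cross-region replication requires versioning on the source bucket
# (and on the destination bucket, which is assumed to exist already).
s3.put_bucket_versioning(
    Bucket=SOURCE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate every object written to the source bucket to the destination bucket.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE_ARN,
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = all objects
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }
        ],
    },
)
```

Replication only covers the data; switching application traffic to the second region during an outage is a separate piece of design work, and both halves add cost, which is exactly the trade-off many of the affected users appear never to have weighed explicitly.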

I for one can’t believe that the users of S3 who bemoan the service and point to outages affecting their core business never assessed the risk of relying on a service that doesn’t have a strong SLA and whose uptime record isn’t amazing; either they consciously accepted that risk in favour of cost, or they are blissfully unaware of it.

Cameron at TechTarget goes on to say that the future of cloud computing has changed now that users realise a full-scale, daylight-hours crash is always a possibility, and that organisations will start bringing more of their systems back into the local data centre – which is pretty radical, if it’s actually happening.

He concludes by saying: “The other big move will be for organisations to leverage cloud bursting technologies while making use of the cloud as their in-house systems approach capacity. But using the cloud exclusively will become a thing of the past. The Amazon Web Service S3 outage was a Fukushima moment for cloud computing, and it will forever taint the way organisations view the cloud.”

I for one hope not, as cloud represents a major leap forward in the way that computing solutions and services can be consumed.

What is clear is that the decision to move to or use cloud services doesn’t remove the responsibility of companies and their management teams to fully understand how the cloud model they want to adopt will actually work, and to satisfy themselves that the cost of the service being proposed covers an on-demand, always-available service that supports the business and its customers – if that’s what the business needs.

This focus on responsibility is something that boards can’t legitimately continue to dodge. The new General Data Protection Regulation (GDPR), which if you’re like me is constantly filling up your inbox, comes into force next year and makes it clear that no one is beyond the reach of the courts if data security and residency are not protected.

Why does this matter?

Well, the defence of “sorry, M’lord, our data storage system had a big failure and we lost all the records you’re looking for” will not cut the mustard, as they say in my local, nor will the fact that you were using one of the big players such as Amazon Web Services’ S3 service earn you any credit.

This further complicates the cloud storage market and will certainly begin to put pressure on companies to make the right commercial decisions about what data they place in public clouds and what they retain in their own controlled private or on-premise systems. It will also make IT managers think about the contingency of data access when selecting where a company’s information and systems reside, and may well mean that the financial case for moving, or not moving, to a public cloud is assessed differently once the resilience of the platform is taken into account.

The blame game can get people out of the line of fire in the heat of the moment, but the longer-term damage of not designing and architecting a system with the safeguards your business requires will always come home to roost in the end.

My advice: if you’re not sure, talk to an expert.

Houston, we have a cyber problem!

I’m a big space fan, and in my era man went to the moon on Apollo – but as the film Apollo 13 reveals, only a couple of moon-shots after Buzz and Neil kicked the dirt around 247,000 miles from home, Americans were already bored of the coverage.

Maybe this is a straw in the wind for the way that important events and information about internet security are played to the masses. A BBC News article this week, which coincided with the chancellor Philip Hammond’s speech on cyber security at the Microsoft Decoded event in London, highlighted that relentless cybersecurity warnings have given people “security fatigue”, and this is making people even more complacent than before about their role in keeping their own or their company’s information safe.

The report cites a US National Institute of Standards and Technology (NIST) survey that suggests many respondents from a wide range of social and economic backgrounds and ages ignored warnings they received and were “worn out” by software updates and by the number of passwords they had to remember.

However, users frustrated by the extra security steps they have to go through to get at “their stuff” in online bank accounts or on other websites should note that fraudulent use of accounts is increasing as we expand how we access this data. Switching off from the warnings is just not an option.

Barclaycard in the UK has just taken a different tack, reporting on the user’s monthly statement that it has run fraud checks against the account and giving a thumbs-up that things are looking OK. I think this is a great idea because it focuses attention on cybercrime at the point where we are concentrating on a specific element of our personal data – the credit card statement in this case – which in turn encourages us to stop and check that we believe everything is OK too.

The challenge has to be to ensure people don’t “tune out” from security because of the barriers security measures put up when we are simply trying to access information. This is highlighted by statistics that show how ingrained both the problem and the solution are in today’s cyber landscape:

The average Briton has over 20 separate passwords and typically accesses at least four separate websites with the same credentials (Source: NCSC).

One million new malware variants are being created each day. One in 113 emails contains malware (Source: Symantec Security Insights report).

So Philip Hammond’s recent pledge to spend £1.9bn on cyber security is a good thing; the trick is understanding where he is going to spend it.

Certainly, a proportion of it is going to be spent on “awareness” campaigns – but will these be just like the ones in the US that have now built up the complacency that is as dangerous as ignorance?

He will also have to address the need for the SME sector to invest more in cyber security, as today there is still an incredible reluctance at CxO level to spend anything like the right amount of money to set an organisation on a path that will deliver the highest levels of protection available.

Many still don’t get that a layered approach delivers the best results, where process, control, monitoring, software and the right oversight are mixed with technical capabilities to defeat the widening threat surfaces presented to an organisation.

The internet has become the petri dish for cyber-crime, and even the inventor of the web, Tim Berners-Lee, warns that although it is instant and powerful, that power is being turned against the establishment and individuals to break into public and private data on an unprecedented scale. Berners-Lee goes on to say that the “warfare on data” at the user level is being waged on us through our own devices, such as those that control our heating, the fridge and security cams, and that securing these is a top priority.

He urges people not just to take their webcams out of the box and start using them, but to change the default password as soon as possible, before the devices are hijacked by an automated bot and their power and connectivity are assembled into a botnet capable of bringing down some of the highest-profile internet-facing companies – all without the owner noticing unless they check.

Many of the disbelievers in the need for strong cyber security suggest they can’t possibly be expected to protect themselves or their companies when big organisations are hacked on a regular basis. The truth is that many of these organisations are spending money but are complacent too, and leave enough chinks in the armour to allow an attack to be mounted. Just appointing someone as CISO (Chief Information Security Officer) doesn’t fix problems that are sometimes deep-rooted; it is akin to appointing an office first aider who has had no medical training.

Mr Hammond may well be organising our cyber-crime stance with the National Crime Agency and GCHQ at the forefront of this battle, with the means to strike back at the attackers, which will in the minister’s words “make Britain a safer place to do business in” – but it won’t help companies in the UK who think that Mr Hammond and his forces will solve the problem at their level.

Planning, vigilance and careful monitoring of the equipment that generates, processes and stores data is an ongoing task, and it needs an evolving plan that mirrors an organisation’s evolving use of data. Individuals can protect themselves by doing the same thing companies need to do: assessing how they access their data and asking whether that access has changed since the last time they thought about security.

So when was the last time you checked the logs on your dog cam or granny cam? And are you sure you and your devices are not inadvertently part of a cyber criminal’s botnet estate?

Microsoft’s Global Good Cloud – A Brave New World?

What happens when one of the giant vendors in our industry turns visionary? Well, with the release of Microsoft’s “A Cloud for Global Good” policy positioning document, I think we are at the start of finding out.

Microsoft is taking a bold step in setting out its aspirations for how we, as users, might use cloud technology in the future, and in calling out the ground rules for those who seek to supply us with the services that live on these platforms.

It’s a fairly big document and a concentrated read, but if you are remotely interested in where the giant vendors are taking this technology, it’s well worth investing the time. It couches what it believes are the challenges, and the answers to them, in high-level, government-led regulation and compliance, which for me was the only curious part of the whole announcement, because I believe governments should only intervene with legislation if an industry can’t sort things out itself.

If this is Microsoft’s hope, then I think it will be a tall order. If we can’t get all the governments in the world to agree on something as critical as climate change, then it’s going to be a long haul to get agreement on the use of cloud!

Having said that, here are some stand-out parts that, if they can be developed, I feel would improve the prospects and prosperity of everyone who traverses the World Wide Web, be that for work, commerce or socially.

Dealing with data across borders

For UK companies it’s a really important topic, even more so post-Brexit. Microsoft cites research by the McKinsey Global Institute: the contribution of international data flows is expected to rise from 2.8 trillion U.S. dollars to an estimated 11 trillion U.S. dollars by 2025, so dealing with cross-border data traffic is definitely up there as a global requirement.

Microsoft couches the challenge of data across borders as the need to “strike a balance” between the smooth flow of data and the need to protect privacy at all levels. Part of the statement also covers the need to preserve that privacy, focusing on best practices for handling and storing data while maintaining the security around stored data that still eludes some of the biggest cloud providers out there – Yahoo, for example.

But Microsoft also cites old laws, created before the data transfer capabilities we enjoy today, as part of the problem, and ultimately thinks these should be removed. In reality there is a substantial consensus that foreign governments’ access to local, in-country data via legislation such as the Patriot Act should be prevented at all costs.

The recently signed EU–US Privacy Shield puts the onus of security firmly on the company holding the data but allows federal agencies access to that data “following the appropriate oversight”, which I consider muddies the pond even further.

Digital transformation

The Microsoft document likens this era of digital transformation to the invention of the steam engine and its part in the industrial revolution. The dominant feature of this chapter is that it positions analytics, mobility, interconnected sensors and the Internet of Things, along with the other emerging technologies, as a catalyst for humans to look at old problems in new ways, with modelling, genomics, 3-D printing and geolocation providing the new steam engine to envision capabilities that, until now, were impossible to imagine.

However, there is caution in its opening statement: “History tells us that the full impact of an industrial revolution takes years to unfold”, and we are only now beginning to understand the global cost of the rapid and enthusiastic advance to industrialisation in the late 18th and early 19th centuries. The clean-up of the last industrial revolution will carry over into and impact the digital revolution as we all come to terms with the fact that we can’t keep consuming natural resources at the rate we are used to.

But digital transformation in education and health has the opportunity to bring hugely positive results for students all over the world. Check out Jamie Smith’s blog on our site about how technology can change the way education is provided as it echoes what Microsoft discusses in the document. http://www.vissensa.com/digital-road-jamie-smith-guest-blog/

If the leaders of education embrace cloud computing as one of the vehicles for connecting students around the world with first-class teaching resources, the ability to break free from the limits of traditional teaching could provide everyone with access to great educational opportunities.

It’s very telling that, with all the connections Microsoft has into educational organisations throughout the world, it comments that up to now the impact of cloud computing on education has mostly been focused on cost and efficiency.

In healthcare, the expanding use of digital technologies has now reached the point where it is considered an essential component of healthcare policy in the European Union, a key part of the Affordable Care Act in the United States, and a pillar of the World Health Organization’s long-term approach to improving health around the world.

An inclusive cloud

One area of policy singled out by Microsoft as fundamental to the ubiquitous success of the digital economy is ensuring that the benefits are broadly shared and equitably accessible to everyone, everywhere, regardless of location, age, gender, ability or income.

It’s probably one of the most profound statements in the entire read and should be the cornerstone mission statement of every company that wants to provide value to those who traverse the internet.

Microsoft calls this out as an acknowledgment that, in a time of rapid technology innovation, disruption is inevitable, and it is on this point that Microsoft warns the market against developing services that don’t have the ability to encompass all users.

In reality there is a long road ahead before we can hail the success of many of the policies that Microsoft has been bold enough to outline, and we should all congratulate them on starting the journey for us.

Up in the Clouds – The Full Service vs Budget Airline Model

I don’t know about you, but I am not that enamoured with the budget airline concept of providing transport from A to B, where B isn’t as convenient a stopping point as you might have thought and the journey is potentially more expensive than the internet price may have led you to believe.

I suppose the airline’s counter to that is that it’s cheap and it does safely fly you to a destination, and we have probably all used these types of products when we just needed to get somewhere quickly and didn’t care about the hassle or service.

So we’ve bought the ticket and are now the captive audience for the “essential extras” which can be applied: “Did you have any checked-in luggage today?” – “Would you like an allocated seat, a coffee, or perhaps the use of our on-board toilet? No problem, that will be… let’s just call it the price of the ticket again, plus 10%.”

What’s this got to do with cloud? The public cloud revolution marches on unabated and it’s here to stay. How cloud companies reach their revenue goals is highly dependent on the budget airline model, where the ‘get in’ costs are not necessarily the overall cost of the service and, like the budget airlines, you self-select the products you want to consume from the menu – so anything you purchase is down to you.

Of course the public cloud services provide premium versions, like the airlines, where you start to get access to more technical assistance and larger usage limits on certain components, which, when you add up the savings of deploying into a public cloud, can sometimes reveal that the jump wasn’t as cost effective as first thought.

Another top reason for using a budget airline is the ease of booking. It’s also a top reason cited for moving to a public pay-as-you-go service, with its flexibility and portability.

The ability to spin up a service on the public clouds has changed the way IT is seen, from IT developers who now have a limitless bucket of resource to play with, to the line of business managers and directors who see a quicker way of getting innovation through IT into their business.

In the 35 years of my IT career, this revolution in how IT is consumed by the business has occurred at least three times. The first was when punch cards ruled the world and it took too long to write and test the programmes needed to support all parts of the business; the distributed computer was born, and the line of business took its budget and spent it itself – sometimes without the help of traditional IT. Sound familiar? As that concept became outmoded and the opportunity arrived to have a computer on our desk tailored to our specific requirements – hail the PC – the model evolved again.

But like all of the preceding revolutions, users should adopt these innovations with a clear vision of what the pros and cons are. One of the emerging concerns following the migration to a particular cloud service is vendor lock-in, where once the application is running in the cloud it becomes more and more difficult to move it away if you need to.

James Walker, president of the Cloud Ethernet Forum (CEF) told Cloudpro recently: “Because cloud is a relatively immature concept users can find themselves opting for a solution that fulfils a specific function no other services provider can – a common scenario cloud users find themselves in and which is really a form of voluntary lock-in with nobody to blame but yourself if you end up getting addicted to that feature and can’t move away.”

An example of service providers developing their own proprietary toolsets on their cloud platform is Amazon with their Aurora database product, which is pitched against Oracle’s MySQL. There is nothing wrong with either, and the Amazon product is wire-compatible with MySQL using the InnoDB storage engine. But each new feature the provider introduces makes it that little bit harder to move away. Although I singled out an AWS product, Microsoft and Google are implementing many of the same features for the same reasons.
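To make the wire-compatibility point concrete, here is a minimal sketch, assuming a hypothetical Aurora MySQL cluster endpoint and credentials: a standard MySQL client library connects to Aurora exactly as it would to a self-hosted MySQL server, which is why the initial move feels so painless.

```python
# A plain MySQL driver talks to an Aurora (MySQL-compatible) cluster unchanged,
# because Aurora speaks the MySQL wire protocol. The endpoint and credentials
# below are hypothetical placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="mycluster.cluster-abc123.eu-west-1.rds.amazonaws.com",  # hypothetical Aurora endpoint
    user="app_user",                                              # hypothetical credentials
    password="change-me",
    database="orders",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")  # answers like any MySQL server would
print(cur.fetchone())
cur.close()
conn.close()
```

The lock-in doesn’t come from the protocol itself; it creeps in when an application starts leaning on provider-specific features that have no direct equivalent elsewhere.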

AstraZeneca CIO David Smoley told Fortune recently: “Vendor lock-in is a concern, it always is. Today’s leading-edge cloud companies are tomorrow’s dinosaurs.”

Another highly discussed topic is “data gravity”. Data gravity is a tech term meaning that once data is inside a given repository, it is difficult and expensive to move it out. Most public cloud providers levy fees to download data away from the platform and these, like the cost of the coffee and sandwich on the budget airline, are hidden from your buy-in price until you try to do it. Interestingly, the market is now waking up to the issues around lock-in, and there are several articles that highlight the more common pitfalls.
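As a rough illustration of why data gravity bites, here is a small sketch comparing a monthly storage bill with the one-off egress cost of pulling the same data back out. The per-GB rates are assumptions for illustration only, not any provider’s actual pricing.

```python
# Illustrative only: the rates below are placeholder assumptions, not real pricing.
STORAGE_RATE_PER_GB_MONTH = 0.02  # assumed storage price, USD per GB-month
EGRESS_RATE_PER_GB = 0.09         # assumed internet egress price, USD per GB

def monthly_storage_cost(total_gb: float) -> float:
    """Recurring cost of simply holding the data in the platform."""
    return total_gb * STORAGE_RATE_PER_GB_MONTH

def one_off_egress_cost(total_gb: float) -> float:
    """One-time cost of moving all of the data out again."""
    return total_gb * EGRESS_RATE_PER_GB

data_gb = 50_000  # e.g. 50 TB held in the platform
print(f"Storing {data_gb:,} GB: ~${monthly_storage_cost(data_gb):,.0f} per month")
print(f"Moving it all out once: ~${one_off_egress_cost(data_gb):,.0f} in egress fees")
```

On those assumed figures, a single exit would cost several months’ worth of storage fees, which is exactly the kind of hidden extra the buy-in price never mentions.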

One bright spot on the cloud horizon is the surge in the use of containers and Docker as a way of splitting application workloads and sharing them out across multiple cloud providers. It is still in its infancy but could offer a gateway to better portability if you decide that your chosen cloud provider is no longer for you. It’s an important step forward, as Tom Krazit, commenting on the recent Structure event in San Francisco, said: “That means that hybrid cloud customers could use public cloud services only for specific applications or workloads that they know will be easy to transfer to another service provider, or for spikes in demand. And then if something changes and one’s public cloud vendor became annoying, you’d still have your own datacentres to rely upon.”

Whichever way you decide to fly, it’s a good plan to check how easy it is to get to and from your actual destination, not just the airport, and whether the cheaper flight option does what you’re expecting without your having to spend more on the service. Would you like recovery with that, Sir?