Follow tried-and-true steps
The public cloud market is a prime example of elasticity: it changes almost daily, often with the introduction of entirely new services. That is why an organization weighing whether to commit to Microsoft Azure or AWS, both today and in the future, must follow a series of tried-and-true steps.
Since one size does not fit all when it comes to the cloud, the dialogue must be bias-free. Decision-makers need to keep an open mind and ask what will actually be achieved by choosing either platform, and what the goals are. Are they looking to innovate and bring in new business, realize cost savings, or a combination of both?
Whatever the answer, communication between IT and the business units is critical, as is articulating and aligning each and every goal.
It is important to do your research, but don't just read the glossy marketing material. A proof of concept that gives you access to the environment, so you can conduct a more in-depth review of the available features, is more important than ever.
From an Infrastructure-as-a-Service perspective, the two are roughly on par with one another. What starts to differentiate them is the range of Platform-as-a-Service offerings available.
AWS was first to market; it created the space from scratch and continues to innovate as fast as its customers. The challenge is this: how do you stay on top of that innovation curve in order to take advantage of these advances and put them to use?
Help with the navigation
That is a real concern, and a primary reason why a provider such as Aptum can help an organization navigate extremely complex environments where the platform itself is constantly changing, allowing it to take advantage of those features within its own set of applications.
Azure is more popular with enterprises that have an abundance of applications running on Windows and are very familiar with that environment. It is a Microsoft product, after all, and will naturally draw a Microsoft-oriented audience.
When it comes to our managed services on Azure, we take care of the environment -- much as we do in one of our own private clouds -- all the way up to the hypervisor and operating system level. We handle everything: configuration assistance, troubleshooting, active monitoring of all those components, installing antivirus measures, and verifying that backup policies are set up correctly for what you are looking to achieve.
The latter is paramount: if a system administrator makes a mistake, there is a backup to restore from.
Those aspects matter because standard best practices for infrastructure management do not vanish simply because an organization embraces a hyperscale cloud.
Best practices still apply
To that end, a born-in-the-cloud company has likely never run traditional IT initiatives and, as a result, has never been exposed to concepts such as access control, backup, and disaster recovery. If those processes were never established because there was no infrastructure to manage, all of these concepts are foreign.
To get past this hurdle, it needs an advisor and partner that has run its own infrastructure for years and understands every nuance. No organization wants its name published on the latest tech blog because its critical data was left unsecured by an engineer who forgot to set up the access policy on an AWS S3 storage bucket.
There have been numerous examples of critical data leaks that were not hacks at all, but cases of someone forgetting to set up the right security policy, leaving the data completely exposed to the internet.
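To illustrate the kind of guardrail involved, AWS offers a "Block Public Access" setting that can be enabled per bucket via the AWS CLI. The following is a minimal sketch, assuming the CLI is installed and credentials are configured; "example-bucket" is a placeholder name.

```shell
# Sketch only: "example-bucket" is a hypothetical bucket name.
# Enable all four Block Public Access protections on the bucket,
# which rejects public ACLs and public bucket policies.
aws s3api put-public-access-block \
  --bucket example-bucket \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Confirm the configuration that is now in effect.
aws s3api get-public-access-block --bucket example-bucket
```

A managed services partner would typically enforce a setting like this by policy across every bucket, rather than relying on an engineer to remember it for each one.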
It is the same thought process for either AWS or Azure.
Best practices that are part of any large IT organization still apply even when you are going to the cloud. They simply do not and cannot change.