Security Trends in the Cloud
April 19, 2018
Cloud computing has been around for so long now that cloud is basically a household word. Yet, despite how widespread cloud computing has become, continued adoption of the cloud is now being challenged by new types of use cases that people and companies are developing for cloud environments.
In particular, modern cloud applications are creating novel cloud security, data and resiliency challenges. These need to be addressed in order for the cloud to continue to evolve.
Let’s review some of these issues that are pushing the evolution of cloud computing paradigms.
The National Institute of Standards and Technology (NIST) defines cloud computing as a “model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” This model has its roots in five key characteristics that all cloud providers offer as part of their Infrastructure as a Service (IaaS) package: on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service.
In practice, this means that consumers of cloud services can request computing resources (CPU time, network storage, etc.) at any time, from a variety of connected endpoints and platforms, without contacting human support. Cloud providers serve these resources in a pooled way, dynamically adding or dropping physical resources such as servers and virtual resources such as software based on the customer's instantaneous needs, and elastically, usually without an up-front commitment. The final premise of cloud computing is that the use of all of those resources is measured in a specific and traceable way.
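To make the on-demand, API-driven side of this concrete, here is a minimal sketch of provisioning a virtual machine programmatically. It assumes AWS EC2 accessed through the boto3 library; the region, AMI ID and instance type are placeholders chosen only for illustration.

```python
import boto3

# A minimal sketch of on-demand self-service: ask the provider's API for a
# compute resource with no human interaction. Assumes AWS EC2 via boto3;
# the region and AMI ID below are placeholders, not recommendations.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image
    InstanceType="t3.micro",           # small, elastic capacity
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned instance {instance_id}")

# Usage is metered while the instance runs, and the resource can be
# released just as easily when it is no longer needed:
# ec2.terminate_instances(InstanceIds=[instance_id])
```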
Building on this value proposition, cloud computing should also include Platform as a Service (PaaS), with platform engineers supporting the complete lifecycle required to build a cloud-ready service or application. This implies that not only the basic hardware and software components and their interactions are included (servers plus operating systems and network interfaces) but also tool sets such as Application Programming Interfaces (APIs) to automate interaction with the cloud, high-availability schemes, interoperability mechanisms with the customer's existing infrastructure, and the development tools, testing practices and deployment schemes required to iterate on a software solution.
IaaS and PaaS cloud computing solutions opened up an amazing opportunity for people and companies: the Software as a Service (SaaS) business model. With it, companies and developers were suddenly able to concentrate on the value delivered to customers, delegating infrastructure and platform administration to cloud providers. But the market has evolved again, and more sophisticated tools and products bring more interesting challenges that need to be tackled by those who want to use the full potential of the cloud.
Cloud Security
Cloud security threats are the main concern today. Malicious or unintended actions can cause damage at many levels of a company. This is true in cloud and non-cloud environments alike, but as the sophistication of applications and services increases, so do the security risks. The attack surface of cloud services is larger than that of traditional service models, because the components involved expose many endpoints and different protocols that must be managed in different ways. A variety of approaches is required to identify and address both known and new threats.
Distributed Denial-of-Service (DDoS) attacks are just one common threat to security in cloud computing. Providers usually offer several architectural options for configuring a safe environment against these kinds of threats, such as traffic isolation (the ability to isolate groups of virtual machines in separate virtual networks) or access control lists with rules that define a component's permissions at several levels of granularity. Researchers are now also proposing automated systems to detect intrusions in cloud deployments.
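As an illustration of the access-control side, the sketch below adds a narrow ingress rule to a security group so that only one trusted address range can reach a service port. It assumes AWS EC2 security groups managed through boto3; the group ID, port and CIDR range are hypothetical.

```python
import boto3

# A sketch of an access control rule with fine granularity: allow HTTPS
# only from a trusted network range, leaving every other port closed.
# Assumes AWS EC2 security groups via boto3; IDs and CIDRs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",    # placeholder security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {
                    "CidrIp": "203.0.113.0/24",  # example trusted range
                    "Description": "HTTPS from trusted network only",
                }
            ],
        }
    ],
)
```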
Cloud Data Management
The second interesting challenge to traditional cloud paradigms is data management. Not so long ago, data used to be thought of as a homogeneous entity that could be gathered and archived in silos, usually relational databases (groups of tables that hold part of the data and link each row to rows in other tables).
But a whole new set of needs appeared with the SaaS business model. Now we have document databases, big table databases, and graph and columnar databases for an almost innumerable set of use cases. This new reality, where data can drive the way a company measures its future opportunities, imposes a number of challenges. For example, data consistency must be maintained, and data typically needs to be aggregated, extended, transformed and analyzed in several contexts.
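A small, database-agnostic sketch of that last point: the same records often have to be reshaped from a document-style form into an aggregate that an analytics context can consume. The record layout and field names below are invented purely for illustration.

```python
from collections import defaultdict

# Document-style records, as a document database might return them.
# The schema and field names are hypothetical, for illustration only.
events = [
    {"user": "alice", "action": "login",    "region": "eu", "bytes": 120},
    {"user": "bob",   "action": "download", "region": "us", "bytes": 4096},
    {"user": "alice", "action": "download", "region": "eu", "bytes": 2048},
]

# Transform and aggregate: total bytes per region, a shape that a
# reporting or analytics context could consume directly.
bytes_per_region = defaultdict(int)
for event in events:
    bytes_per_region[event["region"]] += event["bytes"]

print(dict(bytes_per_region))  # {'eu': 2168, 'us': 4096}
```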
One aspect of data management that has to be addressed is the consistency of the data, meaning that all users of a cloud application should see the same data at the same time. This is not a trivial task: the underlying hardware and software resources are pooled, and the physical constraints of networks, the transactional status of operations and even the client's connection can all have an impact on consistency. A good analysis of this issue was presented recently by Rick Hoskinson. Unifying the way that time is measured across millions of simultaneous clients, regardless of their hardware or connection, is an incredible engineering effort.
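One common way to sidestep unreliable wall clocks is to order events with logical clocks rather than physical time. The sketch below shows a simple Lamport clock purely to illustrate the idea; it is not tied to any particular cloud service or to the analysis cited above.

```python
class LamportClock:
    """A logical clock: orders events without trusting each client's wall clock."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # A local event happened; advance the clock.
        self.time += 1
        return self.time

    def receive(self, remote_time):
        # On receiving a message, jump ahead of the sender's timestamp so the
        # causal order "send happens before receive" is preserved.
        self.time = max(self.time, remote_time) + 1
        return self.time


# Two replicas with skewed wall clocks still agree on the order of events.
a, b = LamportClock(), LamportClock()
t_send = a.tick()           # replica A writes and stamps the update
t_recv = b.receive(t_send)  # replica B applies it with a later logical time
assert t_recv > t_send
```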
Resiliency and Cloud Availability
No system is free of failures. No matter how good the process or the measures taken to address the risks, we live in an uncertain world. Resiliency is the ability to handle failures gracefully and recover the whole system. This is a huge challenge for services and applications whose components compete for resources, depend on other internal or external components or services that can fail, or rely on defective software. Planning how those failures will be detected, logged, fixed and recovered from involves not only developers but every team that takes part in a cloud strategy. To help, tools are available to simulate random failures, from hardware issues to massive external attacks, including failed deployments and unusual software behavior.
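As a small example of building for failure, the sketch below wraps a call to a flaky dependency in a retry loop with exponential backoff and injects random failures in the spirit of chaos-testing tools. The dependency, failure rate and retry parameters are all made up for the example.

```python
import random
import time


def flaky_dependency():
    # Stand-in for an internal or external service; fails randomly to
    # simulate the kinds of faults chaos-testing tools inject.
    if random.random() < 0.5:
        raise ConnectionError("simulated failure")
    return "ok"


def call_with_retries(func, attempts=5, base_delay=0.1):
    # Retry with exponential backoff, logging each failure so it can be
    # detected and analyzed later.
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except ConnectionError as err:
            print(f"attempt {attempt} failed: {err}")
            if attempt == attempts:
                raise  # give up and surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))


print(call_with_retries(flaky_dependency))
```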
Several basic techniques are used to increase the resiliency of a cloud system, and the most fundamental of them is replication.
Note, too, that there are many different levels of fault tolerance in cloud computing, and the balance between resources, cost and the acceptable level of resilience determines the best achievable result. A scenario with multiple machines used as replicas in the same cluster assumes that the cluster itself is an acceptable single point of failure. A more reliable approach is cluster replication within the same data center, which separates the replicas into independent clusters; with this configuration, the data center is the single point of failure. A more complicated but more reliable scenario is replicating systems across different data centers, so that even during large outages the resilience of the system can be maintained.
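The trade-off between these scenarios comes down to the choice of failure domain. The helper below is a generic sketch, not tied to any provider, that spreads a given number of replicas round-robin across whichever failure domains are available (clusters, data centers or regions).

```python
def place_replicas(replica_count, failure_domains):
    """Spread replicas round-robin across failure domains (clusters,
    data centers or regions), so losing one domain never loses all copies."""
    if not failure_domains:
        raise ValueError("at least one failure domain is required")
    return [failure_domains[i % len(failure_domains)]
            for i in range(replica_count)]


# Same cluster: the cluster itself is the single point of failure.
print(place_replicas(3, ["cluster-a"]))
# Separate data centers: survives the loss of an entire site.
print(place_replicas(3, ["dc-east", "dc-west", "dc-central"]))
```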
Cloud computing is here to stay, but be careful: traditional approaches might not be enough to address the challenges that modern cloud workloads present. Cloud security, data management and cloud availability are particular challenges that must be taken into account as part of a modern cloud strategy.