Building a Secure Cloud

By Cloud Adoption Team | Posted on March 3, 2017 | Posted in Cloud, Operations, Emerging Tech


Better safe than sorry.

It's something your mom used to say, and when it comes to cloud infrastructure, she was totally right. If you've recently migrated applications or services from an on-premises datacenter to a public cloud or hybrid-cloud infrastructure, or are thinking about doing so, cybersecurity should be top of mind. Regardless of your chosen cloud solution, there are several actions you can take to secure the data flowing through your infrastructure. Amazon Web Services (AWS) will be used to illustrate examples, but these patterns apply to most public clouds - only the names of the individual product offerings will change. At a minimum, five practices should be followed in every cloud environment handling sensitive data:

  1. Encrypt data at rest and in transit.
  2. Enable multi-factor authentication (MFA).
  3. Use a CDN to help buffer your web services from DDoS attacks.
  4. Build security and penetration testing into your CI/CD pipeline.
  5. Deploy a web application firewall (WAF) to safeguard your application's network traffic.

In today's world of mass-scale identity theft and instant electronic transactions, an Internet user's personal data has become valuable currency among hackers. A number of regulations and standards govern the securing of these types of data, including HIPAA for healthcare data and PCI DSS for payment card transactions. These specify that personally identifiable information must be encrypted both at rest in your infrastructure and while in network transport. Securing data at rest covers file storage (AWS's S3) and databases, which can be encrypted either at the volume level or through the database's own encryption features (as in MySQL). If your storage mechanism doesn't directly support encryption, there are platform options such as AWS's Key Management Service (KMS) or its client-side encryption libraries. Securing data in transit can sometimes be challenging - not all services support Secure Sockets Layer (SSL) or its successor, Transport Layer Security (TLS). These protocols encrypt communications between services at the transport layer. Additional options include encryption at the network layer, but that can be expensive and leave you exposed on switches and routers that may not support hardware-based encryption.
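One way to enforce both requirements at the storage layer is a bucket policy that rejects non-compliant requests. The following is a sketch only - the bucket name `example-secure-bucket` is a placeholder, and the first statement denies any request made without TLS while the second denies uploads that don't request KMS server-side encryption:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::example-secure-bucket/*",
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    },
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-secure-bucket/*",
      "Condition": { "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" } }
    }
  ]
}
```

Because both statements are explicit denies, they override any allow elsewhere in the account - clients that forget encryption simply get access denied.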

Another security layer available on most clouds is multi-factor authentication on your storage media, adding protection as shown in the AWS S3 bucket policy below (adapted from the S3 service documentation):
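This sketch follows the pattern in the S3 documentation: access to a sensitive prefix is denied unless the request was authenticated with MFA (the bucket and prefix names are placeholders). The `Null` condition on `aws:MultiFactorAuthAge` is true only when the key is absent, i.e. when no MFA was used:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireMFAForSensitiveData",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/taxdocuments/*",
      "Condition": { "Null": { "aws:MultiFactorAuthAge": "true" } }
    }
  ]
}
```

You can tighten this further with a `NumericGreaterThan` condition on `aws:MultiFactorAuthAge` to reject MFA sessions older than a chosen number of seconds.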

Another great way to improve your website's overall performance and ability to handle spikes and surges, while adding a number of safeguards, is a Content Delivery Network (CDN). A CDN pushes your web app content to the edge locations of a public cloud, much closer to your eventual end user. CDNs also serve another purpose: they give you a buffer that can absorb denial-of-service attacks without requiring you to maintain a large bank of servers in the event of said attack. Not too shabby. In the network security realm, you can also specify that the only valid access to your site or application's assets is from the CDN itself, via a dedicated security role. In AWS's CloudFront CDN service, this is accomplished through SSL on dedicated IPs at your edge locations (which can be expensive across AWS's 50+ edge locations!), or, for modern browsers, by sharing a common IP and using Server Name Indication (SNI) in the TLS handshake to route traffic for multiple applications. This gives you a hardened site with elasticity as needed, plus privileged access to your core assets or on-premises application servers. Setting up a CDN isn't as hard as many think. It does, however, take some thought to determine where your traffic will be most concentrated and then strategically stand up services in an efficient manner. The example below, adapted from AWS's CloudFront documentation, shows a setup in AWS's CloudFormation format with a CDN origin at an on-premises (custom) datacenter. Please refer to the AWS::CloudFront::Distribution documentation for a detailed explanation of the properties.
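A minimal sketch of such a template, assuming a custom origin at `www.example.com` (a placeholder for your on-premises datacenter's public hostname):

```json
{
  "Resources": {
    "WebAppDistribution": {
      "Type": "AWS::CloudFront::Distribution",
      "Properties": {
        "DistributionConfig": {
          "Enabled": "true",
          "Comment": "CDN fronting an on-premises (custom) origin",
          "Origins": [
            {
              "Id": "on-prem-origin",
              "DomainName": "www.example.com",
              "CustomOriginConfig": {
                "HTTPPort": "80",
                "HTTPSPort": "443",
                "OriginProtocolPolicy": "https-only"
              }
            }
          ],
          "DefaultCacheBehavior": {
            "TargetOriginId": "on-prem-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": { "QueryString": "false" }
          }
        }
      }
    }
  }
}
```

Note the two protocol policies: `OriginProtocolPolicy` keeps the edge-to-origin leg encrypted, while `ViewerProtocolPolicy` pushes end users onto HTTPS, covering both halves of the in-transit encryption story from earlier.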

With the advent of DevOps and Continuous Integration / Continuous Delivery (CI/CD), it's never been more convenient to surface potential security holes before issues arise in production. One such service that AWS provides is Amazon Inspector. Amazon Inspector offers an Application Program Interface (API), a Command Line Interface (CLI), and a Software Development Kit (SDK), and it integrates with many CI/CD systems such as Jenkins. It's always a good idea to delegate work to a specialist - your mom would be proud.

Some of the benefits of Amazon Inspector include:

  * Quick and easy security assessment of your AWS resources for forensics, troubleshooting, or active auditing.
  * Empowers you to offload the overall security assessment of your infrastructure, allowing you to focus on more complex security concerns.
  * Allows you to gain a deeper understanding of your AWS resources' security footprint.

Amazon Inspector functions by assessing collections of AWS resources for potential security threats and vulnerabilities, comparing them against defined rules packages.

Some of the rules evaluated in the security assessment include:

  * Common Vulnerabilities and Exposures (CVE).
  * Center for Internet Security (CIS) benchmarks.
  * Security best practices.
  * Runtime behavior analysis.
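To give a feel for the CLI integration, a pipeline step might create an assessment template with `aws inspector create-assessment-template --cli-input-json file://template.json`. The input file below is a hypothetical sketch - both ARNs are placeholders, and the actual rules-package ARNs are region-specific values listed in the Inspector documentation:

```json
{
  "assessmentTargetArn": "arn:aws:inspector:us-west-2:111122223333:target/0-EXAMPLE",
  "assessmentTemplateName": "cicd-security-gate",
  "durationInSeconds": 3600,
  "rulesPackageArns": [
    "arn:aws:inspector:us-west-2:111122223333:rulespackage/0-EXAMPLE"
  ]
}
```

A Jenkins job can then kick off the run and fail the build if high-severity findings come back, turning the assessment into a true CI/CD gate.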

Finally, in today's rapidly changing world, precaution is not enough. You need to be proactive and take action when attacks target your infrastructure. One of the modern tools that can assist you in this is a Web Application Firewall (WAF). A WAF is software running on a set of machines behind your application/network load balancers. The software monitors traffic arriving on the ports your elastic load balancers are listening on, and if a large volume of data appears, or traffic is not in the right format, the WAF layer is designed to scale out to absorb the attack and then scale back down once it is over or otherwise mitigated. The most common cloud architecture used to mitigate these attacks is referred to as a "WAF sandwich," since the WAF layer is wedged between two load balancer tiers - one facing the Internet and one facing your application. The following diagram, re-created from the best-practice security design in AWS's June 2015 white paper, shows how a WAF is deployed within an application cloud environment:
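The filtering rules themselves can also be declared as code. The CloudFormation sketch below - assumptions only, with `192.0.2.0/24` as a documentation-range placeholder - defines an AWS WAF web ACL that allows traffic by default but blocks requests from a known-bad IP range:

```json
{
  "Resources": {
    "BadActorIPSet": {
      "Type": "AWS::WAF::IPSet",
      "Properties": {
        "Name": "Known bad source addresses",
        "IPSetDescriptors": [
          { "Type": "IPV4", "Value": "192.0.2.0/24" }
        ]
      }
    },
    "BadActorRule": {
      "Type": "AWS::WAF::Rule",
      "Properties": {
        "Name": "BadActorRule",
        "MetricName": "BadActorRule",
        "Predicates": [
          { "DataId": { "Ref": "BadActorIPSet" }, "Negated": false, "Type": "IPMatch" }
        ]
      }
    },
    "AppWebACL": {
      "Type": "AWS::WAF::WebACL",
      "Properties": {
        "Name": "Application web ACL",
        "MetricName": "AppWebACL",
        "DefaultAction": { "Type": "ALLOW" },
        "Rules": [
          { "Action": { "Type": "BLOCK" }, "Priority": 1, "RuleId": { "Ref": "BadActorRule" } }
        ]
      }
    }
  }
}
```

Keeping the ACL in a template means rule changes go through the same review and CI/CD pipeline as the rest of your infrastructure.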

Today's ever-connected world means that your application environment is open to attack 24/7/365. No solution is going to provide a 100% guarantee against attacks. It's important to understand the tools that protect your cloud environments, as well as their strengths and limitations. Create security best practices that apply to all of your environments, then deploy and test them in an automated fashion to increase the chances that your applications will be prepared for the next cyber-attack. It's also important to know that cloud security is not a one-and-done proposition. As your infrastructure and Internet footprint expand, so do your exposure to attack and your attractiveness as a target. Once the attacks against your cloud become sophisticated, it might be time to invest in a qualified security partner (*cough Redapt cough*) and start using more advanced cloud security tools such as Intrusion Prevention Systems (IPS, via agents on your instances) and Intrusion Detection Systems (IDS, via analytics and monitoring of your network infrastructure). The good news is that taking precautions early on with your application environments is inexpensive and will start building good security practices sure to benefit you throughout all stages of growth. Just remember, the day that your site is attacked is a day too late to start your homework on the proper cloud security policies and guidelines. As per usual, your mom was correct!