In our previous post, we discussed how to minimize security risk and data loss by securing the AWS environment. In this installment of our series, we will continue exploring this subject on the server level and discuss some best practices to follow to help strengthen your infrastructure.
A common misconception is that vendors are responsible for managing the security of their cloud environments. Cloud providers will typically implement network monitoring around their physical devices that run the cloud; however, it is usually the customer’s responsibility to implement monitoring in their cloud environment. Virtual cloud instances need to be addressed with the same level of security controls as if they were located on the enterprise’s physical network.
This post will provide an overview of several common methods for adding extra layers of security onto virtual machines running in the cloud.
In May 2014, Amazon made Elastic Block Store (EBS) encryption available to all AWS users. This technology encrypts volumes at rest on the physical machines where they reside. Encrypted EBS volumes can be detached and re-attached to supported EC2 instances, which offers a great deal of flexibility but can also complicate key management for administrators. EBS encryption uses a master encryption key, which is created by default for each EC2 region on the account. While initially convenient, this becomes a security concern because it does not require a different key for each volume: if the master key were compromised, every encrypted EBS volume within that region would be compromised with it. As such, it is a best practice to generate a new encryption key for each volume.
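As a rough sketch of this practice using boto3 (assumed to be installed and configured with credentials; the helper names here are illustrative, not part of any AWS API), the code below creates a dedicated KMS key and then an EBS volume encrypted with that key, rather than relying on the regional default:

```python
def encrypted_volume_params(availability_zone, size_gb, kms_key_id):
    """Build the create_volume request for a volume encrypted with
    its own dedicated KMS key rather than the regional default key."""
    return {
        "AvailabilityZone": availability_zone,
        "Size": size_gb,
        "Encrypted": True,       # encrypt data at rest on the physical host
        "KmsKeyId": kms_key_id,  # per-volume key, not the account default
    }

def create_volume_with_unique_key(availability_zone, size_gb):
    """Create a dedicated KMS key, then an EBS volume encrypted with it."""
    import boto3  # deferred so the pure helper above works without AWS access
    kms = boto3.client("kms")
    ec2 = boto3.client("ec2")
    key = kms.create_key(Description="Dedicated key for a single EBS volume")
    key_id = key["KeyMetadata"]["KeyId"]
    return ec2.create_volume(
        **encrypted_volume_params(availability_zone, size_gb, key_id)
    )
```

Generating one key per volume does add overhead, so pairing this with key tagging and rotation policies is worth considering.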
Furthermore, this type of encryption only affects the EBS volumes, not the default volumes for an EC2 instance. If encryption is a concern, then the default partitions should also be encrypted through the use of encroot or other utilities. Note that AWS does not support tools that require a decryption password to be entered at the console during boot.
The principle of least privilege is frequently mentioned when discussing security best practices. Simply put, users should have access to only the limited set of privileges they require to do their jobs. Amazon's AWS platform provides several tools for enabling this within the cloud environment. In the first part of our series, we discussed the advantages of using the Identity and Access Management (IAM) module to provide administrative capabilities around user access to the AWS console. IAM, however, does not extend to managing specific privileges within your organization's cloud instances.
To control account privileges and access to servers, ensure that SSH keys are configured on a per-user basis and that they only provide access to the appropriate servers. From there, users should be configured within the server’s operating system to only have the level of access necessary for fulfilling their responsibilities. In most cases, there is absolutely no reason to grant all users sudo or root-level privileges.
When users must temporarily elevate their privileges to complete tasks such as updates or deployments, they should follow their organization's process for doing so, allowing for better accountability and monitoring of administrative activities. This is commonly accomplished by giving each user a standard account for everyday work, along with a separate account with elevated privileges.
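As one illustration of scoping elevated access, a sudoers drop-in file can grant a group exactly the commands it needs and nothing more. The group name and command paths below are assumptions for the sake of the example, not a prescription:

```
# /etc/sudoers.d/deploy-team  (illustrative; group and paths are assumptions)
# Members of the "deploy" group may restart the application service and run
# the deployment script -- nothing else -- and sudo logs every invocation.
%deploy ALL=(root) /usr/sbin/service myapp restart, /opt/deploy/release.sh
```

Because sudo records each command, this approach also produces the audit trail that the separate-account model is meant to provide.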
The principle of least privilege goes well beyond the recommendations outlined above. Least privilege should permeate the enterprise in order to secure everyday activities performed by network engineers and other IT personnel.
Like other servers on your network, cloud instances need sufficient logging configured to effectively monitor for suspicious activity. Ideally, these logs should be sent to a central server for aggregation and correlation to aid with the identification and alerting process. Resource and performance monitoring can be accomplished by using Amazon’s CloudWatch service.
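To sketch what CloudWatch-based resource monitoring might look like with boto3 (assumed installed and configured; the alarm name, period, and threshold are illustrative values to tune for your environment), the helper below builds a put_metric_alarm request that fires when average CPU stays high:

```python
def cpu_alarm_params(instance_id, threshold_pct=90.0):
    """Parameters for a CloudWatch alarm that fires when average CPU
    stays above the threshold for two consecutive 5-minute periods."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,            # seconds per evaluation window
        "EvaluationPeriods": 2,   # two consecutive breaches before alarming
        "Threshold": threshold_pct,
        "ComparisonOperator": "GreaterThanThreshold",
    }

def create_cpu_alarm(instance_id, threshold_pct=90.0):
    """Register the alarm with CloudWatch."""
    import boto3  # deferred so the pure helper above works without AWS access
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(**cpu_alarm_params(instance_id, threshold_pct))
```

The same pattern extends to other EC2 metrics such as disk and network I/O, and an AlarmActions entry can route notifications to an SNS topic.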
As far as security monitoring is concerned, EC2 instances should adhere to the same process that in-house servers are required to follow. CloudTrail, another product provided by Amazon, allows an enterprise to monitor all activity that occurs within the AWS console.
The tools outlined above, though, won't be useful unless you log important data. When configuring services and applications to log data, ensure that they report any configuration changes and logins from admin-level users. Establish an accepted baseline of expected behavior for each server and application to identify abnormal spikes or other suspicious trends. It's also important to tune log configurations over time to avoid false positives and unnecessary noise. These logs can be fed to the above-mentioned tools, or to your organization's currently deployed logging and monitoring solution.
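The baseline idea can be sketched in a few lines of Python: record typical per-interval counts (say, admin logins per hour), then flag observations that deviate sharply from that history. The three-standard-deviation threshold here is an assumption to tune per environment:

```python
from statistics import mean, stdev

def is_anomalous(baseline_counts, observed, threshold=3.0):
    """Flag an observed count (e.g. admin logins in the last hour) that
    deviates from the baseline by more than `threshold` standard deviations."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        # A perfectly flat baseline: any change at all is unusual.
        return observed != mu
    return abs(observed - mu) / sigma > threshold
```

For example, against a baseline of 4-6 admin logins per hour, an hour with 30 logins would be flagged while an hour with 6 would not. In practice a SIEM or log-aggregation platform performs this kind of analysis, but the principle is the same.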
As part of your standard hardening process, EC2 instances should be regularly scanned for vulnerabilities. Just inform Amazon and get approval beforehand, so your scans aren't flagged as suspicious activity.
The most important aspect of hardening AWS instances is to treat them as if they were hosted on your company’s physical network. Amazon provides security monitoring for its infrastructure, but their security controls are not tailored for every enterprise’s needs. By following your organization’s specific hardening guidelines, you can ensure that your cloud instances meet your enterprise’s security needs.
Have an AWS question or a story to share? Talk with us on Twitter.
Our recommendations should not be considered comprehensive; rather, they are meant to address common mistakes that system administrators need to avoid when deploying infrastructure to AWS. For a comprehensive list of security best practices, check out this Amazon whitepaper.