Best DevOps

10 ways to protect your data on the AWS platform

Source – cloudcomputing-news.net

One of the worst data breaches in US history recently made the headlines, and it’s a powerful reminder of the importance of protecting data on platforms such as Amazon Web Services (AWS).

The personal details of nearly four out of every five adult Americans, including virtually every registered voter, were recently exposed online thanks to sloppy security practices. The data not only included contact details and birth dates, but also information on the perceived political views of individuals. Media analytics company Deep Root Analytics had stored the data in an AWS S3 bucket and mistakenly left it exposed for two weeks.

Deep Root Analytics failed to use protected access settings, meaning that the data was accessible to anyone who knew, found, or guessed the six-character subdomain name that Amazon uses to identify an individual bucket.

While in this case it was a very simple user error that was to blame, it’s also a reminder of the wider risks of IaaS platforms such as AWS. Amazon has put a great deal of time and money into AWS’s security, but it’s always possible that either a security misconfiguration by an AWS customer or a particularly powerful attack could create a breach.

Organizations that use AWS also need to account for the risk of malicious or mistaken activity by their own staff and users, along with third parties such as business partners or vendors that require some degree of access to data. The average business faces around 11 insider threats every month, whether from someone deliberately trying to compromise security or simply acting negligently. Meanwhile, third parties with access to data are often to blame for security breaches.

Simply leaving AWS security up to Amazon is neither legally nor practically sensible. That's because AWS uses a shared responsibility model for security. Amazon itself takes full responsibility for protecting the cloud itself, including both the software stack and the physical computers, servers and connections. It's also in charge of detecting and blocking any intrusions or fraudulent attempts to gain access.

However, the customer is responsible for managing and configuring everything that happens inside AWS. This includes any applications it runs, access control through AWS's Identity and Access Management (IAM) service, and password protection of the data. The customer organization is also responsible for protecting its own systems and connections to AWS, including any firewall.

How to use AWS infrastructure safely

This isn’t an all-encompassing list, but it covers the main points and lays a good groundwork for a sensible and effective approach.

Enable CloudTrail everywhere you use AWS: This creates comprehensive logs of all user activity in an AWS service and provides an audit trail for compliance purposes. Remember to do this for all services, including global (non-regional) services such as CloudFront. You should also switch on multi-region logging, as this will pick up any activity in regions you don't normally use, which can be a strong sign of a security breach.
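As a minimal sketch, the settings above map onto a handful of flags in CloudTrail's CreateTrail API. The helper below just assembles those request parameters as a plain dictionary (the trail and bucket names are hypothetical examples, and in practice you would pass the result to an AWS SDK or the CLI):

```python
# Sketch: request parameters for creating a trail that logs every region
# and global services, mirroring the CloudTrail CreateTrail API.
# The trail name and bucket name below are hypothetical examples.
def cloudtrail_settings(trail_name: str, log_bucket: str) -> dict:
    return {
        "Name": trail_name,
        "S3BucketName": log_bucket,
        "IsMultiRegionTrail": True,          # capture activity in all regions
        "IncludeGlobalServiceEvents": True,  # cover global services (CloudFront, IAM, ...)
    }

settings = cloudtrail_settings("org-audit-trail", "example-audit-logs")
```

With `IsMultiRegionTrail` enabled, activity in a region you never use still ends up in the logs, which is exactly the anomaly worth alerting on.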

Enable multifactor authentication (MFA) on your root user account: This is absolutely key, as this account can access all your AWS resources. Use a dedicated device for MFA rather than having codes sent to a personal mobile device. That cuts the chances that a lost device or a change in personnel can lead to a breach.
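One way to verify this is to check IAM's credential report, which includes an `mfa_active` column and lists the root account as `<root_account>`. The sketch below parses a report that would normally be fetched via the GetCredentialReport API; the sample CSV is made up and most columns are omitted:

```python
import csv
import io

# Sketch: confirm the root account has MFA enabled by parsing an IAM
# credential report. The SAMPLE_REPORT below is a made-up fragment with
# most of the report's real columns omitted.
def root_mfa_enabled(report_csv: str) -> bool:
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row["user"] == "<root_account>":  # how the report names the root user
            return row["mfa_active"] == "true"
    return False

SAMPLE_REPORT = "user,mfa_active\n<root_account>,true\nalice,false\n"
```

Running this kind of check on a schedule turns "remember to enable MFA" into something you can actually audit.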

Enforce a strict password policy: A good minimum threshold is 14 characters with at least one uppercase letter, one lowercase letter, one number and one symbol. Set passwords to expire after no more than 90 days and don't let staff reuse passwords.
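The complexity rules above are easy to express in code. Here is a small sketch of a checker for that policy (AWS itself enforces this via the IAM account password policy; this stand-alone version is just for illustration):

```python
import string

# Sketch: check a candidate password against the policy described above:
# at least 14 characters, with upper, lower, digit and symbol.
def meets_policy(password: str) -> bool:
    return (len(password) >= 14
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))
```

The expiry and reuse rules can't be checked from a single password; those are settings in IAM's account password policy itself.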

Keep CloudTrail log access as tight and narrow as possible: This will reduce the number of staff who could compromise security by falling prey to a phishing attack or being blackmailed.
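One way to keep that access narrow is a bucket policy on the log bucket that grants read access to a single auditor role and nothing else. The sketch below builds such a statement as a dictionary; the bucket name and role ARN are hypothetical examples:

```python
# Sketch: an S3 bucket-policy statement granting read access to the
# CloudTrail log bucket only to one auditor role. The bucket name and
# role ARN are hypothetical examples.
def log_reader_statement(log_bucket: str, auditor_role_arn: str) -> dict:
    return {
        "Sid": "AuditorsOnlyRead",
        "Effect": "Allow",
        "Principal": {"AWS": auditor_role_arn},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{log_bucket}/*",
    }

stmt = log_reader_statement("example-trail-logs",
                            "arn:aws:iam::111122223333:role/SecurityAuditor")
```

Granting access to a role rather than to named users means the list of people who can read the logs is managed in one place.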

Make sure multifactor authentication is required to delete CloudTrail buckets: This will reduce the chances of a hacker covering their tracks after getting unauthorized access.
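Besides S3's built-in MFA Delete setting, the same effect can be sketched as a bucket-policy statement that denies deletion whenever the request was not MFA-authenticated. The bucket name below is a hypothetical example:

```python
# Sketch: a bucket-policy statement denying deletion of CloudTrail logs
# unless the caller authenticated with MFA. This complements (rather
# than replaces) S3's MFA Delete feature; the bucket name is hypothetical.
def deny_delete_without_mfa(log_bucket: str) -> dict:
    return {
        "Sid": "RequireMFAForDelete",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:DeleteObject", "s3:DeleteBucket"],
        "Resource": [f"arn:aws:s3:::{log_bucket}",
                     f"arn:aws:s3:::{log_bucket}/*"],
        # Deny when the MFA flag is absent or false on the request
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }

stmt = deny_delete_without_mfa("example-trail-logs")
```

An explicit Deny wins over any Allow elsewhere in the policy, which is what makes this effective against an attacker who has compromised ordinary credentials.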

Never use access keys on root accounts: Doing so is simply too big a risk given how much access somebody could gain after compromising an account.

Restrict access for commonly used ports: These can include CIFS, DNS, FTP, MongoDB, MSSQL and SMTP.
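For reference, the default ports behind those service names can be captured in a small lookup, a starting point for building restrictive firewall or security-group rules:

```python
# Well-known default ports for the services named above, as a starting
# point for a restrictive security-group or firewall rule set.
RESTRICTED_PORTS = {
    "CIFS": 445,       # Windows file sharing (SMB over TCP)
    "DNS": 53,
    "FTP": 21,
    "MongoDB": 27017,
    "MSSQL": 1433,
    "SMTP": 25,
}
```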

Set accounts to automatically expire after 90 days without any use: An inactive account brings you no benefits but increases the number of potential points of entry for somebody trying to breach your setup.
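The 90-day rule is a simple date comparison. A minimal sketch, assuming you read each account's last activity (for example, the `password_last_used` field of the IAM credential report):

```python
from datetime import datetime, timedelta

# Sketch: flag an account as stale when its last recorded activity is
# more than 90 days in the past. The timestamp source (for example,
# password_last_used from the IAM credential report) is up to you.
def is_stale(last_used: datetime, now: datetime, max_idle_days: int = 90) -> bool:
    return (now - last_used) > timedelta(days=max_idle_days)
```

Accounts this flags can then be disabled automatically or queued for manual review.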

Turn on access logging for your S3 buckets: This compiles the log data from CloudTrail and makes it much easier to track access requests, authorized and otherwise. If the worst happens, these logs can help with breach investigations.
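Server access logging is configured per bucket. The sketch below assembles the arguments in the shape S3's PutBucketLogging API expects; both bucket names are hypothetical examples:

```python
# Sketch: arguments for enabling server access logging on a bucket,
# mirroring the shape of the S3 PutBucketLogging API. Both bucket names
# are hypothetical examples.
def access_logging_settings(source_bucket: str, log_bucket: str) -> dict:
    return {
        "Bucket": source_bucket,
        "BucketLoggingStatus": {
            "LoggingEnabled": {
                "TargetBucket": log_bucket,
                "TargetPrefix": f"{source_bucket}/",  # keep each bucket's logs separate
            }
        },
    }

settings = access_logging_settings("example-data", "example-access-logs")
```

Writing the logs to a separate, tightly controlled bucket keeps an attacker who gains access to the data bucket from also tampering with its access history.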

Use restricted access on any EC2 security groups: This reduces exposure to less sophisticated attacks such as denial of service, man-in-the-middle or brute force attempts. Make sure access is granted via IAM roles rather than through individual credentials, which are more easily compromised.
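As an illustration, a locked-down ingress rule in the shape EC2's AuthorizeSecurityGroupIngress API expects might admit SSH only from a single office network. The CIDR below is a hypothetical example:

```python
# Sketch: a restrictive security-group ingress rule, in the shape the
# EC2 AuthorizeSecurityGroupIngress API expects: SSH only from one
# office network. The CIDR below is a hypothetical example.
def restricted_ssh_rule(office_cidr: str) -> dict:
    return {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": office_cidr}],  # never 0.0.0.0/0 for admin ports
    }

rule = restricted_ssh_rule("203.0.113.0/24")
```

The same pattern applies to any of the commonly abused ports listed earlier: one narrow rule per port, never a blanket `0.0.0.0/0`.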

As well as managing the restrictions and policies of your AWS infrastructure itself, you should follow security best practices in any custom applications you deploy on AWS. By following these practices, enterprises and users alike can store their data in the cloud without compromising security, creating a more secure AWS environment.
