Virtual Private Cloud — or VPC — is something that you will encounter whenever you create a new resource on AWS. The purpose of having a VPC around your AWS resources is to define and control your resource groups and their accessibility from the outside world. If you have at least one active resource, or have created one in the past, there is always at least one VPC present, because AWS generates a default VPC for you automatically.
Your default VPC comes with an Internet Gateway and a public default subnet. This means that anyone from anywhere can reach your resources, subject to the rules you configure on that gateway. In addition, each resource in the default VPC receives a private IPv4 address, which gives the individual resources within the VPC a way to reach one another.
While on the surface this does not sound too bad, especially if the applications on your instances are public-facing, it comes with its own security risks. In this piece, we are going to go over what these risks are and how to address them by hardening your AWS VPC.
AWS VPC DDoS Attacks
A DDoS — or distributed denial of service — attack is a common technique in which malicious users flood a network or system with more traffic and connections than it can handle. This often results in the application or network breaking down and becoming unresponsive under the overload of requests.
Your organization may have auto-scaling implemented to mitigate spikes in traffic, or to create an elastic, automated infrastructure that adapts to different loads based on the time of day. While this is generally a good practice, it means a DDoS attack can end up costing an organization a significant amount of money as the automation scales out to keep up availability in response to the hostile traffic.
One way to reduce the impact of DDoS attacks in AWS is to restrict the type of traffic that can reach your applications. For example, a web application typically only needs TCP ports 80 and 443 open. Closing everything else prevents incoming traffic on vectors or ports that are not in use by your application.
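As a minimal sketch of this idea, the AWS CLI can create a security group that admits only web traffic. The VPC and group IDs below are placeholders; substitute your own:

```shell
# Create a security group that only admits web traffic
# (vpc-... and sg-... IDs are hypothetical placeholders).
aws ec2 create-security-group \
  --group-name web-only \
  --description "Allow only HTTP/HTTPS inbound" \
  --vpc-id vpc-0123456789abcdef0

# Open TCP 80 and 443 to the world. Everything else stays closed,
# because security groups deny all inbound traffic by default.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```

Because security groups are default-deny for inbound traffic, you only ever list what you allow; there is no need to explicitly block other ports.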
A common way that DDoS attacks are performed is through a technique called a reflection attack. The attacker scans the Internet for servers hosting User Datagram Protocol (UDP) services such as Simple Service Discovery Protocol (SSDP), Network Time Protocol (NTP), Simple Network Management Protocol (SNMP), and Domain Name System (DNS). The attacker then sends a large volume of small requests to these services with the source address spoofed to be the victim's, so the servers "reflect" their much larger responses at the victim, amplifying the flood.
To limit your exposure, you can control the administrative inbound and outbound traffic to your AWS VPC via a bastion. To do this, set up an SSH bastion on EC2 and only allow administrators to connect to TCP port 22 from a specific range of Internet addresses. Admins then reach the web application servers and database instances through the bastion. In the event of a DDoS attack against this particular port, your application will not be impacted — only the bastion. Traffic to your application port (TCP 8080, say) continues to flow via Elastic Load Balancing, and your web application servers are never directly exposed.
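A sketch of the bastion's ingress rule, assuming a hypothetical security group ID and an example office address range:

```shell
# Allow SSH to the bastion only from the administrators' known
# address range (group ID and CIDR are placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0bastion0000000000 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.0/24
```

Any SSH attempt from outside that CIDR block is dropped before it ever reaches the bastion's operating system.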
To prevent unwanted traffic from entering your VPC, you can also create a NAT (Network Address Translation) gateway, which lets instances in a private subnet initiate outbound traffic to the Internet while preventing the Internet from initiating inbound connections to them.
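A minimal sketch of this setup with the AWS CLI, using placeholder subnet and route table IDs:

```shell
# Allocate an Elastic IP and create a NAT gateway in a public subnet
# (subnet-... and rtb-... IDs are hypothetical placeholders).
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
  --query AllocationId --output text)
NAT_ID=$(aws ec2 create-nat-gateway \
  --subnet-id subnet-0public0000000000 \
  --allocation-id "$ALLOC_ID" \
  --query 'NatGateway.NatGatewayId' --output text)

# Send the private subnet's Internet-bound traffic through the NAT
# gateway. Inbound connections from the Internet are never forwarded.
aws ec2 create-route \
  --route-table-id rtb-0private0000000000 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id "$NAT_ID"
```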
If you run your own NAT instance instead of the managed gateway, the same effect is achieved through iptables rules, which let you restrict outbound traffic based on predefined destination ports or specific IP addresses. You can also create security policy rules that allow inbound traffic only from AWS endpoints, further restricting external traffic without affecting your current resources and their ability to communicate with one another. This blocks unauthorized external access, which is often hard to control through iptables rules alone; by restricting traffic to AWS endpoints that you control, you limit the accessible surface area.
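A sketch of such iptables rules on a self-managed NAT instance, assuming the private subnet is 10.0.1.0/24 and the Internet-facing interface is eth0 (both are assumptions; run as root):

```shell
# Masquerade outbound traffic from the private subnet so replies
# return via this NAT instance.
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.1.0/24 -j MASQUERADE

# Permit only HTTP/HTTPS to leave via the NAT instance...
iptables -A FORWARD -s 10.0.1.0/24 -p tcp \
  -m multiport --dports 80,443 -j ACCEPT

# ...allow replies to connections the private subnet initiated...
iptables -A FORWARD -d 10.0.1.0/24 \
  -m state --state ESTABLISHED,RELATED -j ACCEPT

# ...and drop all other forwarded traffic by default.
iptables -P FORWARD DROP
```

The default-drop policy on the FORWARD chain is what enforces the "outbound only to known ports" posture; everything not explicitly accepted is discarded.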
PCI DSS Requirements for Data Encryption
PCI (Payment Card Industry) requirements for data in transit differ between public and private networks. The general rule is that encryption of data during transmission is achieved using TLS (Transport Layer Security) between two endpoints. The issue here is that end-to-end encryption during transmission can impact performance and increase management overhead. For example, an application designed with ELBs (Elastic Load Balancers) and data transmitting between tiers can end up with up to five encryption and decryption points.
Example of simple web application setup with VPC
If you add firewalls, the number of encryption and decryption points increases to seven.
Example of simple web setup with additional security layers
With each encryption and decryption point, you are also tasked with SSL certificate and key management. To prevent excessive overhead, the trick is to limit the number of public subnets in your AWS VPC, route any outgoing traffic to the Internet via your NAT, and configure traffic between private subnets to use non-TLS connections.
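One common way to realize this pattern is to terminate TLS once at the load balancer and speak plain HTTP to targets in private subnets. A sketch with the AWS CLI, where all ARNs are hypothetical placeholders:

```shell
# Placeholder ARNs — substitute your own.
ALB_ARN=arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123
TG_ARN=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/def456
CERT_ARN=arn:aws:acm:us-east-1:111122223333:certificate/11111111-2222-3333-4444-555555555555

# Terminate TLS at the load balancer; the target group itself speaks
# plain HTTP inside the private subnets, so there is a single
# encryption/decryption point at the edge.
aws elbv2 create-listener \
  --load-balancer-arn "$ALB_ARN" \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn="$CERT_ARN" \
  --default-actions Type=forward,TargetGroupArn="$TG_ARN"
```

Whether plain HTTP between tiers satisfies your compliance scope depends on how your assessor classifies the private network, so confirm this against your own PCI DSS obligations.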
The purpose of TLS is to protect data in transit from being intercepted or maliciously tampered with. Once traffic has passed your perimeter checks and entered your VPC, it is not necessary to keep up the layers of encryption required in an external environment. This is because you control your instances and resources, and your software applications should have their own safeguards to stop unwanted transactions from slipping through.
Securely Connect to Linux Instances Inside an AWS VPC
The purpose of using SSH agent forwarding is to allow admins to securely connect directly to Linux instances, such as your AWS EC2 resources launched inside your VPC. The connection may be to deploy, patch, or update applications and systems, or to configure automated scripts and cron jobs. Whatever your reason for directly accessing your Linux resources, an SSH agent lets you do so securely.
It should be noted that SSH agent forwarding should be used with caution. It is good practice to never place your SSH private keys on the bastion instance itself: if the bastion is compromised, the private keys are immediately available to the attacker. Instead, keep your private keys on your own machines, managed with secure sharing methods and services such as HashiCorp Vault.
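A sketch of agent forwarding in practice; the hostnames, key path, and private IP below are examples, not real endpoints:

```shell
# On your workstation: load your key into the local ssh-agent.
ssh-add ~/.ssh/id_ed25519

# Connect to the bastion with agent forwarding (-A). The private key
# never leaves your machine; the bastion only relays signing requests
# back to your local agent.
ssh -A ec2-user@bastion.example.com

# From the bastion, hop to a private instance. Authentication is
# performed by the agent on your workstation, not by any key stored
# on the bastion.
ssh ec2-user@10.0.1.15
```

With newer OpenSSH clients, `ssh -J bastion.example.com ec2-user@10.0.1.15` (ProxyJump) achieves the same hop without exposing your agent to the bastion at all, which is generally the safer choice.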
Another good practice is to configure your security group to allow SSH connections (that is, TCP/22) only from known and trusted IP addresses. This means you will also need a static IP address at your physical location, such as your office or another authorized site. The next step is to configure the Linux instances in your VPC to accept SSH connections only from the bastion.
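That last step can be expressed as a security group rule that references the bastion's group rather than any IP range. A sketch with placeholder group IDs:

```shell
# Allow SSH to application instances only from members of the
# bastion's security group (both sg-... IDs are placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0appservers00000000 \
  --protocol tcp --port 22 \
  --source-group sg-0bastion0000000000
```

Referencing a source security group instead of a CIDR means the rule keeps working even if the bastion's IP address changes.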
Conclusion: Where To From Here?
Securing your AWS VPC takes more than just creating rules around who has access. Bastion hosts can act as a secondary gateway to your resources and limit the attack surface an attacker can exploit in your orchestrated applications.
Beyond limiting external access to your VPC, you can also create VPC peering connections — that is, connecting multiple VPCs so that traffic does not exit your AWS infrastructure until it is necessary. This can help mitigate further risks of exploitation when data is transferred between different spaces you control. You can do this for VPCs within the same region, or create inter-regional VPC peerings for cross-regional resources. That, however, is beyond the scope of this piece — but it is a good topic to explore if you want to build resiliency into your infrastructure's architecture.