Serverless is not a new concept — but for many developers, it can feel like it. For a long time, we’ve relied on servers and structures that we can personally configure and orchestrate. Some of us have built entire careers on our ability to maintain and secure servers for applications and enterprises.
Then along comes a little big thing called serverless, and everyone seems to be picking it up left, right, and center. For a software engineer, serverless can be fun to play and tinker with. You might even end up with a prototype that makes it into production.
Sure, we know serverless works. You put your code up and everything just appears to smoothly run along without a glitch. There is nothing to maintain. You don’t have to worry about elastic load balancing or scaling. Startups love it due to its low maintenance requirements. But the big question is: how secure is serverless?
Is serverless an advantage or a disadvantage for security?
While serverless may feel like a new security challenge, it offers more security advantages than a traditional infrastructure orchestration approach. The three main security issues attached to self-managed infrastructure are as follows:
- migration risks and auto-scaling
- technology obsolescence
- introduction of new technology and its impact on software fragmentation
Serverless addresses migration risks and auto-scaling as built-in features of the service. Whether your provider is AWS, Google Cloud, or Microsoft Azure, auto-scaling comes pre-configured for a seamless experience. The point of a service being ‘serverless’ is to remove the hands-on need for teams to deal with the risks that come with auto-scaling, such as excessive infrastructure costs or uneven implementation of security protocols across different services.
Because of the way serverless works, platform-level security updates are applied automatically. There is no need for manual intervention or scheduled updates from the system administrator or development team, which reduces operational costs for the business and cuts the risk of running obsolete technology and outdated security practices.
Finally, the third advantage of serverless over traditional infrastructure is the reduction of software fragmentation over time. This is due to the way serverless applications are constructed: modular by design, with independent pieces that work together to produce the desired outcome. You cannot run a monolithic application as a serverless implementation. When new features and technologies are introduced, they are required to work independently rather than being forced to rely on the legacies or artifacts produced by a monolith.
Common serverless practices that can turn into a security risk
On the surface, serverless sounds great. But is there a way to guarantee that a serverless application is secure? Most vulnerabilities occur through programming error and human negligence. It might be a misplaced app secret that you were supposed to encrypt, or secret keys accidentally pushed into a public space such as a public GitHub repo.
Serverless may reduce the overhead of implementing security, but the human risk of mistakes can never truly be eliminated, only reduced through awareness and sound coding practices.
Here are five common programming mistakes that can undermine the security benefits of serverless.
1. Broken Authentication
Setting up IAM permissions in AWS is not hard. When it comes to serverless, stateless authentication with Auth0 or JWTs is often the way to go. The issue is not with the authentication methods themselves, but with deploying insecure settings that leave components with public read access.
Why do we do this? Because it feels like the only way to get things connected. In reality, you can run your Lambdas and ElastiCache inside the same VPC. Once that’s done, all you have to do is attach the AWSLambdaVPCAccessExecutionRole managed policy to your Lambda’s execution role.
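As a sketch, that last step can be done with boto3. The role name below is a hypothetical placeholder, and the live API call is left commented out because it requires AWS credentials:

```python
# Minimal sketch, assuming boto3 and AWS credentials are available.
# "my-lambda-role" is a hypothetical placeholder for your execution role.
VPC_ACCESS_POLICY_ARN = (
    "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
)

def vpc_access_policy_arn() -> str:
    """Return the ARN of the AWS-managed VPC access policy for Lambda."""
    return VPC_ACCESS_POLICY_ARN

# The actual attachment (commented out: it needs live credentials):
#   import boto3
#   boto3.client("iam").attach_role_policy(
#       RoleName="my-lambda-role",
#       PolicyArn=vpc_access_policy_arn(),
#   )
print(vpc_access_policy_arn())
```

With this policy attached, the function can create the network interfaces it needs to reach services inside the VPC without anything being opened to public read access.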
2. Insecure deployment settings
AWS supports client-side encryption of files before they are sent. Server-side encryption, however, needs to be enabled on your target S3 bucket. Why? Because your Lambdas are viewed as independent pieces of code with no knowledge of, or access to, other services or Lambdas. Each function does one job. To keep your data and contents secure, you also need to encrypt the buckets that your data passes through.
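As a sketch, default server-side encryption can be enabled on the target bucket via boto3’s `put_bucket_encryption`. The bucket name is a placeholder, and the live call is commented out since it needs credentials:

```python
import json

# A sketch of default server-side encryption for an S3 bucket.
# "my-data-bucket" is a placeholder; aws:kms is one supported algorithm
# (AES256 is the other common choice).
encryption_config = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
    ]
}

# Applying it (commented out: it needs boto3 and live credentials):
#   import boto3
#   boto3.client("s3").put_bucket_encryption(
#       Bucket="my-data-bucket",
#       ServerSideEncryptionConfiguration=encryption_config,
#   )
print(json.dumps(encryption_config))
```

Once this is in place, every object written to the bucket is encrypted at rest regardless of which Lambda wrote it.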
3. Over-privileged permissions and roles
It’s easy to give admin rights to a user. We tend to set a single permission level for a service rather than fine-tuning it to grant only what is needed. Serverless functions should have only the permissions required to fulfill their purpose.
For example, if a function only needs to fetch from the database, it shouldn’t also have additional permissions such as write permissions. The same logic applies to interactions with your S3 buckets and any other services in your serverless provider’s ecosystem. The best practice is to follow the rule of one function, one use case, one permission.
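A least-privilege policy for such a read-only function might look like the following sketch; the account ID, region, and table name are hypothetical placeholders:

```python
import json

# A minimal sketch of a least-privilege IAM policy: this function may only
# read from one DynamoDB table. All ARN details are placeholders.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}

# Note what is deliberately absent: no dynamodb:PutItem, no
# dynamodb:DeleteItem, and no "*" wildcards on Action or Resource.
print(json.dumps(read_only_policy))
```

If the function is ever compromised, the blast radius is limited to reading one table rather than mutating your whole data layer.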
4. Insufficient monitoring
While serverless providers handle uptime, load balancing, and scaling, that doesn’t mean your application is automatically protected from errors. There’s more to monitoring than keeping an eye on uptime. Critical errors can occur due to code exceptions, and not keeping logs can increase your downtime.
On the flip side, keeping logs with sensitive information can increase potential security breaches for your serverless deployments. A good practice is to keep just enough logs to give you a bird’s eye view of what happened in your Lambdas and send alerts out if an exception occurs.
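A minimal sketch of that logging style, using a hypothetical handler and field names: log identifiers and exceptions, never payload contents.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")  # hypothetical function name

def handler(event, context=None):
    """Hypothetical Lambda handler: log identifiers, not sensitive content."""
    order_id = event.get("order_id")
    # Enough for a bird's-eye view: which record, which step. No secrets,
    # no full payloads.
    log.info("processing order_id=%s", order_id)
    try:
        validate(event)
    except Exception:
        # log.exception records the stack trace at ERROR level; alerting
        # (e.g. a CloudWatch alarm on ERROR logs) can hang off this.
        log.exception("processing failed for order_id=%s", order_id)
        raise

def validate(event):
    # Stand-in for real business logic.
    if "order_id" not in event:
        raise ValueError("missing order_id")
```

Re-raising after logging keeps the failure visible to the platform’s own error metrics instead of swallowing it.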
5. DoS attacks
Denial-of-service (DoS) attacks are a class of attack we should know how to mitigate by now; they have been around long enough. However, because serverless Lambdas auto-scale, a DoS attack can make keeping your deployments running very costly.
While a DoS attack won’t overwhelm your app’s ability to function, it can exponentially increase the financial burden of keeping your serverless deployments live. Serverless pricing is pay-per-use, and each attack sends waves of requests that drive up your final bill.
One way to mitigate this is to throttle incoming API calls through API Gateway, rather than directly to the serverless function.
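As a sketch, throttling limits can be set on an API Gateway usage plan so bursts are rejected at the gateway instead of invoking (and billing) the Lambda behind it. The numbers and IDs below are illustrative placeholders, not recommendations:

```python
# Illustrative throttling settings for an API Gateway usage plan.
# Tune these to your expected traffic; they are placeholders.
throttle_settings = {
    "rateLimit": 100.0,  # steady-state requests per second
    "burstLimit": 200,   # ceiling for short bursts
}

# Creating the usage plan (commented out: it needs boto3 and credentials;
# the API ID and stage name are placeholders):
#   import boto3
#   boto3.client("apigateway").create_usage_plan(
#       name="default-throttle",
#       throttle=throttle_settings,
#       apiStages=[{"apiId": "abc123", "stage": "prod"}],
#   )
print(throttle_settings)
```

Requests beyond these limits receive a 429 response at the gateway, capping the bill a flood of traffic can generate.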
Serverless is secure by design. Using it can increase security while implementing a security-first approach at the infrastructure layer. Serverless is a new-ish technology: it has been around long enough for its limitations, and how to mitigate its risks, to be well explored. However, it is still a maturing methodology, meaning not all of its benefits have emerged yet.
What’s currently letting us down is our practices and implementation.
A robust serverless function does one thing and one thing only. It’s easy to slip into an architecture where serverless functions become tightly coupled, with the potential to cascade into disaster if an exception or security breach occurs. If something goes wrong in a function that other applications strongly depend on, it may become too risky to remove, and downtime in a function that is too connected or too depended upon can ripple out into further downtime. The smaller your functions, the easier it is to define what they do. This, in turn, helps limit the impact of any attack that does occur.