Passlock's AWS setup


How we built our serverless Passkey platform on AWS

Tech is all about building blocks. We offer developers a simple, scalable, serverless passkey platform, which you can use to secure your web apps. By the same token, we have no interest in building our own infrastructure, given that players like AWS, Google and Azure are much better at it than we could ever be.

For now, we run on AWS - and we’re pretty happy. Here’s how we utilise the AWS stack to deliver our serverless platform.

Serverless vs Containers

The first big decision we had to make was whether to go down the serverless route ourselves. Serverless is generally cheaper and has a much lower ops overhead than Docker/Kubernetes.

The National Cyber Security Centre (Britain’s answer to the US CISA) recommends serverless for secure computing, so that’s a big tick in its favour.

However, the developer experience is, in my view, much better with containers. We can get everything running locally, mix and match runtimes and frameworks, and iterate quickly.

Time is money, and developers are not cheap, so containers were the answer … until we discovered SST.

SST

Firstly, a big shout out to the folks at SST. You’ve done an amazing job creating a truly developer-friendly serverless platform. SST is core to our development and we can’t recommend it enough. If you’re deploying serverless apps on AWS, do take a look at SST.

SST is essentially a wrapper around Amazon’s CDK framework, albeit with a wealth of enhancements. SST allows us to easily wire up the AWS products we use, makes development quick(ish) and offers great flexibility.
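
To give a flavour of that wiring, here’s a minimal sketch using SST v2 constructs. The route names and handler paths are hypothetical, not our actual codebase:

```ts
import { StackContext, Api } from "sst/constructs";

export function ApiStack({ stack }: StackContext) {
  // Each route maps straight to a Lambda handler in the repo
  const api = new Api(stack, "Api", {
    routes: {
      "POST /register": "packages/functions/src/register.handler",
      "POST /authenticate": "packages/functions/src/authenticate.handler",
    },
  });

  stack.addOutputs({ ApiEndpoint: api.url });
}
```

Behind the scenes this provisions API Gateway, the Lambda functions and the IAM plumbing between them.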

I still believe containers offer a better developer experience, however SST closed the gap significantly, whilst delivering all the operational and security benefits of serverless.

AWS IAM

Given that we’re developing a serverless authentication platform, our own authentication and access control need to be in order. One advantage of a serverless architecture is the ability to enforce granular segregation across the codebase.

We use IAM not only to control access to AWS services, but also to control access to specific functions (Lambdas). This is something we couldn’t easily do if we’d gone down the container route.
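
As a sketch of what this looks like (the function names are hypothetical), SST functions are regular CDK constructs, so invocation rights can be scoped down to a single caller:

```ts
import { StackContext, Function } from "sst/constructs";
import { PolicyStatement } from "aws-cdk-lib/aws-iam";

export function AuthStack({ stack }: StackContext) {
  // Internal function - not exposed via API Gateway
  const tokenSigner = new Function(stack, "TokenSigner", {
    handler: "packages/functions/src/sign.handler",
  });

  const registration = new Function(stack, "Registration", {
    handler: "packages/functions/src/register.handler",
  });

  // Only the registration handler's execution role may invoke the signer
  registration.addToRolePolicy(
    new PolicyStatement({
      actions: ["lambda:InvokeFunction"],
      resources: [tokenSigner.functionArn],
    })
  );
}
```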

AWS Key Management Service (KMS)

Our JWTs are signed using our own RSA private keys, specifically RS256 (RSASSA-PKCS1-v1_5 using SHA-256). Given you guys (hopefully!) trust our JWT claims, protecting our private keys is paramount. We use Amazon’s KMS to sign our JWTs, and the private keys never leave KMS.
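
In practice that means calling KMS’s Sign API rather than ever loading a key into memory. A minimal sketch (the key alias is hypothetical):

```ts
import { KMSClient, SignCommand } from "@aws-sdk/client-kms";

const kms = new KMSClient({});

// signingInput = base64url(header) + "." + base64url(payload)
async function signJwt(signingInput: string): Promise<string> {
  const { Signature } = await kms.send(
    new SignCommand({
      KeyId: "alias/jwt-signing", // hypothetical key alias
      Message: Buffer.from(signingInput),
      MessageType: "RAW",
      SigningAlgorithm: "RSASSA_PKCS1_V1_5_SHA_256",
    })
  );
  // Append the base64url-encoded signature to complete the JWT
  return `${signingInput}.${Buffer.from(Signature!).toString("base64url")}`;
}
```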

Lambda & API Gateway

To support scalable compute, AWS Lambda was the obvious choice, although we also use App Runner in a limited way. When a user uses Passlock to register or authenticate with a passkey, ultimately they’re invoking an AWS Lambda function.
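
Shape-wise, each endpoint is just a thin handler. A simplified sketch, not our actual registration logic:

```ts
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";

// API Gateway invokes this Lambda with the WebAuthn payload
export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const body = JSON.parse(event.body ?? "{}");
  // ... verify the WebAuthn attestation/assertion here ...
  return {
    statusCode: 200,
    body: JSON.stringify({ verified: true }),
  };
};
```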

We use a mixture of runtimes:

  • Node.js (TypeScript) - accounts for around 60% of executable code.
  • Go - accounts for approximately 30%.
  • Python - makes up the remaining 10%.

Why three runtimes/languages? It’s about choosing the right tool for the job. We could have written everything (at least the backend stuff) in TypeScript or Python, and these days there’s a lot of overlap between languages.

However, some runtimes just excel in certain areas. Python is great for quick, ad hoc stuff; JavaScript is the obvious choice for an API, given its native support for JSON; and Go’s performance and memory management are superb (at least compared to Node and Python).

I also take the view that any developer worth their salt should be comfortable coding in more than one language. Understanding patterns and best practices for multiple languages undoubtedly makes you a better programmer.

DynamoDB / ElectroDB

We store most of our data in AWS DynamoDB, adopting the single table design pattern. ElectroDB makes this pattern less challenging than it would otherwise be, although I’ll admit we did go down a few rabbit holes during the early days!
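
To give a flavour (the entity and attributes are illustrative, not our real schema), ElectroDB maps each entity onto a shared table’s generic pk/sk fields:

```ts
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { Entity } from "electrodb";

const client = new DynamoDBClient({});

const Credential = new Entity(
  {
    model: { entity: "credential", version: "1", service: "passlock" },
    attributes: {
      tenantId: { type: "string", required: true },
      credentialId: { type: "string", required: true },
      publicKey: { type: "string", required: true },
    },
    indexes: {
      byTenant: {
        pk: { field: "pk", composite: ["tenantId"] },
        sk: { field: "sk", composite: ["credentialId"] },
      },
    },
  },
  { table: "passlock", client } // one table shared by all entities
);

// Compiles down to a key condition query on the shared table
const creds = await Credential.query.byTenant({ tenantId: "t_123" }).go();
```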

S3 & CloudFront

The static aspects of the codebase, including parts of this site, are deployed on S3 and fronted by CloudFront. I’ve never been a great fan of CloudFront: unless you have a very high traffic site, it seems to purge its edge caches too frequently for my liking. Nevertheless, we weren’t going to employ another CDN for a relatively minor part of our stack.
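
SST keeps this part simple too. A sketch, with the paths and build commands assumed rather than taken from our repo:

```ts
import { StackContext, StaticSite } from "sst/constructs";

export function WebStack({ stack }: StackContext) {
  // Provisions the S3 bucket and CloudFront distribution,
  // and invalidates the edge cache on each deploy
  new StaticSite(stack, "Site", {
    path: "packages/web",
    buildCommand: "npm run build",
    buildOutput: "dist",
  });
}
```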

AWS VPC

Of course we use VPCs to segregate resources. However, as with our WAF deployment (see below), we don’t attach too much weight to VPCs. We believe in a defence in depth approach to security, applying granular access controls and encryption between services. We see VPCs as a bonus, but are wary of the “inside good/outside bad” mindset.

AWS Web Application Firewall (WAF)

We don’t rely on WAF to protect against common web exploits, as we aim to defend against these ourselves. We couldn’t credibly claim to take security seriously if we relied on a WAF to prevent SQL injection! However, we do use [Amazon’s WAF](https://aws.amazon.com/waf/) to defend against bots flying around the internet, using resources and polluting log files.
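
For the curious, attaching AWS’s managed bot control rule group via CDK looks roughly like this. A generic sketch, not our exact rule set:

```ts
import { StackContext } from "sst/constructs";
import { CfnWebACL } from "aws-cdk-lib/aws-wafv2";

export function WafStack({ stack }: StackContext) {
  new CfnWebACL(stack, "WebAcl", {
    defaultAction: { allow: {} },
    scope: "REGIONAL",
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: "web-acl",
      sampledRequestsEnabled: true,
    },
    rules: [
      {
        // AWS-managed rules that score and block bot traffic
        name: "BotControl",
        priority: 0,
        overrideAction: { none: {} },
        statement: {
          managedRuleGroupStatement: {
            vendorName: "AWS",
            name: "AWSManagedRulesBotControlRuleSet",
          },
        },
        visibilityConfig: {
          cloudWatchMetricsEnabled: true,
          metricName: "bot-control",
          sampledRequestsEnabled: true,
        },
      },
    ],
  });
}
```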

AWS CloudWatch

To ensure everything is up and running as it should be, we employ AWS CloudWatch. We primarily use its logging features, along with Lambda and API endpoint monitoring and alerting.
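
Because SST functions are CDK constructs, alerting is nearly a one-liner. A sketch, with the function and thresholds purely illustrative:

```ts
import { Duration } from "aws-cdk-lib";

// `registration` is the hypothetical SST Function from the earlier
// sketch: alarm if it reports any errors in a five-minute window
registration
  .metricErrors({ period: Duration.minutes(5) })
  .createAlarm(stack, "RegistrationErrors", {
    threshold: 1,
    evaluationPeriods: 1,
  });
```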

Toby Hobson

Founder
