Many AWS AppSync customers will have noticed a change in the AWS Console during the last 24 hours with the release of support for multiple authentication methods. It’s probably the single most requested feature for AppSync. While we’re still waiting for the official blog announcement, I had a chance to explore it earlier today by adding API Key authentication to an existing application that previously only supported Cognito User Pools.
A few weeks ago I responded to a question about API Gateway custom authorizers and how the returned policy is cached.
With custom authorizers you have two options:
1. Return a list of policy statements covering every resource the user needs to access. Most of the time this is what you want, because it allows API Gateway to cache the policy. While you can use a wildcard, you can also list each resource as its own policy statement.
2. Change the TTL to 0 so the policy is never cached. This causes the custom authorizer to be executed for each request. Depending on your authentication mechanism, this may allow you to cut someone off immediately.
Where possible I would go with option 1 over option 2.
Yesterday I was asked why option 1 was preferable. It’s a very good question and one that I think deserves a detailed answer. Beyond the superficial response that setting the TTL to 0 will trigger two Lambda invocations for every API call, there is a more fundamental reason for using option 1.
I love GraphQL and serverless architectures, so it shouldn’t be a surprise that I’m a huge fan of AWS AppSync. It’s quickly become my default starting point for any new application. While AppSync is awesome, it does have a few rough edges, and one of those is the endpoint hostname.
When you create a new AppSync API you’ll receive an endpoint with a URL that looks like
It’s not particularly attractive, but that’s not the issue. The first part of the hostname is randomly generated by AWS when you create a new API. Having our application depend on a randomly generated URL for the API is a potential disaster for the business.
A common question on the Serverless Framework forums goes something like “Why are my environment variables replaced during Serverless deployments?” or “How can I stop Serverless from replacing environment variables during deployment?”. In this article I’m going to provide you with some techniques to help mitigate the problem.
To understand what is happening and how to mitigate the problem, you need to know that the Serverless Framework is an abstraction layer on top of CloudFormation. Serverless takes the functions section of your serverless.yml and expands it into a full CloudFormation template, creating additional resources as required and making sure they are all connected correctly. Building Serverless on top of CloudFormation removes the complexities around managing change sets, but it also means that Serverless has all of the limitations of CloudFormation, and that is the root cause of this problem.
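One mitigation I find useful (a sketch; the resource and topic names are illustrative, not from the original post): the collision arises because CloudFormation’s Fn::Sub uses the same ${...} placeholder syntax that Serverless resolves for its own variables. Rewriting the expression with Fn::Join avoids ${...} entirely, so Serverless has nothing to replace:

```yaml
# serverless.yml (sketch): instead of Fn::Sub "my-topic-${AWS::Region}",
# which Serverless may try to resolve as one of its own variables,
# build the same string with Fn::Join and a Ref.
resources:
  Resources:
    ExampleTopic:
      Type: AWS::SNS::Topic
      Properties:
        TopicName:
          Fn::Join:
            - "-"
            - - "my-topic"
              - Ref: AWS::Region
```

Serverless 1.x also lets you change provider.variableSyntax so the two syntaxes no longer overlap, but avoiding Fn::Sub is the simpler fix where it applies.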
A couple of years ago I started building Shopify apps using serverless architectures. My first attempts were React applications using a REST API built with the AWS API Gateway, Lambda and DynamoDB, but I quickly moved to GraphQL once I discovered how powerful it was.
When I saw AppSync and Amplify in the re:Invent 2017 videos I knew immediately that I wanted to use them. Unfortunately, it became obvious very quickly that this wasn’t going to be smooth sailing. AppSync required all requests to be authenticated, but none of the supported authentication methods worked with Shopify OAuth2. In fact, most OAuth2 providers aren’t supported unless they also implement OpenID Connect, which rules out providers like Twitter.
With the Node 8.10 runtime AWS added a new async handler syntax.
Occasionally you need to know the API Gateway URL for your services inside your Lambda. This happened to me recently when one of my Lambdas needed to provide a callback URL to a third-party service that it was using.
It seems that a lot of people are solving this problem by deploying their APIs using Serverless, then copying the URL and redeploying again with that URL hard-coded into their serverless.yml as an environment variable.
If you’re one of those people then STOP. Not only is this the hard way, it also has two very negative side effects.
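A sketch of the alternative: assemble the URL at deploy time from a CloudFormation reference, so there is no copy-and-redeploy step. This assumes ApiGatewayRestApi, the logical ID Serverless 1.x gives the REST API it generates; verify the name against your own compiled template.

```yaml
# serverless.yml (sketch): build the API URL from the generated REST API's
# ID and expose it to every function as an environment variable.
provider:
  environment:
    SERVICE_URL:
      Fn::Join:
        - ""
        - - "https://"
          - Ref: ApiGatewayRestApi
          - ".execute-api.${self:provider.region}.amazonaws.com/${self:provider.stage}"
```

Inside the Lambda, process.env.SERVICE_URL then always reflects the current deployment.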
NOTE: This has been tested using Node 6.10, NPM 4.5 and Serverless 1.11. I don’t know the earliest versions of these packages that this works with.
Most developers will find themselves with a number of dependencies in their package.json that are only required for development. In the serverless world it’s not uncommon to require a compiler/transpiler (TypeScript/Flow/Babel), type definitions, a unit testing framework, additional plugins, etc.
Even though they’re included in your devDependencies, they’re put into the same node_modules folder as your regular runtime dependencies. This means that when Serverless creates your deployment package you often end up shipping a large number of packages that are only required for development. Obviously this is less than ideal for a number of reasons.
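One way to trim the package (a sketch; the flag landed in later Serverless 1.x releases and the excluded paths are illustrative, so check what your version supports):

```yaml
# serverless.yml (sketch): let Serverless strip devDependencies from the
# bundle automatically, or exclude known dev-only folders by hand on
# versions that predate the flag.
package:
  excludeDevDependencies: true
  exclude:
    - node_modules/typescript/**   # illustrative dev-only packages
    - node_modules/jest/**
```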
In a previous post I wrote about using per stage environment variables with the Serverless Framework. That article showed how you could set different values for environment variables depending on the stage you were deploying to.
There still seems to be a lot of misunderstanding about how powerful the environment variable implementation in Serverless is. Today I want to show you how to keep your secrets out of version control.
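A minimal sketch of one such technique (file name and keys are illustrative, not necessarily the post’s own): keep the values in a file that is listed in .gitignore, and commit only the reference to it.

```yaml
# serverless.yml (sketch): secrets.yml is git-ignored; only this reference
# is committed. Serverless resolves ${file(...)} and ${env:...} at deploy time.
provider:
  environment:
    DB_PASSWORD: ${file(./secrets.yml):DB_PASSWORD}
    # Alternatively, pull the value from the deploying machine's environment:
    API_TOKEN: ${env:API_TOKEN}
```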
One question that I keep hearing is “Can I have more than one handler in my AWS Lambda function?”. This is usually in the context of someone wanting to implement a services pattern using the API Gateway, where all traffic for a resource is handled within a single
As developers we all know that it’s best practice to keep configuration outside of the application. This serves two purposes: