You build Serverless applications. You know that you should use environment variables instead of putting things like secrets, configurations, and environment-specific logic in your application code, because you follow the 12-Factor App Methodology.
But configuring Serverless environment variables for projects is a pain.
What if it was not a pain? What if you had a straightforward way to configure your Serverless environment variables?
I’ve got you covered:
Overview
- Serverless Environment Variables on AWS Lambda and Node.js
- Environment Variables in serverless.yml vs. Environment Variables in your Serverless functions
- Why Use Environment Variables?
- AWS Lambda, Node.js, and Environment Variables
- Local, Staging, & Production Serverless Environment Variables
- Serverless Environment Variables Based on Serverless Stage
- Serverless Environment Variables and Git (version control)
- Tests and Serverless Environment Variables
- Production Serverless Environment Variables and Secrets
- Sharing Serverless Environment Variables
- Function-specific Serverless Environment Variables
- Recap
Serverless Environment Variables on AWS Lambda and Node.js
This guide is going to cover configuring Serverless environment variables for AWS Lambda running the Node.js 10.x runtime.
The process may be different for vendors other than AWS and runtimes other than Node.js, but hopefully, you can apply these same principles to your chosen Serverless vendor and environment.
Environment Variables in serverless.yml vs. Environment Variables in your Serverless functions
If Serverless vs. serverless was not confusing enough, our discussion of environment variables is also rife with confusion.
Let me explain.
When working with Serverless (with a capital “S”), there are two kinds of environment variables we need to be concerned about:
- The environment variables used by your functions in their runtime. In our case, Node.js.
- The environment variables in your local shell (Bash, Zsh, etc.) that are used by Serverless when building, packaging, and deploying your Serverless functions.
Case #2 is straightforward, and you will probably encounter it less often than case #1, so we will only touch on it briefly. Serverless provides a convenient way for you to reference environment variables that are set in your local environment in your serverless.yml file.
custom:
  editor: ${env:EDITOR} # In my case, EDITOR is set to 'vim'
In this example, ${env:EDITOR} will be replaced with whatever the value of EDITOR is in your local shell. In this case, it would equal vim. This approach is useful when you have an environment variable in your local environment that you want Serverless to interpret when, for example, you deploy your code to Lambda.
For the purposes of this guide, however, we will focus on case #1. We want to specify environment variables for our Node.js Lambda functions to use.
Let’s take a look.
Why Use Environment Variables?
First, if you are like me, you hate committing environment-specific code into your application logic. This type of code is part of the configuration of your application, not the application itself. It is best to separate your configuration from your application.
In addition, environment code can change frequently, become stale, or contain information, like credentials, that should not be kept in version control (unlike your application code).
What do you do instead of committing environment-specific code into your application? Use environment variables.
Let’s see an example of how this would work with an AWS Lambda function and Node.js.
AWS Lambda, Node.js, and Environment Variables
Let’s say we need a Lambda function that fetches some data from a third-party API.
The API has two endpoints, one for testing: https://www.example.com/api/test and one for production: https://www.example.com/api/prod.
We will use the testing endpoint for local development, for our test suite, and for our staging Lambda functions. The production endpoint will only be used for our production Lambda functions.
To keep environment-specific logic out of our application code, we use an environment variable called API_ENDPOINT to set the API endpoint for our application to use.
To get an environment variable in Node.js, we use process.env.NAME_OF_VARIABLE. So, in our case, to get API_ENDPOINT, we use process.env.API_ENDPOINT.
const axios = require('axios')

module.exports.fetchData = async (event, context) => {
  // Read the endpoint from the environment instead of hard-coding it
  const response = await axios.get(process.env.API_ENDPOINT)
  return response.data
}
Above, we have a very basic Lambda function that simply makes a GET request (using the axios JavaScript library) to the API endpoint specified via an environment variable.
But how do we specify the API_ENDPOINT environment variable for our Lambda function, so that Node.js will pick it up in our code?
Lambda has a special section inside AWS where you can specify the environment variables available to that Lambda function.
Of course, if you need to specify the same environment variables across multiple Lambda functions, entering them via the UI this way is not sustainable or scalable. Every time you need to change an environment variable that is shared across Lambda functions, you would have to change the values on every function. Not very fun.
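You could script the console approach with the AWS CLI. Here's a sketch (the function name and value are placeholders, and note that update-function-configuration replaces the function's entire set of environment variables on each call), but you would still be updating one function at a time:
aws lambda update-function-configuration \
  --function-name fetchData \
  --environment "Variables={API_ENDPOINT=https://www.example.com/api/prod}"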
This is where Serverless environment variables shine.
Serverless allows you to specify environment variables for a function via the environment key within the function specification.
fetchData:
  handler: ...
  environment:
    API_ENDPOINT: "https://www.example.com/api/prod"
By specifying API_ENDPOINT under the environment key, Serverless will now add API_ENDPOINT to the list of environment variables in the function when we deploy our function to AWS via sls deploy -s production.
Let’s try it and see:
sls deploy -s production
OK, great. Now, we can access API_ENDPOINT inside the Lambda function.
But wait, what about our test suite, or local development, or staging functions? Where do we specify our test endpoint, https://www.example.com/api/test?
Local, Staging, & Production Serverless Environment Variables
You may be thinking, “OK, I’ll use some kind of conditional, maybe an ‘if’ statement, to check which environment is needed.” You have the right idea, but unfortunately YAML, the configuration language used by Serverless, does not support conditional statements.
Serverless Variables
To get around the lack of conditionals in YAML, Serverless has its own variables. They can be used with the special syntax ${} anywhere within the serverless.yml file.
Using a Serverless variable, we can leverage the Serverless command line option called stage to switch which environment variable we use depending on the stage that is provided.
You can get the stage provided via the command line options anywhere in serverless.yml using:
${opt:stage}
But what if this option is not set? Luckily, Serverless has thought of this and created a way to set defaults for your Serverless variables. We can supply a default as the second argument inside the ${} syntax. It looks like this:
${customVariable, defaultValue}
In this example, if customVariable is not set, then Serverless will fall back to using defaultValue instead.
In our case, if the stage option is not provided via the command line, we can ensure that a default value is used instead. It makes sense to have our variable default to the provider stage. We can get the provider stage using self:provider.stage.
${opt:stage, self:provider.stage}
Great, now our variable has a default. However, if we have more than one Serverless function in our Serverless application, we will likely need to access this variable in multiple places, and the expression is long and verbose.
Luckily, Serverless allows us to set our own keys using the custom key as a root element. You can define your own keys under custom and access them as Serverless variables in the file via the self:custom accessor.
custom:
  stage: ${opt:stage, self:provider.stage}
Anywhere in our Serverless configuration files (serverless.yml in our case), we can get the stage via ${self:custom.stage}.
So, when we specify the stage via the command line:
serverless invoke local -f fetchData -s staging
${self:custom.stage} will equal staging anywhere in the file.
Serverless Environment Variables Based on Serverless Stage
OK, we now have the ability to read what stage is specified. But how does that help us with Serverless environment variables?
Well, we can switch which Serverless environment variables should be set depending on the stage that is specified.
Let’s go back to our third-party API endpoint example. Let’s say we want to specify the test API endpoint for our staging Serverless functions and the production API endpoint only for our production Serverless functions.
We can use custom variables to specify which endpoint to use:
custom:
  stage: ${opt:stage, self:provider.stage}
  apiEndpoint:
    production: https://www.example.com/api/prod
    staging: https://www.example.com/api/test
How do we set these custom variables as environment variables in our Serverless functions? We reference them like this:
fetchData:
  handler: ...
  environment:
    API_ENDPOINT: ${self:custom.apiEndpoint.${self:custom.stage}}
Note: I’m putting the environment key under the fetchData function key. Serverless also supports global environment variables (set under provider), in which case this method works exactly the same.
Great! Now, when we pass the -s option via the serverless command, it will set API_ENDPOINT to either https://www.example.com/api/prod or https://www.example.com/api/test, depending on what we pass as the -s option.
As an example:
serverless deploy -s staging -f fetchData
will deploy the new code to the fetchData Lambda function.
Now, if we go to the fetchData Lambda function in the AWS Lambda Management Console, we can see, in the Environment Variables section, our key API_ENDPOINT with the value https://www.example.com/api/test.
Here’s what our serverless.yml file looks like now:
custom:
  stage: ${opt:stage, self:provider.stage}
  apiEndpoint:
    production: https://www.example.com/api/prod
    staging: https://www.example.com/api/test

...more code here...

fetchData:
  handler: src/fetchData/handler.js
  environment:
    API_ENDPOINT: ${self:custom.apiEndpoint.${self:custom.stage}}
OK, so now we can specify the correct API endpoint depending on the environment we want to run in.
There’s a problem with our file, however. The way that we have written our Serverless environment variables is fine for variables that are not sensitive in nature.
But what about data that is sensitive? Let’s say our third-party API requires an API secret key with each request.
The way that serverless.yml is currently written, we’d have to add the API secret key under custom. If you save your serverless.yml file to version control (and you should), your API secret key is now committed to version control history and accessible to anyone who has access to your repository (I’m going to assume Git for version control).
Serverless Environment Variables and Git (version control)
A better approach is to keep our Serverless environment variables out of the Git repository. We can do this by specifying the environment variables in a separate file and then loading them into serverless.yml.
Let’s take a look at how this works:
# serverless.yml
custom:
  stage: ${opt:stage, self:provider.stage}

fetchData:
  handler: src/fetchData/handler.js
  environment: ${file(./environment_variables.json):${self:custom.stage}}
OK, a lot has changed here, so let’s break it down.
First, we no longer put our environment variables under the custom key. Instead, we specify them under the environment key under the function name.
file() is a function provided by Serverless. You can probably guess that the file() function accepts a path to a file that we want to read into our serverless.yml file. In our case, we are reading a JSON file called environment_variables.json. The fact that we are using a JSON file is important for the next part.
The : symbol tells Serverless that after reading the contents of environment_variables.json, we want to retrieve the value under the key name that follows the :.
For example, ${file(./environment_variables.json):API_ENDPOINT} would read the environment_variables.json file and then look for a key called API_ENDPOINT.
Let’s look again at our example:
fetchData:
  handler: src/fetchData/handler.js
  environment: ${file(./environment_variables.json):${self:custom.stage}}
In our example, first we read environment_variables.json, then we use the Serverless variable custom.stage to find out what stage we are running in. Depending on the value of custom.stage, we then look for a key in environment_variables.json with that name.
Let’s see what environment_variables.json looks like:
# environment_variables.json
{
"production": {
"API_ENDPOINT": "https://www.example.com/api/prod",
"API_SECRET_KEY": "productionsecretkey"
},
"staging": {
"API_ENDPOINT": "https://www.example.com/api/test",
"API_SECRET_KEY": "stagingsecretkey"
}
}
You can see that environment_variables.json is a JSON object whose keys are the names of the stages we are using. The value of each stage name is a JSON object containing key-value pairs of environment variables to be set within that stage.
When Serverless loads environment_variables.json, it will have access to the main JSON object. It will then look for the key that corresponds to ${self:custom.stage} and load the JSON object associated with that key. Any keys and values in that object will become the environment variables set for our fetchData function.
One more thing. Obviously, the whole idea behind separating our environment variables into their own file is to protect potentially sensitive data from getting added to the repository.
Therefore, we need to add environment_variables.json to our .gitignore file.
# .gitignore
environment_variables.json
Great! Now our environment variables and sensitive secrets are out of our application repository.
OK, so we have a great way to set our environment variables for our staging and production environments. But…
Tests and Serverless Environment Variables
What happens when we try to run our tests?
You are writing tests for your functions…right? Right??
In our example, we are using Node.js for our Lambda functions. If you have written unit tests for your code with something like Jest, right now, anywhere we read an environment variable with process.env, the value will be undefined. Not good.
Herein lies our next problem. While the serverless.yml file allows us to specify the environment variables we want set in AWS, it does not set them when we run the JavaScript of our Serverless functions in our unit tests.
The way to set environment variables for your JavaScript tests is the same way we set local environment variables for any other local command.
There are many ways to set environment variables for command line processes. My preferred way right now is to use direnv. direnv lets us create a file called .envrc at the root of our project, and it automatically loads any environment variables exported in that file.
Here’s an example of an .envrc file for our case:
export API_ENDPOINT=https://www.example.com/api/test
export API_SECRET_KEY=stagingsecretkey
When we run npm run test, process.env.API_ENDPOINT and process.env.API_SECRET_KEY will be populated in our tests.
Different Serverless Environment Variables for Different Serverless Environments
I don’t know about you, but in this age of web security, we need to hold ourselves to a higher standard than simply keeping all of our application secrets together in the same file (and unencrypted, no less).
Instead of putting all of our secrets and environment variables in one file, like environment_variables.json, I propose that we keep a separate file for each environment and load the specific file when we need to use it.
For example, say we plan to have three environments: dev for running our functions locally with sls invoke local, and staging and production in Lambda.
Let’s create a file for each of our environments in the root of our project. Our development environment variables file will be dev.env.json, our staging environment variables staging.env.json, and our production environment variables production.env.json.
NOTE: In this case, “env” is added to the file extension to make it more obvious from the name that these are environment configuration files, but this is only my convention and is completely optional.
We have created separate files for each of our environments, so let’s move the production, staging, and dev environment variables out of environment_variables.json and into their respective JSON files, so that they look like this:
# dev.env.json
{
"API_ENDPOINT": "https://www.example.com/api/fake",
"API_SECRET_KEY": "fakesecretkey"
}
# staging.env.json
{
"API_ENDPOINT": "https://www.example.com/api/test",
"API_SECRET_KEY": "stagingsecretkey"
}
# production.env.json
{
"API_ENDPOINT": "https://www.example.com/api/prod",
"API_SECRET_KEY": "productionsecretkey"
}
You can also delete environment_variables.json and remove it from .gitignore. We no longer need it.
We’re going to change our fetchData function in our serverless.yml file to reference the file of the stage we specify when we run or deploy our functions.
fetchData:
  handler: src/fetchData/handler.js
  environment: ${file(./${self:custom.stage}.env.json)}
So now, when we deploy fetchData to AWS for the staging environment with sls deploy -s staging -f fetchData, Serverless will look for the staging.env.json file in the root of our project directory and put the keys and values in that file into the environment key of our fetchData function.
Production Serverless Environment Variables and Secrets
I don’t know about you, but even keeping my staging and production environment variables and secrets in a file on my local machine is not secure.
We need to take our design one step further. Instead of placing them in an unencrypted file on our local machine, we can use AWS Secrets Manager to manage our staging and production environment variables and secrets.
AWS Secrets Manager and Serverless
I don’t want to do a whole tutorial on AWS Secrets Manager (SM), as this guide is not about how to use AWS, but in order to understand the next part of our code, you need to understand a little bit about AWS SM and how it works.
AWS SM is a simple key-value store that charges based on the number of “secrets” (keys) that you store and the number of requests for those keys.
The service is not cheap. If we stored each of our many sensitive environment variables as its own secret, it would get expensive quickly.
However, instead of storing each environment variable as a separate “secret” in AWS SM, we can take advantage of the fact that AWS SM can accept JSON objects as values.
And luckily for us, we already have all of our staging and production environment variables separated into per-environment files, each containing an already formatted JSON object.
So, how do we save our JSON objects for each environment to AWS SM, so Serverless can easily retrieve them when we deploy to Lambda?
AWS SM allows us to specify a path-like name for each key we save. We can use this to include the environment in the path.
Let’s try it. Let’s say we want to save the JSON objects for staging and production to keys called envs. We obviously cannot save them to the same path, or SM would overwrite one with the other.
Instead, let’s include the environment names, staging and production, in the paths:
sampleApplication/staging/envs
sampleApplication/production/envs
Note: you will see ~true appended to these paths later. It is not part of the secret’s name; it is Serverless syntax that tells the framework to decrypt the value when reading the secret.
Save each JSON object to its respective path. We now have a place to securely store our staging and production environment variables.
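If you prefer the command line, here's a sketch of doing that with the AWS CLI (assuming the secret names above; the console works just as well):
aws secretsmanager create-secret \
  --name sampleApplication/staging/envs \
  --secret-string file://staging.env.json

aws secretsmanager create-secret \
  --name sampleApplication/production/envs \
  --secret-string file://production.env.json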
Let’s look at how we access them when we want to deploy our functions to Lambda.
Accessing Encrypted Environment Variables in Serverless
CloudFormation (and by extension, Serverless) provides a nice interface for accessing encrypted secrets in AWS Secrets Manager from our Serverless templates. We can access our environment variables/secrets by using the ssm variable in Serverless and passing in a path to our key. Here’s an example for our production secrets:
${ssm:/aws/reference/secretsmanager/sampleApplication/production/envs~true}
Great, now we can tell Serverless where to load our environment variables for production. How do we handle staging and dev? Let’s modify this line to handle all stages:
${ssm:/aws/reference/secretsmanager/sampleApplication/${opt:stage, self:provider.stage}/envs~true}
With this change, Serverless will read the stage from the command line and add it to the path. If no stage option is provided via the command line, it will use the default provided by the provider.
If we deploy to our production environment with sls deploy -s production, ssm would look for a key at /aws/reference/secretsmanager/sampleApplication/production/envs.
The best part of this change is that it scales. We can add as many environments as we want without having to change the serverless.yml file. With this setup, we can easily create another environment called admin, create a new JSON object with new credentials, and save it in SM as sampleApplication/admin/envs.
We’ll never have to change serverless.yml. All we have to do is specify the stage via the command line with -s admin.
Let’s see how we can use this line to declare the environment variables for our fetchData function:
custom:
  stage: ${opt:stage, self:provider.stage}
  ssm: ${ssm:/aws/reference/secretsmanager/sampleApplication/${opt:stage, self:provider.stage}/envs~true}

fetchData:
  handler: src/fetchData/handler.js
  environment: ${self:custom.ssm, file(./${self:custom.stage}.env.json)}
There’s a lot going on with this change, so let’s break it down.
custom:
  ssm: ${ssm:/aws/reference/secretsmanager/sampleApplication/${opt:stage, self:provider.stage}/envs~true}
Why are we setting the value returned by ssm to a custom variable? Well, if you end up having more than one function that needs to check ssm for environment variables or secrets, a custom variable will significantly DRY up our code: we won’t have to repeat that long expression every time we want to grab a variable or secret out of ssm.
Now, we come to the second part:
fetchData:
  handler: src/fetchData/handler.js
  environment: ${self:custom.ssm, file(./${self:custom.stage}.env.json)}
There are two important things happening with the environment key.
First, we are getting the custom variable we set with self:custom.ssm.
Second, we can use default values to our advantage. When setting the environment variables for the fetchData function, we first tell Serverless to check ssm for our environment’s key. If the stage we are deploying is production or staging, the ssm lookup succeeds, and Serverless uses the values stored in Secrets Manager as the environment variables.
But if we invoke the function locally with the dev stage, Serverless will not find anything through ssm, and it will instead fall back to looking for a file on our local machine at ./dev.env.json.
Using this approach, we can keep our production and staging secrets encrypted and off our local machine, while still allowing us to invoke the function locally with a different configuration.
This approach allows us to experiment with different environments and configurations locally. If we want to try a different configuration locally with sls invoke local, all we have to do is create a new environment configuration file, admin.env.json for example, and then specify the stage when invoking locally: sls invoke local -s admin -f fetchData. We never have to change our serverless.yml file or commit these configurations to AWS SM.
Sharing Serverless Environment Variables
Now is a good time to pause momentarily and talk about ways to share environment variables among your development team.
Most of us work with other developers on our applications, and even if you don’t, you may have a Serverless application that you hope will one day grow big enough to require more than one developer.
If you are working with multiple developers, then you know that at some point, you’ll have to share your Serverless environment variables with other members of your team.
Sharing production environment variables is the most critical and also the most sensitive, so we’ll talk about that first. In our example, we are using AWS Secrets Manager to save our production environment variables.
Using Secrets Manager, we can manage access to our production environment variables like we manage access to any other AWS resource, using IAM. If you are already familiar with AWS IAM, then you will be familiar with how to handle the permissions for Secrets Manager. For example, using IAM, we give access only to the devs on our team that need to be able to deploy to production.
If you are using something like a build server, you could create an IAM role for the build server that allows it to access the production secrets.
The point is that with AWS Secrets Manager, access to production environment variables can be tightly controlled and monitored, so production environment variables are not shared insecurely.
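As a rough sketch, an IAM policy for that role might grant read access to just the production secret (the region, account ID, and the trailing wildcard for the random suffix Secrets Manager appends to secret ARNs are placeholders/assumptions):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:sampleApplication/production/envs-*"
    }
  ]
}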
Now, whether you choose the same level of security access for your other non-production environments is up to you and your team. In our example, I’m going to say that our dev environment does not require the same level of security, so we’ll use a different method for sharing environment variables among a development team.
Remember from our example that in order to invoke our functions locally with the dev stage, we created a local file called dev.env.json. In this file, we set the environment variables that we want our Lambda functions to use locally.
One big question we haven’t asked ourselves is “should we commit this file to version control?” Thoughts on this topic have changed over the years. In the past, teams seemed to be OK with committing non-production environment config settings to version control.
Personally, I don’t believe in committing this dev.env.json file to version control, and the trend in software development has gone this way as well. We’ll add it to our .gitignore file.
# .gitignore
dev.env.json
What do we do instead? How do we share the dev environment variables with the other developers on our team?
Well, one convention is to create an “example” file that has the same variable names but with fake or example values instead of real ones. Let’s take a look at an example:
# dev.env.json.example
{
"API_ENDPOINT": "FAKE_API_ENDPOINT",
"API_SECRET_KEY": "FAKE_API_SECRET_KEY"
}
We can now commit this file, and not a real dev.env.json file, to the Git repository. When other developers download the repository, they can see the format and keys required by the application, copy the example file, and fill in their own values.
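Getting set up is then just a copy and an edit:
cp dev.env.json.example dev.env.json
# now open dev.env.json and fill in real values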
In our case, we also have the environment variables that need to be populated when running our tests. Remember, we are using direnv to populate our terminal environment with the correct variables by specifying the keys and values in a .envrc file.
Because .envrc may contain information that is sensitive and specific to our own local machine, we obviously don’t want to commit this file to the repository or have to share it with the other developers on our team.
We can use the same strategy with .envrc as we did with dev.env.json. We can create an example file called .envrc.example and populate it with the keys that are required, but replace the values with fake ones.
# .envrc.example
export API_ENDPOINT=TEST_API_ENDPOINT
export API_SECRET_KEY=TEST_SECRET
We can add .envrc to our .gitignore file.
# .gitignore
dev.env.json
.envrc
And then commit our .envrc.example file instead.
Using this approach, we can avoid committing any Serverless environment variables to our git repository, while still giving the other developers on our team an idea of the kind of variables that need to be set in our different environments.
OK, now that we have taken a minute to talk about how to share Serverless environment variables, let’s continue by narrowing the scope of our environment variable files even further.
Function-specific Serverless Environment Variables
The setup we have created works well for our one-function Serverless application. But Serverless applications that use only one function are not very useful.
Let’s expand our application to include another function that takes some data and saves it to an RDS MySQL instance. Let’s call the function saveData.
Obviously, in order to connect to the database, saveData is going to need the database credentials.
Again, we can set these credentials as environment variables in our Node.js code, keeping the credentials out of our application code.
Let’s create the saveData function:
const mysql = require('mysql')

// Build the connection from environment variables instead of hard-coded credentials
const connection = mysql.createConnection({
  host: process.env.MYSQL_HOST,
  user: process.env.MYSQL_USERNAME,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DB
})

module.exports.saveData = async (event, context) => {
  connection.query(...)
}
Here, we have environment variables for the host, username, password, etc. for our MySQL database. Using the same approach we outlined earlier, we need to set these variables in AWS Secrets Manager for production and staging, or in our dev.env.json file for our development environment.
Let’s change the JSON objects for all of our environments to include our MySQL environment variables.
# dev.env.json
{
"API_ENDPOINT": "https://www.example.com/api/test",
"API_SECRET_KEY": "fakesecret",
"MYSQL_HOST": "localhost",
"MYSQL_DB": "sample",
"MYSQL_USERNAME": "SampleUser",
"MYSQL_PASSWORD": "SamplePassword"
}
# staging
{
"API_ENDPOINT": "https://www.example.com/api/test",
"API_SECRET_KEY": "stagingsecretkey",
"MYSQL_HOST": "staging.aws.url",
"MYSQL_DB": "staging",
"MYSQL_USERNAME": "StagingUser",
"MYSQL_PASSWORD": "StagingPassword"
}
# production
{
"API_ENDPOINT": https://www.example.com/api/prod,
"API_SECRET_KEY": productionsecretkey,
"MYSQL_HOST": "production.aws.url",
"MYSQL_DB": "production",
"MYSQL_USERNAME": "ProductionUser",
"MYSQL_PASSWORD": "ProductionPassword"
}
OK, so we’re done, right? Well, technically, this works. If you were to deploy your two functions fetchData and saveData to the staging stage with sls deploy -s staging, the saveData function would now have the environment variables set.
However, if you inspect saveData in the Lambda console, you will see that it also has the API_ENDPOINT and API_SECRET_KEY variables from the fetchData function set.
I don’t know about you, but I prefer that my code only have access to the variables that it absolutely needs and no more. In essence, these environment variables are scoped “globally”.
Instead of keeping all our environment variables in a global scope, let’s scope them to only the functions that need them. We can do this by adding a fetchData and a saveData key to each environment’s JSON object (staging is shown below) and moving the variables under their respective functions.
# staging
{
"fetchData": {
"API_ENDPOINT": "https://www.example.com/api/test",
"API_SECRET_KEY": "stagingsecretkey"
},
"saveData": {
"MYSQL_HOST": "aws.some.rds.url.com",
"MYSQL_DB": "staging",
"MYSQL_USERNAME": "StagingUser",
"MYSQL_PASSWORD": "StagingPassword"
}
}
We also need to change the functions in our serverless.yml file to reflect this change. In addition to loading ssm, we also need to specify the key we want to use. We can do this by adding .nameOfFunction to ssm, or by using :nameOfFunction if we are loading a file locally.
fetchData:
  handler: src/fetchData/handler.js
  environment: ${self:custom.ssm.fetchData, file(./${self:custom.stage}.env.json):fetchData}

saveData:
  handler: src/saveData/handler.js
  environment: ${self:custom.ssm.saveData, file(./${self:custom.stage}.env.json):saveData}
For the fetchData function, we added .fetchData after calling our ssm custom variable. In the case of our dev stage, we add :fetchData after we load the dev.env.json file. This tells Serverless to set only the environment variables that are listed under the fetchData key, and not the saveData key, and vice versa.
Using this approach, fetchData has no knowledge of or access to the variables in saveData.
Recap of Testing, Staging, and Production Configuration
That’s it. We have organized and set the environment variables that we want to use in our Node.js Serverless functions. We’ve covered a lot of ground, so in case you forgot, or you skipped ahead and just want a tl;dr, here’s the rundown.
How to set Serverless Environment Variables for Testing
If you are using something like Jest, simply set your environment variables the way you normally would in your terminal/local environment. Node will pull them in when you run your tests.
I use direnv, so I create a .envrc file in my project directory, and direnv automatically loads the variables that I set in that file into my terminal environment.
# .envrc
export API_ENDPOINT=https://www.example.com/api/test
export API_SECRET_KEY=stagingsecretkey
How to Use Your Local Environment Variables In Your Serverless.yml File
There is an important distinction to be made here. We are talking about using your local ENV variables within your serverless.yml file, NOT setting environment variables to be used by your functions.
To use your own local environment variables in your serverless.yml file, you simply reference them with the ${env:VARIABLE_NAME_HERE} syntax. For example:
custom:
  editor: ${env:EDITOR} # In my case, EDITOR is set to 'vim'
How to Set Serverless Environment Variables For Your Running Functions Locally
For dev, our serverless.yml file takes the name of the stage that we pass in and looks for a file with that name and the extension .env.json. It finds dev.env.json and loads the values of the JSON file into memory. We also specify :FUNCTION_NAME to tell Serverless to get only the environment variables that are set under that function name.
# serverless.yml
fetchData:
  handler: src/fetchData/handler.js
  environment: ${self:custom.ssm.fetchData, file(./${self:custom.stage}.env.json):fetchData}

saveData:
  handler: src/saveData/handler.js
  environment: ${self:custom.ssm.saveData, file(./${self:custom.stage}.env.json):saveData}
# dev.env.json
{
"fetchData": {
"API_ENDPOINT": "https://www.example.com/api/test",
"API_SECRET_KEY": "fakesecret"
},
"saveData": {
"MYSQL_HOST": "local",
"MYSQL_DB": "sample",
"MYSQL_USERNAME": "SampleUser",
"MYSQL_PASSWORD": "SamplePassword"
}
}
The dev.env.json file contains a JSON object whose keys are our Serverless function names. Each function name maps to the keys and values that represent the environment variables we want to set within that function.
How to Set Serverless Environment Variables For Your Staging and Production Functions
For our staging and production environments, we go a step further for security: we do not keep our staging or production environment variables in a file on our local machine or in our Git repository.
Instead, we leverage AWS Secrets Manager to keep our secrets encrypted and access to them highly controlled. Our staging and production environment variables are kept as JSON documents in the exact format of dev.env.json, but created in AWS Secrets Manager.
Then, when we deploy to production with a command like sls deploy -s production, Serverless will first check AWS Secrets Manager for our production secrets before falling back to a local file.
# serverless.yml
custom:
  stage: ${opt:stage, self:provider.stage}
  ssm: ${ssm:/aws/reference/secretsmanager/sampleApplication/${opt:stage, self:provider.stage}/envs~true}

fetchData:
  handler: src/fetchData/handler.js
  environment: ${self:custom.ssm.fetchData, file(./${self:custom.stage}.env.json):fetchData}
That’s all folks!
Hope this guide on Serverless environment variables has been helpful for you. We covered a lot of ground in this guide. We talked about how to set environment variables for our functions in our serverless.yml file. We looked at how to keep our environment variables for different stages organized and separated. We even talked about a secure way to keep our production secrets out of our code.
If you have any questions, please feel free to let me know in the comment section below.
Happy programming!
Hey, I’m Adam. I’m guessing you just read this post from somewhere on the interwebs. Hope you enjoyed it.
You can also follow me on the Twitters at: @DeLongShot