Serverless Environment Variables – A Comprehensive Guide

You build Serverless applications. You know that you should use environment variables instead of putting things like secrets, configurations, and environment-specific logic in your application code, because you follow the 12-Factor App Methodology.

But configuring Serverless environment variables for projects is a pain.

What if it was not a pain? What if you had a straightforward way to configure your Serverless environment variables?

I’ve got you covered:

Serverless Environment Variables

Overview

Serverless Environment Variables on AWS Lambda and Node.js

This guide is going to cover configuring Serverless environment variables for AWS Lambda running the Node.js 10.x runtime.

The process may be different for vendors other than AWS and runtimes other than Node.js, but hopefully, you can apply these same principles to your chosen Serverless vendor and environment.

Environment Variables in serverless.yml vs Environment Variables in your Serverless functions

If Serverless vs. serverless was not confusing enough, our discussion of environment variables is also rife with confusion.

Let me explain.

When working with Serverless (with a capital “S”), there are two kinds of environment variables we need to be concerned about:

  1. The environment variables used by your functions in their runtime. In our case, Node.js.

  2. The environment variables in your local shell (Bash, Zsh, etc.) that are used by Serverless when building, packaging, and deploying your Serverless functions.

Case #2 is straightforward and you will probably encounter it less often than case #1, so we will only touch on it briefly here. Serverless provides a convenient way for you to reference environment variables that are set in your local environment in your serverless.yml file.

custom:
  editor: ${env:EDITOR} # In my case, EDITOR is set to 'vim'

In this example, ${env:EDITOR} will be replaced by whatever the value of EDITOR is in your local shell. In this case, it would equal vim. This approach is useful if you have an environment variable in your own local environment that you want Serverless to interpret when, for example, you want to deploy your code to Lambda.

For the purposes of this guide, however, we will focus on case #1. We want to specify environment variables for our Node.js Lambda functions to use.

Let’s take a look.

Why Use Environment Variables?

First, if you are like me, you hate committing environment-specific code into your application logic. This type of code is part of the configuration of your application, not the application itself. It is best to separate your configuration from your application.

In addition, environment code can change frequently, become stale, or contain information, like credentials, that should not be kept in version control (unlike your application code).

What do you do instead of committing environment-specific code into your application? Use environment variables.

Let’s see an example of how this would work with an AWS Lambda function and Node.js.

AWS Lambda, Node.js, and Environment Variables

Let’s say, we need a Lambda function that fetches some data from a third-party API.

The API has two specific endpoints, one for testing: https://www.example.com/api/test and one for production: https://www.example.com/api/prod

We will use the testing endpoint for local development, for our test suite, and for our staging Lambda functions. The production endpoint will only be used for our production Lambda functions.

To keep environment-specific logic out of our application code, we use an environment variable called API_ENDPOINT to set the API endpoint for our application to use.

To get an environment variable in Node.js, we use process.env.NAME_OF_VARIABLE. So, in our case, to get API_ENDPOINT, we use process.env.API_ENDPOINT.

const axios = require('axios');

module.exports.fetchData = async (event, context) => {
  const response = await axios.get(process.env.API_ENDPOINT);
  return response.data;
};

Above, we have a very basic Lambda function that simply makes a GET request (using the axios JavaScript library) to the API endpoint specified via an environment variable.

But how do we specify the API_ENDPOINT environment variable for our Lambda function, so that Node.js will pick it up in our code?

Lambda has a special section inside AWS where you can specify the environment variables available to that Lambda function.

AWS Lambda Environment Variables Section

Of course, if you need to specify the same environment variables across multiple Lambda functions, entering them via the UI this way is not sustainable or scalable. Every time you need to change an environment variable that is shared across Lambda functions, you would have to change the values for every function. Not very fun.

This is where Serverless environment variables shine.

Serverless will allow you to specify environment variables for a function via the environment key within the function specification.

fetchData:
  handler: ...
  environment:
    API_ENDPOINT: "https://www.example.com/api/prod"

By specifying API_ENDPOINT under the environment key, Serverless will now add API_ENDPOINT to the list of environment variables in the function when we deploy our function to AWS via sls deploy -s production.

Let’s try it and see:

sls deploy -s production

Lambda Environment Variables Section Populated

OK, great. Now, we can access API_ENDPOINT inside the Lambda function.

But wait, what about our test suite, or local development, or staging functions? Where do we specify our test endpoint: https://www.example.com/api/test?

Local, Staging, & Production Serverless Environment Variables

You may be thinking, “OK, I’ll use some kind of conditional, maybe an ‘if’ statement to check which environment is needed”. You have the right idea, but unfortunately YAML, the format used by serverless.yml, does not support conditional statements.

Serverless Variables

To get around the lack of conditionals in YAML, Serverless has its own variables. They can be used with the special syntax ${} anywhere within the serverless.yml file.

Using a Serverless variable, we can leverage the Serverless command line option called stage to switch which environment variable we use depending on the stage that is provided.

You can get the stage provided via the command line options anywhere in serverless.yml using:

${opt:stage}

But what if this option is not set? Luckily, Serverless has thought of this and provides a way to set defaults for your Serverless variables. We can supply a default as the second argument inside ${}. The syntax looks like this:

${customVariable, defaultValue}

In this example, if customVariable is not set, then Serverless will fall back to using defaultValue instead.

In our case, if the stage option is not provided via the command line, we can ensure that a default value is used instead. It makes sense to have our variable default to the provider stage. We can get the provider stage using self:provider.stage.

${opt:stage, self:provider.stage}

Great, now our variable has a default. However, if we have more than one function in our Serverless application, we will likely need to access this variable in multiple places, and the expression is long and verbose.

Luckily, Serverless allows us to set our own custom keys using the custom key as a root element. You can define your own keys under custom, and access these Serverless variables in the file via the self:custom accessor.

custom:
  stage: ${opt:stage, self:provider.stage}

Anywhere in our Serverless configuration files (serverless.yml in our case), we can get the stage via ${self:custom.stage}

So, when we specify the stage via the command line:
serverless invoke local -s staging

${self:custom.stage} will equal staging anywhere in the file.

Serverless Environment Variables Based on Serverless Stage

OK, we now have the ability to read what stage is specified. But how does that help us with Serverless environment variables?

Well, we can switch which Serverless environment variables should be set depending on the stage that is specified.

Let’s go back to our third-party API endpoint example. Let’s say we want to specify the test API endpoint for our staging serverless functions and the production API endpoint only for our production serverless functions.

We can use custom variables to specify which endpoint to use:

custom:
  stage: ${opt:stage, self:provider.stage}
  apiEndpoint:
    production: https://www.example.com/api/prod
    staging: https://www.example.com/api/test

How do we set these custom variables as environment variables in our Serverless functions? We reference them like this:

fetchData:
  handler: ...
  environment:
    API_ENDPOINT: ${self:custom.apiEndpoint.${self:custom.stage}}

Note: I’m using the environment key located under the fetchData function. Serverless also supports global environment variables set under the provider key, and this method works exactly the same there.

Great! Now, when we pass the -s option via the serverless command, it will set API_ENDPOINT as either https://www.example.com/api/prod or https://www.example.com/api/test depending on what we set as the -s option.

As an example:

  serverless deploy -s staging -f fetchData

will deploy the new code to the fetchData Lambda function.

Now, if we go to the fetchData Lambda function under the AWS Lambda Management Console, we can see, in the Environment Variables section, our key API_ENDPOINT with the value https://www.example.com/api/test

Serverless Environment Variables Staging

Here’s what our serverless.yml file looks like now…

custom:
  stage: ${opt:stage, self:provider.stage}
  apiEndpoint:
    production: https://www.example.com/api/prod
    staging: https://www.example.com/api/test

...more code here...

fetchData:
  handler: src/fetchData/handler.fetchData
  environment:
    API_ENDPOINT: ${self:custom.apiEndpoint.${self:custom.stage}}

OK, so now we can specify the correct API endpoint depending on the environment we want to run.

There’s a problem with our file, however. The way that we have written our Serverless environment variables is fine for values that are not sensitive in nature.

But what about data that is sensitive? Let’s say our third-party API requires an API secret key with each request.

The way that serverless.yml is currently written, we’d have to add the API secret key under custom. If you save your serverless.yml file to version control (and you should), your API secret key is now committed to version control history and accessible to anyone who has access to your repository (I’m going to assume Git for version control).

Serverless Environment Variables and Git (version control)

A better approach is to keep our serverless environment variables out of the Git repository. We can do this by specifying the environment variables in a separate file and then loading them into serverless.yml.

Let’s take a look at how this works:

# serverless.yml

custom:
  stage: ${opt:stage, self:provider.stage}

fetchData:
  handler: src/fetchData/handler.fetchData
  environment: ${file(./environment_variables.json):${self:custom.stage}}

OK, a lot has changed here, so let’s break it down.

First, we no longer put our environment variables under the custom key. Instead, we specify them under the environment key of the function.

file() is a function provided by Serverless. You can probably guess that the file() function accepts a path to a file that we want to read into our serverless.yml file. In our case, we are reading a JSON file called environment_variables.json. The fact that we are using a JSON file is important for the next part.

The : symbol tells Serverless that after reading the contents of environment_variables.json we want to retrieve the value under the key name that follows the :.

For example, ${file(./environment_variables.json):API_ENDPOINT} would read the environment_variables.json file and then look for a key called API_ENDPOINT.

Let’s look again at our example:

fetchData:
  handler: src/fetchData/handler.fetchData
  environment: ${file(./environment_variables.json):${self:custom.stage}}

In our example, we first read environment_variables.json, then use the Serverless variable custom.stage to find out what stage we are running in. Depending on the value of custom.stage, we then look for a key in environment_variables.json with that name.

Let’s see what environment_variables.json looks like:

# environment_variables.json

{
  "production": {
    "API_ENDPOINT": "https://www.example.com/api/prod",
    "API_SECRET_KEY": "productionsecretkey"
  },
  "staging": {
    "API_ENDPOINT": "https://www.example.com/api/test",
    "API_SECRET_KEY": "stagingsecretkey"
  }
}

You can see that environment_variables.json is a JSON object whose keys are the names of the stages we are using. The value of each stage name is a JSON object containing key-value pairs of environment variables to be set within that stage.

When Serverless loads environment_variables.json, it will have access to the main JSON object. It will then look for the key that corresponds to ${self:custom.stage} and load the JSON object associated with that key. Any keys and values in that object will become the environment variables set for our fetchData function.
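
To make this concrete, here is a minimal sketch of what the fetchData handler could look like once both values are available. The x-api-key header name is an assumption about how the third-party API expects its secret key, not something from the original example, so adjust it to whatever your API actually requires.

// Sketch of a handler using both environment variables.
// The 'x-api-key' header name is a hypothetical choice.
const axios = require('axios');

module.exports.fetchData = async (event, context) => {
  const response = await axios.get(process.env.API_ENDPOINT, {
    headers: { 'x-api-key': process.env.API_SECRET_KEY },
  });

  return {
    statusCode: 200,
    body: JSON.stringify(response.data),
  };
};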

One more thing. Obviously, the whole idea behind separating our environment variables out into their own file is to protect potentially sensitive data from being added to the repository.

Therefore, we need to add environment_variables.json to our .gitignore file.

environment_variables.json

Great! Now our environment variables and sensitive secrets are out of our application repository.

OK, so we have a great way to set our environment variables for our staging and production environments. But…

Tests and Serverless Environment Variables

What happens when we try to run our tests?

You are writing tests for your functions…right? Right??

In our example, we are using Node.js for our Lambda functions. If you have written unit tests for your code with something like Jest, then right now, anywhere we read an environment variable with process.env, the value will be undefined. Not good.

Herein lies our next problem. While the serverless.yml file allows us to specify the environment variables we want to set in AWS, it does not set them when we try to run the JavaScript of our Serverless functions in our unit tests.

The way to set environment variables for your JavaScript tests is the same way we set local environment variables for any local commands.

There are many ways to set environment variables for command-line programs. My preferred way right now is to use direnv. direnv lets us create a file called .envrc in the root directory of our project and automatically loads any environment variables exported in that file whenever we enter the directory.

Here’s an example of an .envrc file for our case:

export API_ENDPOINT=https://www.example.com/api/test
export API_SECRET_KEY=stagingsecretkey

When we run npm run test, any process.env in our tests will be populated with API_ENDPOINT and API_SECRET_KEY.
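
As a quick sanity check, a small Jest test (in a hypothetical file such as __tests__/env.test.js) can confirm that the variables are actually visible to Node:

// __tests__/env.test.js (hypothetical file name)
// Confirms that direnv populated the environment before the test run.
test('test environment variables are loaded', () => {
  expect(process.env.API_ENDPOINT).toBeDefined();
  expect(process.env.API_SECRET_KEY).toBeDefined();
});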

Different Serverless Environment Variables for Different Serverless Environments

I don’t know about you, but in this age of web security, we need to hold ourselves to a higher standard than simply keeping all of our application secrets together in the same file (and unencrypted, no less).

Instead of putting all of our secrets and environment variables in one file, like environment_variables.json, I propose that we keep a separate file for each environment and load the specific file when we need to use it.

For example, say we plan to have three environments: dev for running our functions locally with sls invoke local, and staging and production in Lambda.

Let’s create a file for each of our environments in the root of our project. Our development environment variables file will be dev.env.json, our staging file staging.env.json, and our production file production.env.json.

NOTE: In this case, the “env” is added to the file extension to make it more obvious in the name that these are environment configuration files, but this is only my convention and is completely optional.

We have created separate files for each of our environments, so let’s move the production, staging, and dev environment variables out of environment_variables.json and into their respective JSON files, so that they look like this:

# dev.env.json
{
  "API_ENDPOINT": "https://www.example.com/api/fake",
  "API_SECRET_KEY": "fakesecretkey"
}
# staging.env.json
{
  "API_ENDPOINT": "https://www.example.com/api/test",
  "API_SECRET_KEY": "stagingsecretkey"
}
# production.env.json
{
  "API_ENDPOINT": "https://www.example.com/api/prod",
  "API_SECRET_KEY": "productionsecretkey"
}

You can also remove environment_variables.json and remove it from .gitignore. We no longer need it.

We’re going to change our fetchData function in our serverless.yml file to reference the file of the stage we specify when we choose to run or deploy our functions.

fetchData:
  handler: src/fetchData/handler.fetchData
  environment: ${file(./${self:custom.stage}.env.json)}

So now, when we deploy fetchData to AWS for the staging environment with sls deploy -s staging -f fetchData, Serverless will look for the staging.env.json file in the root of our project directory and put the keys and values in that file into the environment key of our fetchData function.

Production Serverless Environment Variables and Secrets

I don’t know about you, but even keeping my staging and production environment variables and secrets in a file on my local machine is not secure.

We need to take our design one step further. Instead of placing them in an unencrypted file on our local machine, we can use AWS Secrets Manager to manage our staging and production environment variables and secrets.

AWS Secrets Manager and Serverless

I don’t want to do a whole tutorial on AWS Secrets Manager (SM), as this guide is not about how to use AWS, but in order to understand the next part of our code, you need to understand a little bit about AWS SM and how it works.

AWS SM is a simple key-value store that charges based on the number of “secrets” or keys that you store and the number of requests for those keys.

The service is not inexpensive: if we stored every sensitive environment variable as its own secret, it would get expensive quickly.

However, instead of storing each environment variable as a separate “secret” in AWS SM, we can take advantage of the fact that AWS SM can accept JSON objects as values.

And luckily for us, we already have all of our staging and production environment variables separated into different files as JSON objects. Each of those files contains an already formatted JSON object.

So, how do we save our JSON objects for each environment to AWS SM, so Serverless can easily retrieve them when we deploy to Lambda?

AWS SM allows us to specify a path in SM where we would like to save the key. We can use this option to specify the environment in the path.

Let’s try it. Let’s say we want to save the JSON objects for staging and production to keys called envs. We obviously cannot save them to the same path or SM would overwrite one with the other.

Instead, let’s include the environment names: staging and production in the paths.

/sampleApplication/staging/envs
/sampleApplication/production/envs

Note: The ~true you will see appended in serverless.yml later is not part of the secret name itself. It is a Serverless Framework option that tells it the value is encrypted and should be decrypted when the variable is resolved.

Save each JSON object to its respective path. We now have a place to securely store our staging and production environment variables.
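
One way to do this is with the AWS CLI, assuming it is installed and configured with permission to write secrets; the Secrets Manager console works just as well:

# Create one secret per environment, using each env file as the secret value.
aws secretsmanager create-secret \
  --name sampleApplication/staging/envs \
  --secret-string file://staging.env.json

aws secretsmanager create-secret \
  --name sampleApplication/production/envs \
  --secret-string file://production.env.json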

Let’s look at how we access them when we want to deploy our functions to Lambda.

Accessing Encrypted Environment Variables in Serverless

AWS Systems Manager Parameter Store (and by extension, Serverless) provides a nice interface for accessing encrypted secrets in AWS Secrets Manager from our Serverless templates. We can access our environment variables/secrets by using the ssm variable syntax in Serverless and passing in a path to our key under the special /aws/reference/secretsmanager/ prefix. Here’s an example for our production secrets:

${ssm:/aws/reference/secretsmanager/sampleApplication/production/envs~true}

Great, now we can tell Serverless where to load our environment variables for production. How do we handle staging and dev? Let’s modify this line to handle all stages.

${ssm:/aws/reference/secretsmanager/sampleApplication/${opt:stage, self:provider.stage}/envs~true}

With this change, Serverless will read the stage from the command line and add it to the path. If no stage option is provided via the command line, it will use the default provided by the provider.

If we wanted to deploy to our production environment with sls deploy -s production, ssm would look for a key at: /aws/reference/secretsmanager/sampleApplication/production/envs.

The best part of this change is that it scales. We can add as many different environments as we want without having to change the serverless.yml file. With this setup, we can easily create another environment called admin, create a new JSON object with new credentials, and save it to SM at sampleApplication/admin/envs (referenced as /aws/reference/secretsmanager/sampleApplication/admin/envs).

We’ll never have to change serverless.yml. All we have to do is specify the stage via the command line with -s admin.

Let’s see how we can use this line now to declare the environment variables for our fetchData function:

custom:
  ssm: ${ssm:/aws/reference/secretsmanager/sampleApplication/${opt:stage, self:provider.stage}/envs~true}

fetchData:
  handler: src/fetchData/handler.fetchData
  environment: ${self:custom.ssm, file(./${self:custom.stage}.env.json)}

There’s a lot going on with this change, so let’s break it down.

custom:
  ssm: ${ssm:/aws/reference/secretsmanager/sampleApplication/${opt:stage, self:provider.stage}/envs~true}

Why are we setting the value returned by ssm to a custom variable? Well, if you end up having more than one function that needs to check ssm for environment variables or secrets, using a custom variable will significantly ‘DRY’ up our code by not having to repeat that long expression every time we want to grab a variable or secret out of ssm.

Now, we come to the second part:

fetchData:
  handler: src/fetchData/handler.fetchData
  environment: ${self:custom.ssm, file(./${self:custom.stage}.env.json)}

There are two important things happening with the environment key.

First, we are getting the custom variable we set with self:custom.ssm.

Second, we can use variable defaults to our advantage. When setting the environment variables for the fetchData function, we first tell Serverless to check ssm for the key for our environment. If the stage we are deploying is production or staging, Serverless will find the secret at that path and use its contents as the environment variables.

But, if we invoke the function locally with the dev stage, then Serverless will not be able to locate the key through ssm and it will instead fall back to looking for a file on our local machine at the location ./dev.env.json.

Using this approach, we can keep our production and staging secrets encrypted and off our local machine, while still allowing us to invoke the function locally with a different configuration.

This approach allows us to experiment with different environments and configurations locally. If we wanted to try different configurations locally with sls invoke local all we have to do is create a new environment configuration file, admin.env.json for example, and then specify the stage when invoking locally, sls invoke local -s admin -f fetchData. We never have to change our serverless.yml file or commit these configurations to AWS SM.

Sharing Serverless Environment Variables

Now is a good time to pause momentarily and talk about ways to share environment variables among your development team.

Most of us work with other developers on our applications, and even if you don’t, you may have a Serverless application that you one day hope will grow big enough to require more than just one developer.

If you are working with multiple developers, then you know that at some point, you’ll have to share your Serverless environment variables with other members of your team.

Sharing production environment variables is the most critical and also the most sensitive, so we’ll talk about that first. In our example, we are using AWS Secrets Manager to save our production environment variables.

Using Secrets Manager, we can manage access to our production environment variables like we manage access to any other AWS resource, using IAM. If you are already familiar with AWS IAM, then you will be familiar with how to handle the permissions for Secrets Manager. For example, using IAM, we give access only to the devs on our team that need to be able to deploy to production.

If you are using something like a build server, you could create an IAM role for the build server that allows it to access the production secrets.

The point is that, using AWS Secrets Manager, access to production environment variables can be highly controlled and monitored, so they are never shared insecurely.
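
For example, a minimal IAM policy for reading the production secret might look something like the sketch below; the ARN pattern is illustrative, so adjust the region, account id, and secret name to your own setup.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadProductionEnvs",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:sampleApplication/production/envs-*"
    }
  ]
}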

Now, whether you choose the same level of security access for your other non-production environments is up to you and your team. In our example, I’m going to say that our dev environment does not require the same level of security, so we’ll use a different method for sharing environment variables among a development team.

Remember from our example, that in order to invoke our functions locally with the dev stage, we created a local file called dev.env.json. In this file, we set our environment variables that we wanted to use in our Lambda functions locally.

One big question we haven’t asked ourselves is “should we commit this file to version control?” Thoughts on this topic have changed over the years. In the past, teams seemed to be OK with committing non-production environment config settings in their version control.

Personally, I don’t believe in committing this dev.env.json file to version control, and the trend in software development seems to have gone this way as well. We’ll add it to our .gitignore file.

# .gitignore

dev.env.json

What do we do instead? How do we share the dev environment variables with the other developers on our team?

Well, one convention is to create an “example” file that has the same variable names but with fake or example values instead of real ones. Let’s take a look at an example:

# dev.env.json.example
{
  "API_ENDPOINT": "FAKE_API_ENDPOINT",
  "API_SECRET_KEY": "FAKE_API_SECRET_KEY"
}

We can now commit this file, and not a real dev.env.json file, to the Git repository. When another developer clones the repository, they can see the format and keys required for the application, copy the file, and fill in their own values.

In our case, we also have the environment variables that need to be populated when running our tests. Remember, in our case, we are using direnv to populate our terminal ENV with the correct variables by specifying the keys and values in a .envrc file.

Because .envrc may contain sensitive information for our own local computer, we obviously don’t want to commit this file to the repository or have to share it with the other developers on our team.

We can use the same strategy with .envrc as we did with dev.env.json. We can create an example file called .envrc.example and populate it with the keys that are required, replacing the values with fake ones.

# .envrc.example

export API_ENDPOINT=TEST_API_ENDPOINT
export API_SECRET_KEY=TEST_SECRET

We can add .envrc to our .gitignore file.

# .gitignore

dev.env.json
.envrc

And then commit our .envrc.example file instead.

Using this approach, we can avoid committing any Serverless environment variables to our git repository, while still giving the other developers on our team an idea of the kind of variables that need to be set in our different environments.

OK, now that we have taken a minute to talk about sharing Serverless environment variables, let’s continue by narrowing the scope of our environment variable files even more.

Function-specific Serverless Environment Variables

The setup we have created works well for our single-function Serverless application. But applications that only use one function are not very useful.


Let’s expand our application to include another function which takes some data and saves it to an RDS MySQL database/instance. Let’s call the function saveData.

Obviously, in order to connect to the database, saveData is going to need the database credentials.

Again, we can set these credentials as environment variables in our Node.js code, keeping the credentials out of our application code.

Let’s create the saveData function:

const mysql = require('mysql');

const connection = mysql.createConnection({
  host: process.env.MYSQL_HOST,
  user: process.env.MYSQL_USERNAME,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DB
});

module.exports.saveData = async (event, context) => {
  connection.query(…)
}

Here, we have environment variables for the host, username, password, etc. for our MySQL database. Using the same approach we outlined earlier, we need to set these variables in AWS Secrets Manager for production and staging or in our dev.env.json file for our development environment.

Let’s change the JSON objects for all of our environments to include our MySQL environment variables.

# dev.env.json
{
  "API_ENDPOINT": "https://www.example.com/api/test",
  "API_SECRET_KEY": "fakesecret",
  "MYSQL_HOST": "localhost",
  "MYSQL_DB": "sample",
  "MYSQL_USERNAME": "SampleUser",
  "MYSQL_PASSWORD": "SamplePassword"
}

# staging.env.json
{
  "API_ENDPOINT": "https://www.example.com/api/test",
  "API_SECRET_KEY": "stagingsecretkey",
  "MYSQL_HOST": "staging.aws.url",
  "MYSQL_DB": "staging",
  "MYSQL_USERNAME": "StagingUser",
  "MYSQL_PASSWORD": "StagingPassword"
}

# production.env.json
{
  "API_ENDPOINT": "https://www.example.com/api/prod",
  "API_SECRET_KEY": "productionsecretkey",
  "MYSQL_HOST": "production.aws.url",
  "MYSQL_DB": "production",
  "MYSQL_USERNAME": "ProductionUser",
  "MYSQL_PASSWORD": "ProductionPassword"
}

OK, so we’re done right? Well, technically, this works. If you were to deploy your two functions fetchData and saveData to the staging stage with sls deploy -s staging, the saveData function would now have the environment variables set.

Staging Environment Variables Lambda

However, as you can see from the screenshot above, saveData also has the API_ENDPOINT and API_SECRET_KEY variables from the fetchData function set.

I don’t know about you, but I prefer that my code only have access to the variables that it absolutely needs and no more. In essence, these environment variables are scoped “globally”.

Instead of keeping all our environment variables in a global scope, let’s scope them only to the functions that need them. We can do this by adding fetchData and saveData keys to each environment’s JSON object and moving the variables under their respective functions.

# staging.env.json
{
  "fetchData": {
    "API_ENDPOINT": "https://www.example.com/api/test",
    "API_SECRET_KEY": "stagingsecretkey"
  },
  "saveData": {
    "MYSQL_HOST": "aws.some.rds.url.com",
    "MYSQL_DB": "staging",
    "MYSQL_USERNAME": "StagingUser",
    "MYSQL_PASSWORD": "StagingPassword"
  }
}

We also need to change the functions in our serverless.yml file to reflect this change. In addition to loading ssm, we also need to specify the key we want to use. We can do this by adding .nameOfFunction to the ssm custom variable, or by appending :nameOfFunction if we are loading a file locally.

fetchData:
  handler: src/fetchData/handler.fetchData
  environment: ${self:custom.ssm.fetchData, file(./${self:custom.stage}.env.json):fetchData}

saveData:
  handler: src/saveData/handler.saveData
  environment: ${self:custom.ssm.saveData, file(./${self:custom.stage}.env.json):saveData}

For the fetchData function, we added .fetchData after our ssm custom variable. In the case of our dev stage, we append :fetchData after loading the dev.env.json file. This tells Serverless to set only the environment variables listed under the fetchData key and not the saveData key, and vice versa.

Using this approach, fetchData has no knowledge or access to the variables in saveData.

Recap of Testing, Staging, and Production Configuration

That’s it. We have organized and set the environment variables that we want to use in our Node.js Serverless functions. We’ve covered a lot of ground, so in case you forgot, or you skipped ahead and just want a tl;dr, here’s the rundown.

How to set Serverless Environment Variables for Testing

If you are using something like Jest, simply set your environment variables the way you normally would in your terminal/local environment. Node will pull them in when you run your tests.

I use direnv, so I create a .envrc file in my project directory and it automatically loads the variables that I set in that file into my terminal ENV.

# .envrc
export API_ENDPOINT=https://www.example.com/api/test
export API_SECRET_KEY=stagingsecretkey

How to Use Your Local Environment Variables In Your Serverless.yml File

There is an important distinction to be made here. We are talking about local ENV variables that you want to use within your serverless.yml file, NOT about setting environment variables to be used by your functions.

To use your own local environment variables in your serverless.yml file, you simply reference them with the ${env:VARIABLE_NAME_HERE} syntax. For example,

custom:
  editor: ${env:EDITOR} # In my case, EDITOR is set to 'vim'

How to Set Serverless Environment Variables For Running Your Functions Locally

For dev, our serverless.yml file takes the name of the stage that we have passed in and looks for a file with that name and the extension .env.json

It looks for dev.env.json and loads the values of the JSON file into memory. We also specify :FUNCTION_NAME to tell Serverless to get only the environment variables that are set under that function name.

# serverless.yml

fetchData:
  handler: src/fetchData/handler.fetchData
  environment: ${self:custom.ssm.fetchData, file(./${self:custom.stage}.env.json):fetchData}

saveData:
  handler: src/saveData/handler.saveData
  environment: ${self:custom.ssm.saveData, file(./${self:custom.stage}.env.json):saveData}

# dev.env.json
{
  "fetchData": {
    "API_ENDPOINT": "https://www.example.com/api/test",
    "API_SECRET_KEY": "fakesecret"
  },
  "saveData": {
    "MYSQL_HOST": "local",
    "MYSQL_DB": "sample",
    "MYSQL_USERNAME": "SampleUser",
    "MYSQL_PASSWORD": "SamplePassword"
  }
}

The dev.env.json file contains a JSON object whose keys are our Serverless function names. Each function name maps to the keys and values of the environment variables we want to set for that function.

How to Set Serverless Environment Variables For Your Staging and Production Functions

For our staging and production environments, we go a step further for security and do not keep the environment variables in a file on our local machine or in our Git repository.

Instead, we leverage AWS Secrets Manager to keep our secrets encrypted and access to them highly controlled. In this case, our production environment variables are kept as a JSON document in the exact format of dev.env.json, but created in AWS Secrets Manager.

Then when we deploy to production, with a command like sls deploy -s production, Serverless will first check AWS Secrets Manager for our production secrets before falling back to a local file.

# serverless.yml

custom:
  ssm: ${ssm:/aws/reference/secretsmanager/sampleApplication/${opt:stage, self:provider.stage}/envs~true}

fetchData:
  handler: src/fetchData/handler.fetchData
  environment: ${self:custom.ssm.fetchData, file(./${self:custom.stage}.env.json):fetchData}

That’s all folks!

Hope this guide on Serverless environment variables has been helpful for you. We covered a lot of ground. We talked about how to set environment variables for our functions in our serverless.yml file. We looked at how to keep our environment variables for different stages organized and separated. We even talked about a secure way to keep our production secrets out of our code.

If you have any questions, please feel free to let me know in the comment section below.

Happy programming!


Freelance Elixir & Ruby on Rails Developer. Hey, I’m Adam. I’m guessing you just read this post from somewhere on the interwebs. Hope you enjoyed it. When I’m not writing these blog posts, I’m a freelance Serverless and Ruby developer working on Calculate, a product which makes it easier for you to create software estimates for your projects. Feel free to leave a comment if you have any questions.

You can also follow me on the Twitters at: @DeLongShot

Updating Dynamic Maps in Elixir

Maps are used extensively in Elixir. Updating nested maps in Elixir is straightforward if you already know the structure of your map beforehand (in this case, you may want to use a Struct instead).

But updating dynamic maps in Elixir, especially if they are nested, can be a bit difficult. EDIT: After publishing this post, I realized that I had forgotten about the incredibly helpful Map.update/4, which makes updating maps with one layer of nesting easy.

However, it does not work if your map is nested several layers deep, in which case the following trick still applies. I have updated the post accordingly.

I’ll show you a little-known and lightly documented way of updating a dynamic nested map in Elixir.

Elixir Dynamic Maps

Dynamic Nested Maps in Elixir

Let’s say you have a nested map:

inventory = %{
  cameras: %{}
}

Because the “cameras” key already exists in the inventory map, updating it is straightforward in Elixir.


get_and_update_in(inventory.cameras, fn(cameras) ->
  {cameras, put_in(cameras, [10_001], "Nikon D90")}
end)
# => {%{}, %{cameras: %{10_001 => "Nikon D90"}}}

or 

Map.update(inventory, :cameras, %{10_001 => "Nikon D90"}, &Map.put(&1, 10_001, "Nikon D90"))
# => %{cameras: %{10_001 => "Nikon D90"}}

But what if we want to add a key that doesn’t exist?

Let’s say we want to add our first lens to our inventory using the :lenses key under inventory.

iex(1)> put_in(inventory, [:lenses, 10_002], "Nikon 50mm F1.4")
** (ArgumentError) could not put/update key 10002 on a nil value

Uh-oh. The :lenses key doesn’t exist in inventory yet, so put_in/3 will not allow us to add to it.

Now, there’s one way we could do it if we know that :lenses doesn’t exist yet: by using Map.merge/2.

Map.merge(inventory, %{lenses: %{10_002 => "Nikon 50mm F1.4"}})
# => %{
#   cameras: %{
#     10_001 => "Nikon D90"
#   },
#   lenses: %{
#     10_002 => "Nikon 50mm F1.4"
#   }
# }

However, this solution won’t work if :lenses already exists and has keys and values present. Let’s say we already have a lens and want to add another one to the collection. The merge will actually overwrite the existing lenses instead of adding to them.

inventory = %{
  cameras: %{
    10_001 => "Nikon D90"
  }, 
  lenses: %{
    10_002 => "Nikon 50mm F1.4"
  }
}

Map.merge(inventory, %{lenses: %{10_003 => "Nikon 85mm F1.8"}})

# => %{
#   cameras: %{
#     10_001 => "Nikon D90"
#   },
#   lenses: %{
#     10_003 => "Nikon 85mm F1.8"
#   }
# }

Uh-oh. Not what we wanted either. OK, well, a long-winded solution would be to check if the :lenses key exists in inventory first, then use a conditional to either update the lenses or create a new map…

Ugh, that’s too much work.

Map.update/4 with Nested Elixir Maps

There’s a convenient way to update a nested map that is one layer deep, and that is the Map.update/4 function we saw earlier.

inventory = %{
  cameras: %{
    10_001 => "Nikon D90"
  }, 
  lenses: %{
    10_002 => "Nikon 50mm F1.4"
  }
}

Map.update(inventory, :lenses, %{10_003 => "Nikon 85mm F1.8"}, &Map.put(&1, 10_003, "Nikon 85mm F1.8"))
# => %{
#   cameras: %{
#     10_001 => "Nikon D90"
#   },
#   lenses: %{
#     10_002 => "Nikon 50mm F1.4",
#     10_003 => "Nikon 85mm F1.8"
#   }
# }

The key part of this function is the third argument. It is the default value that will be used if the key in the second argument cannot be found in the inventory map provided.

But, if a key and value are found, the value will be passed to the function in the 4th argument, in which case, we simply add the new lens to the collection using Map.put/3.
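
To see the default (third) argument in action, here is a quick sketch where the :lenses key is missing, so the default value is used as-is:

# :lenses is missing, so the default map becomes its value
Map.update(%{cameras: %{}}, :lenses, %{10_002 => "Nikon 50mm F1.4"}, &Map.put(&1, 10_002, "Nikon 50mm F1.4"))
# => %{cameras: %{}, lenses: %{10_002 => "Nikon 50mm F1.4"}}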

However, as I mentioned before, Map.update/4 will not work if you are trying to update a map which is more than one layer deep.

Let’s rework our example to use deeply nested maps. Let’s say our camera inventory is divided up by brand, with each brand represented as a map.

inventory = %{
  cameras: %{
    "Nikon" => %{
      10_001 => "Nikon D90"
    },
    "Canon" => %{}
  }
}

Let’s say we want to add a Canon camera to the inventory.

inventory = %{
  cameras: %{
    "Nikon" => %{
      10_001 => "Nikon D90"
    },
    "Canon" => %{}
  }
}

Map.update(inventory, "Canon", %{10_004 => "Canon 50D"}, &Map.put(&1, 10_004, "Canon 50D"))

# => %{
#   "Canon" => %{
#     10_004 => "Canon 50D"
#   },
#   cameras: %{
#     "Nikon" => %{
#       10_001 => "Nikon D90"
#     },
#     "Canon" => %{}
#   }
# }

Ugh. Not what we wanted. OK, so how do we update maps which are nested more than one layer deep?

There’s actually a rarely talked about way to do this.

The Access.key/2 function in Elixir with Deeply Nested Maps

We can actually use a combination of the get_and_update_in/3 and Access.key/2 functions.

This solution works whether the "Canon" map already exists with keys and values present or does not exist at all.

inventory = %{
  cameras: %{
    "Nikon" => %{
       10_001 => "Nikon D90"
    }
  }
}

get_and_update_in(inventory, [Access.key(:cameras, %{}), Access.key("Canon", %{})], fn(canons) ->
  {canons, put_in(canons, [10_004], "Canon 50D")}
end)

# updated map (the second element of the returned tuple):
# %{
#   cameras: %{
#     "Nikon" => %{
#       10_001 => "Nikon D90"
#     },
#     "Canon" => %{
#       10_004 => "Canon 50D"
#     }
#   }
# }

inventory = %{
  cameras: %{
    "Nikon" => %{
       10_001 => "Nikon D90"
    },
    "Canon" => %{
      10_004 => "Canon 50D"
    }
  }
}

get_and_update_in(inventory, [Access.key(:cameras, %{}), Access.key("Canon", %{})], fn(canons) ->
  {canons, put_in(canons, [10_005], "Canon 80D")}
end)

# updated map (the second element of the returned tuple):
# %{
#   cameras: %{
#     "Nikon" => %{
#       10_001 => "Nikon D90"
#     },
#     "Canon" => %{
#       10_004 => "Canon 50D",
#       10_005 => "Canon 80D"
#     }
#   }
# }

This solution works because the get_and_update_in/3 function can evaluate functions passed in the list of keys in its second argument.

It can evaluate functions as well as traditional “key” values like atoms and strings.

Here’s the kicker: Access.key/2 actually returns a function. So we are passing functions in the list we give as the second parameter to get_and_update_in/3: [Access.key(:cameras, %{}), Access.key("Canon", %{})].

As you might have guessed, the second (optional) argument to Access.key/2 is the default value that should be used if the key is not found.

Of course, if the key is found, it ignores the default value. But this allows us to dynamically update a map in Elixir no matter whether the key exists or not.
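
If you only need to write a value (and don’t care about the previous one), a shorter variant of the same trick is to pass the Access.key/2 path to put_in/3. This is a sketch assuming the same inventory map as above, with only the "Nikon" brand present:

# Works whether or not the "Canon" key already exists:
put_in(inventory, [Access.key(:cameras, %{}), Access.key("Canon", %{}), 10_004], "Canon 50D")
# => the returned map now has "Canon" => %{10_004 => "Canon 50D"} under :cameras,
#    alongside the existing "Nikon" entry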



Ecto Stale Entry Error – Solving This Cryptic Elixir Error

The Ecto stale entry error (Ecto.StaleEntryError) is one of the more cryptic errors you’ll see in Elixir.

Elixir is pretty good about providing helpful error messages and information, but this one left me scratching my head.

Until I figured out one simple cause of the Ecto stale entry error.

Ecto Stale Entry Error

Ecto Update

If you are using Ecto’s update/2 callback, you may have come across this error.

To get to the point, you may be seeing the Ecto.StaleEntryError because you are simply trying to update a record that does not exist.

I said it was simple right?

Let’s look at an example:

defmodule Example.Repo do
  use Ecto.Repo, otp_app: :example
end

defmodule Example.User do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field(:deactivated, :boolean)
  end

  def changeset(user, params = %{}) do
    cast(user, params, [:deactivated])
  end

  def deactivate(user_id) do
    %__MODULE__{id: user_id}
    |> changeset(%{deactivated: true})
    |> Example.Repo.update() # StaleEntryError here
  end
end

defmodule Example.Deactivate do
  alias Example.User

  def deactivate(user_id) do
    User.deactivate(user_id)
    # do some other clean-up
  end
end

In this example, we have a User module, and we want to deactivate an existing user in the database.

Now, you may notice that I’ve cheated a bit.

Instead of querying the database first to see if the user exists, I’m simply creating a User struct with the id of the user that needs to be deactivated.

def deactivate(user_id) do
  %__MODULE__{id: user_id}
  |> changeset(%{deactivated: true})
  |> Example.Repo.update() # StaleEntryError here
end

The benefit of this approach is that it does not require an extra database call to find the row in the database.

The downside, however, is that if I pass a user_id for a row that does not exist, update will raise Ecto.StaleEntryError.

How to Prevent Ecto’s Ecto.StaleEntryError

You can easily prevent this error from happening in several ways; which way you choose is completely up to you.

You can wrap the Example.Deactivate.deactivate/1 call in an OTP-compliant process and simply “let it crash”, then let a supervisor restart the process.

If you want to ensure that the record really is there, you can query for the record first, using something like Repo.get/3, then pass the record to update, as the documentation for update shows.
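
Here is a rough sketch of what that second option could look like for our deactivate/1 function, assuming it lives in the same Example.User module as before:

def deactivate(user_id) do
  case Example.Repo.get(__MODULE__, user_id) do
    # The row does not exist, so return an error instead of raising
    nil ->
      {:error, :not_found}

    # The row exists, so update it as usual
    user ->
      user
      |> changeset(%{deactivated: true})
      |> Example.Repo.update()
  end
end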

Finally, you could wrap the function in a try/rescue block, but I would not recommend this one, as the other methods allow for more intentional programming and leverage some of the strengths of Elixir better.

Hopefully, these options will help you save a little time when trying to solve the Ecto stale entry error.

This post is part of my series of Elixir Tutorials



Performing Bulk Updates with Ecto And Elixir

Updating one database row in Ecto is straightforward. But figuring out how to bulk update rows using Ecto without updating one record at a time?

Now that is a pain.

What if there were a way to bulk update rows of data in your database using only ONE update call?

Well, there is, and in this post, I’ll show you how you can do it using Ecto without having to write a single line of SQL.

bulk update ecto

Ecto update/2 And Changesets

You can, of course, simply create changesets or structs for each row of data, group them together in something like a list, and then pass them one-by-one to Repo.update/2 using Enum.each/2.

defmodule Example.User do
  use Ecto.Schema
  import Ecto.Changeset

  def changeset(user, params) do
    cast(user, params, [:name, :deactivated])
  end

  schema "users" do
    field :name, :string
    field :deactivated, :boolean
    field :deactivated_on, :date

    timestamps()
  end
end

iex(1)> Enum.each(users, fn(user) ->
...(1)> changeset = Example.User.changeset(user, %{deactivated: true})
...(1)> Example.Repo.update(changeset)
...(1)> end)

...[debug] QUERY OK db=33.5ms
UPDATE "users" SET "deactivated" = $1, \
"updated_at" = $2 WHERE "id" = $3 [true, \
{{2018, 7, 10}, {1, 20, 19, 922659}}, 10]

...[debug] QUERY OK db=4.9ms
UPDATE "users" SET "deactivated" = $1, \
"updated_at" = $2 WHERE "id" = $3 [true, \
{{2018, 7, 10}, {1, 20, 19, 957723}}, 11]

...[debug] QUERY OK db=1.8ms
UPDATE "users" SET "deactivated" = $1, \
"updated_at" = $2 WHERE "id" = $3 [true, \
{{2018, 7, 10}, {1, 20, 19, 963006}}, 12]

...[debug] QUERY OK db=3.8ms
UPDATE "users" SET "deactivated" = $1, \
"updated_at" = $2 WHERE "id" = $3 [true, \
{{2018, 7, 10}, {1, 20, 19, 965138}}, 13]

...[debug] QUERY OK db=8.6ms queue=0.1ms
UPDATE "users" SET "deactivated" = $1, \
"updated_at" = $2 WHERE "id" = $3 [true, \
{{2018, 7, 10}, {1, 20, 19, 969666}}, 14]
:ok

Ecto and update_all

Obviously, performing individual updates can be time-prohibitive if you are performing a lot of updates. Ecto knows this, which is why it also provides the Ecto.Repo.update_all/3 function.

update_all/3 will do one SQL UPDATE call covering all of the rows and columns that you provide to it.

Here’s the catch:

The update_all/3 function works a bit differently than update/2.

“update_all” requires you to pass in a “queryable”. That is, something that implements the Ecto.Queryable protocol.

Update_all and Schemas

Adding use Ecto.Schema and the schema/2 macro to a module automatically makes that module queryable, like the User example we saw above:

defmodule Example.User do
  use Ecto.Schema 
  import Ecto.Changeset

  def changeset(user, params) do
    cast(user, params, [:name, :deactivated])
  end

  # Queryable
  schema "users" do
    field :name, :string
    field :deactivated, :boolean
    field :deactivated_on, :date

    timestamps()
  end
end

Because Example.User uses the schema/2 macro, we can now pass it to update_all/3 as the first argument.

Example.Repo.update_all(Example.User,
  set: [deactivated_on: Date.utc_today()]
)

...[debug] QUERY OK source="users" db=21.0ms queue=0.1ms
UPDATE "users" AS u0 SET "deactivated_on" = $1 [{2018, 7, 10}]
{10, nil}

As you can see, Ecto does the update as one single UPDATE call. However, this example updates every single one of our Users.

What if we want to bulk update a sub-section of our Users?

Ecto and Queries

The other option to use with update_all/3 is to build a query directly using Ecto.Query and the from/2 function. Ecto calls these keyword-based queries.

import Ecto.Query

query = from(u in User, where: u.deactivated == false)

Example.Repo.update_all(query,
  set: [
    deactivated: true,
    deactivated_on: Date.utc_today()
  ]
)

However, there are a couple of issues with this approach.

First, update_all does not allow all query options the way other query functions, such as Repo.all/1, do.

For example, you cannot use joins in your query or other modifiers like order_by. Ecto will throw an error if you try.

Second, there’s the issue of delay between queries.

Let’s take our example above. Let’s say you wanted to query a bunch of Users in your database, take some action on them (like send an email), and then update them in the database. Now, let’s say this could be 100s or 1000s of users.

You can build a query to grab the users from the database as structs, which you can then use to do whatever you need to do. You could then take that same query you built before and pass it into the update_all function (provided the query doesn’t use any of the options noted above).

Here’s an example:

import Ecto.Query

query = from u in User, where: u.deactivated == false
users = Example.Repo.all(query)

do_something(users)

Example.Repo.update_all(query,
  set: [
    deactivated: true, 
    deactivated_on: Date.utc_today()
  ]
)

The problem with this approach: in the time it takes us to query the database, take some action, and update the database, another record matching our query could have been inserted. It would then be updated by update_all without any action having been taken on it.

Use IDs for updating

If you wanted to use a complicated query for your updates, you can try this approach. Build a complicated query and retrieve the records you want using Ecto.

import Ecto.Query

query =
  from(u in User,
    join: p in Post,
    on: p.author_id == u.id,
    where: u.deactivated == false
  )

users = Example.Repo.all(query)

do_something(users)

Now, instead of passing the same query we used before (and we can’t since it contains a join clause), we create a list of the user ids that we retrieved.

import Ecto.Query

query =
  from(u in User,
    join: p in Post,
    on: p.author_id == u.id,
    where: u.deactivated == false
  )

users = Example.Repo.all(query)

do_something(users)

users_id = Enum.map(users, &(&1.id))

new_query = from(u in User, where: u.id in ^users_id)

Example.Repo.update_all(new_query,
  set: [
    deactivated: true,
    deactivated_on: Date.utc_today()
  ]
)

So, instead of update_all updating every user, it will only update the users with the ids that we passed into the query.

Problem with update_all

As the Ecto documentation points out, using update_all with a queryable means that autogenerated columns, such as updated_at, will not be touched, as they would be if you used update.
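
If you do want updated_at to change, one option is to set it explicitly in the update list yourself. Here is a sketch, assuming the default :naive_datetime timestamps and the query from above:

# Truncate to the second to match the default :naive_datetime column type
now = NaiveDateTime.truncate(NaiveDateTime.utc_now(), :second)

Example.Repo.update_all(query,
  set: [
    deactivated: true,
    deactivated_on: Date.utc_today(),
    updated_at: now
  ]
)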

There is one more way that you can do a bulk update using Ecto.

Upserts with insert_all

The insert_all/3 callback of Ecto has an interesting option called on_conflict. If you specify on_conflict AND you provide a list of entries whose ids already exist in your database, a single upsert (INSERT ... ON CONFLICT DO UPDATE) statement will be issued covering the ids in your list.

Let’s take our example from before. We do the same thing as before: build a complicated query, pass the structs to a function, and then map over the structs, turning each one into a map containing its id.

Finally, we pass that list into insert_all/3 with the on_conflict option. We also pass in [set: [deactivated: true, deactivated_on: Date.utc_today()]].

import Ecto.Query

query =
  from(u in User,
    join: p in Post,
    on: p.author_id == u.id,
    where: u.deactivated == false
  )

users = Example.Repo.all(query)

do_something(users)

new_users = Enum.map(users, &%{id: &1.id})

Example.Repo.insert_all(
  Example.User,
  new_users,
  on_conflict: [
    set: [
      deactivated: true,
      deactivated_on: Date.utc_today()
    ]
  ],
  conflict_target: [:id]
)

The database will see the “on conflict” option and perform an update with the set values we passed in. It will update the database rows in one call instead of one call per row.

The Problem with insert_all/3

Unfortunately, insert_all/3 is not without its flaws either. It also will not update any autogenerated columns, such as timestamps. Not only that, but if a row has a column that is not provided in your entries (in our case we only provide the ids), then insert_all can replace the existing values in those columns with NULL.

A solution to this problem would be to provide values for these columns in your entries before you pass them to insert_all.

now = NaiveDateTime.truncate(NaiveDateTime.utc_now(), :second)

new_users =
  users
  |> Enum.map(&%{id: &1.id, inserted_at: now, updated_at: now})

Example.Repo.insert_all(
  Example.User,
  new_users,
  on_conflict: [
    set: [
      deactivated: true,
      deactivated_on: Date.utc_today()
    ]
  ],
  conflict_target: [:id]
)

As stated in the documentation, insert_all/3 can return some cryptic values depending on the database/persistence layer you choose to use. Some will return the number of rows updated (PostgreSQL), others the number of rows it attempted to update or insert (MySQL), etc.

Hope this information was helpful. If you have any questions or have another tip, feel free to leave a comment below.

This post is part of my series of Elixir Tutorials



How to Use IEx.pry in Elixir Tests

Elixir’s IEx.pry is a great debugging tool.

It allows you to stop time (“pry”) in the middle of your application in order to inspect the conditions in which your app is currently running.

However, if you’ve ever had to try and use IEx.pry while running your Elixir tests using mix test, you’ve probably encountered a problem.

It won’t actually work.

You may have seen an error similar to this:

Cannot pry #PID<0.474.0> at Example.ProjectsTest ... 
Is an IEx shell running?

I’ll show you the way to use IEx.pry/0 in your Elixir tests and a couple of quick tips to make using IEx.pry in your tests even easier.

Solution

The solution is straightforward. You have to run your mix tests in an interactive Elixir session.

How do you do that?

Simply prepend your mix test command with iex -S.

For example, iex -S mix test will run all of your available tests, and anywhere you’ve put an IEx.pry, the shell will ask you:

Request to pry #PID<0.464.0> at Example.ExampleTest...

....

Allow? [Yn]

Typing Y will drop you into the pry prompt.

Avoiding Timeouts Using IEx.pry in ExUnit

If you are going to be debugging for a while in your pry shell, you should consider adding the --trace flag to the test task, i.e. iex -S mix test --trace, to avoid timeouts while you are in IEx.pry.

Otherwise, your test process may timeout and crash while you are still debugging using pry. It may raise an ExUnit.TimeoutError after 60 seconds:

 ** (ExUnit.TimeoutError) test timed out after 60000ms. You can change the timeout:
...

Running One Test File or Line Numbers

You can, of course, do the same thing when running a single test file or even a single test.

# Run single test file
iex -S mix test --trace path/to/simple_test.exs

# Run single test
iex -S mix test --trace path/to/simple_test.exs:12

But here’s the deal:

If you are like me, you rarely run your Elixir tests in an interactive shell.

Why?

Because I want to write a test, run it quickly, watch it fail, and then make it pass. Over and over. Iterating quickly. I don’t have time to stop and run an interactive session every time.

Occasionally, however, I run into bugs when making my Elixir tests pass. When this occurs, my debugging workflow works like this:

I’ll try a couple of different things and if none of those reveals the bug, I’ll drop an IEx.pry somewhere in my test (don’t forget to require IEx).
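For reference, a pry dropped into a test looks something like this (the module name and the computation are just placeholders):

defmodule ExampleTest do
  use ExUnit.Case

  test "something I am debugging" do
    require IEx

    result = Enum.sum([1, 2, 3])

    # Execution pauses here when the suite runs under `iex -S mix test`
    IEx.pry()

    assert result == 6
  end
end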

I do this enough that remembering the correct sequence of commands and typing them out becomes time-consuming and tedious. So, I’ve come up with a few tricks to speed this process up.

Here are two tips that will allow you to do this quickly.

VIM map for IEx Pry

Dropping in a IEx.pry in your code requires that you add both require IEx and IEx.pry to your code.

That is too much typing.

So, to save my hands from carpal tunnel, I have leveraged mappings in Vim. I added this mapping to my .vimrc file:

nmap <leader>r orequire IEx; IEx.pry<esc>

Now, all I have to do is hit “[spacebar] + r” to insert a require IEx; IEx.pry into my test. But that is only half the problem. I’ll still need to run my test again with an IEx shell.

Shell alias for IEx shell tests

Let’s say you’ve added IEx.pry to your test, but now you want to run the same test you just ran again, but this time using pry.

You’ll probably have to go back through your history, find the test you ran, then move your cursor to the beginning of the line, then add iex -S to your mix test command.

Or if you are like me, I’ll often forget to prepend the iex -S in front of my mix test.

Too. Much. Typing.

I came up with simple Bash/Zsh alias that I use all the time and now, you can too.

# Zsh users
alias repry='fc -e - mix\ test=iex\ -S\ mix\ test\ --trace mix\ test'

# Bash users
alias repry='fc -s mix\ test=iex\ -S\ mix\ test\ --trace mix\ test'

What does this alias do? Well, it uses the ‘fc’ command in *nix to find the last mix test in your command history and re-run it with mix test replaced by iex -S mix test --trace. This alias works regardless of whether I ran my entire test suite using mix test or ran one test using a specific line number.

Here’s a graphic to pull it all together:

IEx Pry Test

Hope this information was helpful. If you have any questions or have another tip, feel free to leave a comment below.

This post is part of my series of Elixir Tutorials



How to Solve Elixir’s “Module Is Not Available” Error

Elixir’s “module is not available” error can drive you nuts.

But fear not:

After writing a lot of Elixir and seeing this error often, I’ve developed a few quick tricks that you can use to figure out why your module is not available in seconds instead of minutes.

Here are 4 quick things you can check.

Elixir Module Not Available Tips

Check Your Module Definition

This almost goes without saying, but every time I don’t check the module definition first, it turns out I’ve misspelled one of my module definitions.

Check the top of your file for your defmodule call. Make sure everything is spelled correctly (App.User, not App.Users), that you have included your application name if needed (App.User, not User), and that you are using the proper punctuation (App.User.Note, not App.UserNote).

Did You Forget to include alias?

Sometimes, in a bout of “in the zone” programming, you may have written a call to a module that you believed you had aliased, but forgot to include the alias function call.

For example, trying to call User.full_name() when the full function call is App.User.full_name() would result in the module is not available error. App.User, in this case, would need to be aliased first.
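For instance, a minimal sketch (App.Accounts, App.User, and full_name/1 are just illustrative names):

defmodule App.Accounts do
  # Without this alias, the bare `User` below would refer to a module named
  # `User`, not `App.User`, and calling it would raise "module User is not available".
  alias App.User

  def greeting(user) do
    "Hello, " <> User.full_name(user)
  end
end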

Inspect Loaded Modules

Still can’t find the module? Well, one debugging trick is to check a list of all the currently loaded, user-defined modules in the application.

There are two ways you can do this:

First, you can check the compiled my_app.app file in your _build directory.

Specifically, this file is usually located in:

"_build/#{MIX_ENV}/lib/#{YOUR_APP_NAME_HERE}/ebin"

While this file may appear a bit cryptic, you can probably ascertain what the modules entry shows. It’s a list of user-defined modules available to your application from compilation.

The second trick is:

Use :application.get_key(:my_app, :modules) wherever the module is not available error is occurring. Combine this with the IO.inspect/1 function and it will show you all of the available user-defined Elixir modules at that point in time.

IO.inspect :application.get_key(:my_app, :modules)

It’s a quick way to see all the user-defined Elixir modules available to the :my_app application at that point in your Elixir application.
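The result is an :ok tuple wrapping the module list; for a hypothetical :my_app it might look roughly like this (the module names here are made up):

IO.inspect :application.get_key(:my_app, :modules)
# => {:ok, [MyApp, MyApp.Repo, MyApp.User, MyApp.UserController]}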

Using this Elixir debugging trick may allow you to catch if you have badly misspelled a module or perhaps forgot to include a module for compilation.

Which brings us to my last tip:

Make Sure Your Module is Available to Your Application

My last tip is the most complex in terms of implementation and understanding.

But here goes:

You need to check that your Elixir module is available to your application by telling the elixirc compiler to generate a .beam file for your module.

Why would the Elixir compiler not compile a .beam file? Well, 2 common reasons:

  1. You placed the module in a directory outside of lib and did not
    tell the compiler.

  2. You used the .exs extension instead of .ex

The first reason can be puzzling if you do not realize the problem. By default, the Elixir compiler only looks to compile files and load modules located in the lib directory of an Elixir application.

As your Elixir application’s complexity increases, you may decide to include Elixir modules that need to be compiled outside of the lib directory. A good example of this is modules used to support tests, such as mocks.

The solution to this problem is straightforward, but a bit hidden in the Elixir documentation.

You need to tell the Elixir compiler the directory where your modules are located.

To do this, you need to use the Elixir elixirc_paths option in your project configuration. All you need to do is set the elixirc_paths key to a list of paths of the directories that you want the Elixir compiler to compile.

Here’s the catch:

In many cases, you probably do not want to compile the same files across different environments.

For example, in a production environment, you may not want to compile the modules that you use for your test environment.

To get around this, we can use a function that we define in mix.exs that returns a different list of paths depending on the environment.

In fact, this is exactly what Phoenix Framework does by default in new projects.

  def project do
    [
      ...
      elixirc_paths: elixirc_paths(Mix.env),
      ...
    ]
  end

  # Specifies which paths to compile per environment.
  defp elixirc_paths(:test), do: ["lib", "test/support"]
  defp elixirc_paths(_),     do: ["lib"]

You can see in the code above the elixirc_paths/1 function includes the “test/support” directory in the Elixir compiler paths for the test environment, but not for other environments such as development.

Setting this directory in the Elixir compiler paths means that any modules defined within .ex extension files will automatically be compiled and their modules loaded for your application.
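For example, a hypothetical helper module kept at test/support/fixtures.ex (note the .ex extension) would be compiled and available to your tests:

# test/support/fixtures.ex -- hypothetical test helper module
defmodule Example.Fixtures do
  # Compiled for the test environment because "test/support" is listed
  # in elixirc_paths(:test) and the file uses the .ex extension.
  def user_attrs do
    %{name: "Jane", email: "jane@example.com"}
  end
end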

Which brings me to the second issue:

Another mistake that you may make is to use the .exs extension instead of the .ex extension. You may be defining Elixir modules that you need to compile for use in your application in files with a .exs extension.

The .exs extension is used for scripting. Like the .ex extension, .exs files are compiled, their modules loaded, and executed.

It is very well documented that .ex extension files get compiled down to .beam files. The modules in your compiled .beam files get loaded by your application automatically when the application starts.

However, the compiled BEAM bytecode from .exs files does not get output into a .beam file the way that .ex files do. They are, in essence, ephemeral. They are compiled, loaded, and executed, and then disappear. Hence, if you are trying to access a module in an .exs file from your application, it will fail with the “module not available” error.

Hope this information was helpful. If you have any questions or have another tip, feel free to leave a comment below.

This post is part of my series of Elixir Tutorials



How To Test Asynchronous Text Changes with Hound and Phoenix

Writing asynchronous acceptance tests in Hound for Elixir and Phoenix can be difficult, especially if you are using a JavaScript framework like React, Vue.js, or Angular.

If you have ever used end-to-end testing in your web application’s test suite, you have undoubtedly come across the issue of “flapping” tests.

In many of the Ruby on Rails projects that I get asked to work on, I come across this problem frequently. So much so, that I wrote a presentation about Flapping JS Tests in Rails for the West Michigan Ruby Users Group back in 2015.

What are “Flapping” Tests?

Flapping tests are tests that fail inconsistently. Flapping tests are usually an indication of a race condition that is happening between the test suite and the browser under test. Many times, they occur because some asynchronous code is running and does not return before the test suite makes an assertion.

One common case is that the browser makes some asynchronous request via JavaScript and before the server can return the result and the JavaScript can display it on screen, the assertion is made.

End-to-End or acceptance tests in Elixir and Phoenix are not immune to these kinds of race conditions either.

What is Hound?

Hound is an end-to-end testing framework written for Elixir applications. Hound is different from Elixir’s own ExUnit integration tests. Hound can test the entire stack end-to-end (JS framework and all), closer to what people associate with “acceptance” tests.
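For a feel of what a Hound test module looks like, here is a minimal sketch (it assumes Hound is already started in your test_helper.exs and that app_host/app_port are configured so relative URLs work):

defmodule Example.SmokeTest do
  use ExUnit.Case
  use Hound.Helpers

  # Starts a browser session before each test and ends it afterwards.
  hound_session()

  test "home page loads" do
    navigate_to("/")
    assert page_title() =~ "Example"
  end
end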

Why Use End-to-End Tests?

There is a lot of controversy around end-to-end tests. Generally, it is because acceptance tests can be very slow to run. I feel that, used sparingly, they can provide a lot of value to a developer who may not have testers on their team to test their application end-to-end.

End-to-End tests can be run against critical application behavior and notify you of when you have broken an important part of the application. But, I tend to agree, in most cases, they are not necessary and shouldn’t be used.

What’s Wrong with Testing Asynchronous Text Changes in Hound?

That said, I have written a couple of integration tests for Calculate using Hound and came across an interesting race condition.

Let’s use something that I think is a fairly popular use case:

Problem:

A user performs some action which fires an asynchronous request to the server. Upon response from the server, a “counter” on the page updates based on the request.

Enough already, let’s see some code.

Here’s our template (could be a React/Vue/whatever-new-JS-hotness component):

<ul class="stats">
  <li id="messages-count">10</li>
</ul>

And here’s our end-to-end test:

test "increment counter" do

  visit(‘/‘)

  assert visible_in_element?({:css, “#messages-count”}, “10”)

  click(:css, “#send-message”)
  assert visible_in_element?({:css, “#messages-count”}, “11”)

end


OK, that’s nice Adam. But what is the actual problem?

Well, the problem lies with a loophole in the WebDriver spec and the way that Hound’s matchers and finder functions work.

Matchers and Finder Functions in Hound

Hound provides two convenient “matchers” for testing that text is visible on a page: visible_in_page?/1 and visible_in_element?/2. The problem, however, is that both of these functions rely on Hound’s internal finder functions.

The way Hound’s finder functions work is that you provide a strategy and a selector. The WebDriver spec defines the valid strategies that are available; however, it doesn’t actually specify a way to select an element by text…except for links (and partial links…what??)

So, if you are trying to query an element which is not a link, say, our list item above, then you have to provide another strategy and selector. In this case, we use css and an ID.
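Inside a test that uses Hound’s helpers, the finder and matcher calls for the counter above would look roughly like this:

# Find the counter by CSS selector; find_element raises if it never appears.
find_element(:css, "#messages-count")

# Assert on the text currently visible inside it.
assert visible_in_element?({:css, "#messages-count"}, ~r/10/)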

OK, great. So, that should work then right? Ah, you know I’m setting you up. Unfortunately, Hound is soooo fast at querying the browser, that a race condition occurs.

When our user clicks the #send-message button, the browser makes an asynchronous request and waits for a response before updating the #message-count.

However, Hound does not wait. After performing the click, Hound runs the next line which is to find the element again.

This next part is important, so we’re going to take a closer look.

How Matchers and Finder Functions Work


Under the hood, find_element/2 uses the make_req/2 function, which will retry up to 5 times by default (with a 250ms default wait between attempts) before failing if it can’t find the element.

The problem in our case is not that Hound can’t find the element. The element is still there, but the text may not have changed yet. So, if Hound is too fast, this line will likely fail:


assert visible_in_element?({:css, "#messages-count"}, "11")

How To Test Asynchronous JavaScript Changes in Hound

First, I’m going to present to you how not to solve this problem. A popular way to solve this is to simply throw in a sleep function for an arbitrary amount of time. Please, please do NOT do this:

assert visible_in_element?({:css, "#messages-count"}, "10")

click(:css, "#send-message")
:timer.sleep(1000)
assert visible_in_element?({:css, "#messages-count"}, "11")

What we’ve done is told the test suite to pause for an entire second before running the next line. The idea here is since we need the test suite to wait for the asynchronous request to return to the browser, we tell the suite to wait for a second, which should allow enough time for the request to be fulfilled.

What’s the big deal? It’s a harmless second right? Well, sure, it starts out as just one second.

As your test suite grows, you’ll likely have more end-to-end tests with more race conditions and more sleep functions.

This is not an effective way to test asynchronous JavaScript changes, but it is an effective way to slow down your test suite. And slowing down your test suite is a sure-fire way to ensure that no one wants to run it. Believe me, I’ve seen it a lot in Rails projects.

So, what’s a better way?

Well, there are two options.

First, we can write our own test helper function which retries the visible_in_element?/2 matcher if it fails to find the new text. Here’s an example of one I wrote:

# Will automatically retry looking for asynchronous text change
defp text_visible?(element, pattern, retries \\ 5)

defp text_visible?(element, pattern, 0) do
  visible_in_element?(element, pattern)
end

defp text_visible?(element, pattern, retries) do
  case visible_in_element?(element, pattern) do
    true -> true

    false ->
      :timer.sleep(10)
      text_visible?(element, pattern, retries - 1)
  end
end

assert text_visible?({:css, ".counter"}, ~r/11/)

This solution uses Elixir’s pattern matching and tail recursion to retry to find the text within the element. If it can’t find it, it will wait 10ms before trying again. It will do this a default of five times.

What makes this method more effective? If at any point in the retries the visible_in_element?/2 function returns true, the function stops retrying and returns true back to the assertion. It does not continue on. So, this function could take 20ms or 100ms, versus the other method, which will ALWAYS wait a full second before continuing.

The second option is less elegant, but perhaps, more straightforward. We can use the :xpath strategy along with a specific selector.

find_element(:xpath, "//ul/li[text()=\"11\"]")

The benefit of this is that it is one-line and can be written without any other code. This method works because it leverages Hound’s own make_req/2 function. The XPATH selector we provided will be retried by make_req/2 until it finds the element on the page.

If you prefer to have it not raise an error, but rather fail an assertion, you can use the element?/2 matcher function with an assertion. It would be:


assert element?(:xpath, "//ul/li[text()=\"11\"]")

However, I find this method less readable if you are not used to XPATH’s syntax. You could also make the argument that this method will be less DRY as every time you want to query a text change, you’ll have to write a new XPATH query for that specific element.

Either of these options is better than simply adding a :timer.sleep(1000) to your test suite, because both will retry to find the matching element and will return early if a match is ever found.

You may be wondering, why doesn’t Hound do this automatically with their matchers? Well, that’s up for debate, but I have opened an issue to try and address this going forward. It may not make sense for all cases, but might be worth exploring. Feel free to voice your opinion over there.

This post is part of my series of Elixir Tutorials



How to Count Specific Items in a Collection with Elixir

So, I was working on this exercise over at Exercism.io when I stumbled upon an interesting function.

To make a long story short, I needed a count of a specific character (codepoint) in a character list i.e. 'hello' contains two ls. However, this solution could apply to any collection which implements the Enumerable protocol in Elixir.

I have been reading Programming Elixir. The author, Dave Thomas, emphasizes the use of recursion and pattern matching throughout Elixir. So, of course, I had implemented the solution using recursion and pattern matching. You can see that solution here.

You Can Count (Ha!) on Elixir

Of course, I missed the obvious. Elixir already had a very convenient function for doing exactly this: Enum.count/2.

I had anticipated that Elixir would have a function to do a straight count of all the items in a collection (which it does: Enum.count/1 doc). However, I had not considered one for looking for a specific kind of item.

Well, Enum.count/2 provides a convenient API for this exact problem. You simply pass the enumerable (in my case, the character list) and a function which takes one argument (the current item) and returns true for whatever items you want to count.
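Applied to the character list from the exercise, counting the ls looks like this:

# ?l is the codepoint for the letter "l"
Enum.count('hello', &(&1 == ?l))
# => 2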

Here’s a convenient graphic to demonstrate:

Elixir Enum Count

Elixir Enum.count/2 Examples

Of course, the expression used to evaluate the current item does not need to be a simple equality operation. Here are some examples using a variety of different operators:

Count all the binaries (strings) which equal “price”:

Enum.count(["Total", "retail", "price", "$14.95"], &(&1 == "price"))
1

Pipe syntax:

["Total", "retail", "price", "$14.95"]
  |> ... # some other functions
  |> Enum.count(&(&1 == "price"))

Count all the binaries which are members of a list:

roles = ["User", "Customer", "Partner", "Admin"]
Enum.count(roles, &Enum.member?(["Admin", "Editor", "Developer"], &1))
1

Count all the binaries (strings) which match a regular expression:

Enum.count(["Total", "price", "$14.95"], &(&1 =~ ~r/\$\d+\.\d+/))
1

Count all the binaries (strings) in a multi-type list:

Enum.count([1, "list", 3], &is_binary(&1))
1

Count all the integers in a multi-type list:

Enum.count([1, "list", 3], &is_integer(&1))
2

Count all the integers greater than 3:

Enum.count([1,2,4,8,16,32], &(&1 > 3))
4

Read the documentation on Enum.count/2

This post is part of my series of Elixir Tutorials



Fluid Layouts with Auto Layout, Size Classes, Spacer Views, and Constraint Priorities

I came across a problem the other day with the “compact x any” size class in iOS 8 & Xcode 6. I was creating a layout for the size class which is supposed to cover everything from the iPhone 4S 3.5-inch screen up to the iPhone 6 4.7-inch screen. However, this became difficult to do well because what looked good on a small screen (like the 3.5-inch) did not look good on the large screen (the 4.7-inch), and what looked good on the bigger screen did not fit on the smaller screen.

So, I started playing around with how to create a fluid layout to make this work and it turned out to be much more difficult than I anticipated. So, I thought I’d share an example of how I did it.

I’m going to show you how to create a fluid layout using Xcode 6 in the compact-any size class, which covers all compact-width layouts from the 3.5-inch iPhone 4S up to the 4.7-inch iPhone 6. I will show you how to create a fluid layout for a form in this size class so that, for example, on the 3.5-inch iPhone it is a compact form, but on the 4.7-inch iPhone 6 the form expands to fill the entire screen. We will use spacer views to make the form expand to fill the screen.

In this form, you can see we have some labels and then some text fields. But what you need to know is that each of those fields and labels has an encompassing view surrounding it, and each encompassing view has constraints to its neighboring sibling views and labels.

Ok, now that you know that, I’ll show you how we can add spacer views to make this form fluid.

Auto Layout Spacer Views

First, select all of the labels and text fields (and their encompassing views) and slide them down to make room for a spacer view underneath the header label. From the object library in the right-hand menu, search for a plain “view”, then drag and drop one onto the storyboard. We need to re-size it: keep it 200 points wide, and in the size inspector change the height to just 10. Reposition the view just below the first label. Now, we need to set up our constraints.

Auto Layout Constraints

The first constraint we’re going to set up is a vertical constraint to the first label. Hold down ctrl, click, and drag to the first label and select “vertical spacing”. Next, set up a “center horizontally in container” constraint, which makes the spacer view center horizontally with the rest of the labels and fields. Then create another vertical spacing constraint to the next view, the one that encompasses the first label and text field. Finally, create constraints for the width and the height.

So now we have all the constraints for this spacer view, but we need to change the constants on some of them. To do that, select the spacer view and then open the size inspector from the menu on the right.

The first constant we’re going to change is on the first vertical spacing constraint. Select the constraint and change the constant to zero. Then change the bottom space constraint’s constant to zero as well. You can see now that all the space between the first label and the next encompassing view is the height of our spacer view.

Auto Layout Constraint Priorities

The last thing we need to do for this spacer view is set the priority of its height constraint. To change the priority, select the height constraint, hit “edit”, and under the priority selection choose 750, slightly less priority than all the other constraints. And that’s it; that’s all we need to do for this first spacer view.

So, copy the spacer view by holding down the option key and dragging it down below the first form field.

Now, we need to do the same thing with this spacer view as we did with the first one. Set a vertical constraint to the encompassing view of the first form field. Then, add a vertical spacing constraint to the next encompassing view of the next form field. Add a constraint to make sure it’s centered horizontally and it’s all set, it should have already copied over the width and height constraints.

We need to change the constants on the constraints once again. Select the spacer view and go to the size inspector and change the vertical spacing constraints’ constants to zero.

We also need to do something a little bit different with the height constraint: we’re actually going to remove it. Open the document outline for the storyboard, select the height constraint (it will show as selected in the document outline), and hit delete to remove it. Now that constraint is gone. Select the second spacer view again, hold down ctrl, click and drag to the first spacer view, and set an “equal heights” constraint.

The idea here is that the spacer views will expand equally as the screen gets larger. We don’t want them to be different heights or to expand by different amounts; we want them all to be equal in height, so we set them as equal heights. And that’s it! That’s all we need for this second spacer view.

You will need to set up a spacer view in between each encompassing view of the form. Because the rest of the spacer views are pretty much the same as the first and the second one, I’m going to skip ahead to the point where all the spacer views are set up, and then show you the last step we need in order to create our fluid layout.

For the last step, select the “sign up” button, control-drag to the controller’s view, and select “bottom space to bottom layout guide”. We now have a constraint for this button, but we want the “sign up” button a little closer to the bottom of the screen. To do that, select the new constraint and change its constant to something smaller than the 209 it is automatically set to right now; set it to 50. You can see the form now expands, and it expands because the heights of the spacer views expand equally, thanks to the constraint priority we set earlier on the spacer view height.

Because that height constraint has a lower priority than the rest of the constraints we set up, it is the one that breaks as the layout expands and contracts for the different screen sizes. If your spacer views have a background color, you can now remove it. You can see how the form expands and contracts, the fields keep equal spacing between them, and the layout fills out the different screens.

So, that’s it! That’s how we can create a fluid layout using spacer views and constraint priorities in Xcode 6 with size classes. So if you have any questions, feel free to leave a comment and I will try to answer them.



How to Use Swift Computed Properties to Create a Simple Goal Tracker Class

This is the fourth Swift tutorial and video in a series I’m doing on Swift development.

Source code examples are available on GitHub

In this tutorial, we’re going to take a look at Swift computed properties and how they work. We’re going to create a very simple GoalTracker class. All our GoalTracker class is going to do is track our progress through something, i.e. how many miles or kilometers we’ve run, or how many pages we’ve read in a book, etc. but we’re going to keep it pretty simple.

Setup GoalTracker Swift Class

First, let’s create a class called GoalTracker.

class GoalTracker {
}

Next, on our GoalTracker class, we’re going to create a variable property called goal and give it a type of Double and we’re going to initialize it to zero.

class GoalTracker {
  var goal: Double = 0.0
}

Next, we’re going to create another variable property. Let’s call it unitsCompleted. It’s also going to be a Double and we’ll also initialize it to zero.

class GoalTracker {
  var goal: Double = 0.0
  var unitsCompleted: Double = 0.0
}

Swift Read-Only Computed Properties

Finally, let’s create another variable property called unitsLeft. It will be a Double, but instead of initializing it to zero, we’re going to use curly braces and declare it as a read-only computed property. The way read-only computed properties work, all we have to do is return a Double. In this case, to determine units left, we return the goal minus the units completed.

class GoalTracker {
  var goal: Double = 0.0
  var unitsCompleted: Double = 0.0
  var unitsLeft: Double {
    return goal - unitsCompleted
  }
}

That’s it. We can check this by declaring a variable called goalTracker and then we’ll just initialize a GoalTracker. We can then set the goal property of the goalTracker variable. In this case, we’ll set it to 20.0.

var goalTracker = GoalTracker()
goalTracker.goal = 20.0

Now, if we take our goalTracker and set its unitsCompleted property to 5.0, we can print its unitsLeft property.

goalTracker.unitsCompleted = 5.0
println(goalTracker.unitsLeft) //15.0

Great, it works! What if we wanted to set unitsLeft and determine our unitsCompleted? Well, we can do that by reformatting the unitsLeft computed property.

Swift Getter and Setter

First, we’re going to declare a getter on the computed property. Every computed property in Swift can have a getter and a setter. We can create a getter by simply wrapping our original expression in a “get” block.

class GoalTracker {
  var goal: Double = 0.0
  var unitsCompleted: Double = 0.0
  var unitsLeft: Double {
    get {
      return goal - unitsCompleted
    }
  }
}

We’ll declare the setter, which takes a parameter we’ve called “newUnitsLeft”. (If you don’t name the parameter yourself, Swift provides a default one called newValue.) Given newUnitsLeft, we can determine unitsCompleted: we simply set unitsCompleted equal to the goal minus newUnitsLeft. That’s it.

class GoalTracker {
  var goal: Double = 0.0
  var unitsCompleted: Double = 0.0
  var unitsLeft: Double {
    get {
      return goal - unitsCompleted
    }

    set(newUnitsLeft){
      unitsCompleted = goal - newUnitsLeft
    }
  }
}

Now, instead of setting unitsCompleted, we can set unitsLeft and read back unitsCompleted. If we set unitsLeft to 8.3, you can see that unitsCompleted is updated to 11.7.

goalTracker.unitsLeft = 8.3
println(goalTracker.unitsCompleted) //11.7

Percentage Completed

But let’s say we wanted to take this one step further and we wanted to find out how much we’ve completed as a percentage, how would we do that? We can do that by declaring another computed property on our GoalTracker class. Let’s declare a variable property called percentageCompleted. This is going to be a double and a computed property.

class GoalTracker {
  var goal: Double = 0.0
  var unitsCompleted: Double = 0.0

  var percentageCompleted: Double {
  }

  var unitsLeft: Double {
    get {
      return goal - unitsCompleted
    }

    set(newUnitsLeft){
      unitsCompleted = goal - newUnitsLeft
    }
  }
}

We’re going to use a getter and a setter, and we’ll declare the getter first. To determine the percentage completed, we return unitsCompleted divided by goal. We also multiply by 100, because dividing unitsCompleted by goal gives a decimal fraction and we want a percentage.

class GoalTracker {
  var goal: Double = 0.0
  var unitsCompleted: Double = 0.0

  var percentageCompleted: Double {
    get {
      return unitsCompleted/goal * 100
    }
  }

  var unitsLeft: Double {
    get {
      return goal - unitsCompleted
    }

    set(newUnitsLeft){
      unitsCompleted = goal - newUnitsLeft
    }
  }
}

In the setter, we take the newPercentageCompleted parameter and set unitsCompleted equal to the goal times newPercentageCompleted divided by 100. This lets us input the percentage completed as a whole number instead of having to input it as a decimal.

class GoalTracker {
  var goal: Double = 0.0
  var unitsCompleted: Double = 0.0

  var percentageCompleted: Double {
    get {
      return unitsCompleted/goal * 100
    }

    set(newPercentageCompleted){
      unitsCompleted = goal * (newPercentageCompleted/100)
    }
  }

  var unitsLeft: Double {
    get {
      return goal - unitsCompleted
    }

    set(newUnitsLeft){
      unitsCompleted = goal - newUnitsLeft
    }
  }
}

Now, we can test it out. If we set 8.3 as our unitsLeft, then our percentageCompleted is 58.5%. If we set unitsCompleted to 10.0, the percentage is 50%. We can also change it to, say, 1.2, and percentageCompleted is 6%. Our percentage completed is working as intended.

goalTracker.unitsLeft = 8.3
println(goalTracker.percentageCompleted) //58.5
goalTracker.unitsCompleted = 10.0
println(goalTracker.percentageCompleted) //50.0
goalTracker.unitsCompleted = 1.2
println(goalTracker.percentageCompleted) //6.0

If we want to check our setter, we set the percentageCompleted property to 2.46, and then unitsLeft would be 19.508.

goalTracker.percentageCompleted = 2.46
println(goalTracker.unitsLeft) //19.508
