JWT Token Generation Using AWS Lambda


Here at BeamWallet we prefer a microservice architecture over archaic monolithic thinking. Recently we came across a simple, common problem: generating JWT tokens to allow our servers to communicate with each other.

Since all of our systems live in private VPCs, and in the interest of the DRY principle, we decided to build a centralised service to handle all of this. We built the solution using AWS Lambda and AWS API Gateway.

We have two different ways of deploying applications and storing secrets.

AWS Elastic Beanstalk with AWS Secrets Manager

Our Elastic Beanstalk applications are Java Spring backends with all passwords and keys stored in AWS Secrets Manager. On startup, each app loads its passwords from Secrets Manager into the System Properties.

Our convention for naming secrets in Secrets Manager is <system>/<environment>, e.g. core/dev or common/staging (where common holds variables shared across all systems).
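The startup step above can be sketched as a small helper: build the secret id from the naming convention, then copy the parsed secret payload into the System properties. The class and method names here are illustrative, not our actual implementation, and the AWS fetch itself is assumed to have already happened.

```java
import java.util.Map;

// Hypothetical sketch of the Elastic Beanstalk startup step: given the
// secret payload for <system>/<environment> (already fetched from Secrets
// Manager and parsed into a map), copy each entry into the JVM System
// properties so the Spring app can read them as ordinary properties.
public class SecretLoader {

    // Builds the secret id following the <system>/<environment> convention
    public static String secretId(String system, String environment) {
        return system + "/" + environment;
    }

    // Copies every key/value pair from the secret payload into System properties
    public static void loadIntoSystemProperties(Map<String, String> secretValues) {
        secretValues.forEach(System::setProperty);
    }
}
```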

AWS ECS with Ansible

We deploy our other apps to ECS using Ansible. The secrets are stored in our Ansible Vault, and at deploy time they are written into the ECS task definition as environment variables.

Our Solution (In Lambda)

Setting up IAM

Add the following roles

[Screenshot: Screen Shot 2018-08-10 at 5.19.15 pm.png]

Add the Lambda Function

[Screenshot: Screen Shot 2018-08-10 at 5.22.18 pm.png]

Configure the Lambda function

Set the handler to org.srini.awslambda.examples.generatejwt.GenerateJWTFunctionHandler::handleRequest

[Screenshot: Screen Shot 2018-08-10 at 5.24.47 pm.png]
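For reference, the handler class behind that setting has roughly this shape. This is a hedged sketch only: in the real function the class implements com.amazonaws.services.lambda.runtime.RequestHandler, and the two helper methods do the actual key retrieval and signing described below; here they are stubbed so the skeleton stays self-contained.

```java
import java.util.Map;

// Illustrative skeleton of GenerateJWTFunctionHandler. The AWS Lambda
// runtime types are omitted and the helpers return placeholders; the real
// implementations are shown in the following sections.
public class GenerateJWTFunctionHandler {

    public String handleRequest(Map<String, String> input) {
        String service = input.get("service");
        String environment = input.get("environment");
        // 1. Look up which store holds this service's private key
        // 2. Fetch the key (Secrets Manager or ECS task definition)
        // 3. Sign and return the JWT
        String privateKey = fetchPrivateKey(service, environment);
        return generateToken(privateKey, service);
    }

    String fetchPrivateKey(String service, String environment) {
        return "stub-key-for-" + service + "/" + environment; // placeholder
    }

    String generateToken(String privateKey, String subject) {
        return "jwt-for-" + subject; // placeholder
    }
}
```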

Private Key Retrieval

As mentioned above, we store our secrets in two different locations, so we wanted our function to be smart enough to know where to fetch each private key from. We keep a list recording which store holds each service's key.
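One way to model that lookup is a small registry mapping each service to the store that holds its key. A minimal sketch, with illustrative service names and an assumed enum (the real list and its representation are internal to Beam):

```java
import java.util.Map;

// Hypothetical registry: per service, record whether its private key lives
// in Secrets Manager or in the ECS task definition. Contents are examples.
public class KeyStoreRegistry {

    public enum KeyStore { SECRETS_MANAGER, ECS_TASK_DEFINITION }

    private static final Map<String, KeyStore> STORES = Map.of(
            "core", KeyStore.SECRETS_MANAGER,
            "payments", KeyStore.ECS_TASK_DEFINITION);

    public static KeyStore storeFor(String service) {
        KeyStore store = STORES.get(service);
        if (store == null) {
            throw new IllegalArgumentException("Unknown service: " + service);
        }
        return store;
    }
}
```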

The easiest secrets to fetch were those in AWS Secrets Manager, using the aws-java-sdk. All of our private keys are stored under the same name, so the naming convention made this straightforward.

String endpoint = "secretsmanager.eu-central-1.amazonaws.com";
AwsClientBuilder.EndpointConfiguration config = new AwsClientBuilder.EndpointConfiguration(endpoint, region);
AWSSecretsManagerClientBuilder clientBuilder = AWSSecretsManagerClientBuilder.standard();
clientBuilder.setEndpointConfiguration(config);
AWSSecretsManager client = clientBuilder.build();
GetSecretValueRequest request = new GetSecretValueRequest().withSecretId(service + "/" + environment);
String secretString = client.getSecretValue(request).getSecretString();
Map<String, String> secrets = new Gson().fromJson(secretString, new TypeToken<Map<String, String>>() {}.getType());
return secrets.get(AWS_SECRET_KEYWORD);

The hardest step was getting the private key from the ECS configuration. To reach it we needed to look up the service, follow it to its task definition, and then read the container definition inside that. Again, following the same naming conventions across all of our services made this task much easier.

String privateKey = "";
AmazonECS client = AmazonECSClientBuilder.standard().build();
String commonName = environment + "-" + service;
// describeServices will return at most one service for this name
List<Service> services = client.describeServices(
        new DescribeServicesRequest().withCluster(commonName).withServices(commonName)).getServices();
if (services.isEmpty()) {
    throw new IllegalArgumentException("No cluster/service found with " + commonName);
}
String taskDefinition = services.get(0).getTaskDefinition();
List<ContainerDefinition> containerDefinitions = client.describeTaskDefinition(
        new DescribeTaskDefinitionRequest().withTaskDefinition(taskDefinition))
        .getTaskDefinition().getContainerDefinitions();
if (containerDefinitions.isEmpty()) {
    throw new IllegalArgumentException("No container definitions found for " + commonName);
}
// Each of our task definitions has exactly one container definition
for (KeyValuePair keyPair : containerDefinitions.get(0).getEnvironment()) {
    if (keyPair.getName().equals(AWS_ECS_KEYWORD)) {
        privateKey = keyPair.getValue();
    }
}
return privateKey;

Key Generation

We use the Java Security library to reconstruct the private key, io.jsonwebtoken (jjwt) to build and sign the JWT, and a Lambda environment variable to set how long the token should live.

public class TokenGenerator {
    private static final int DEFAULT_TIME_IN_MINUTES = 10;
    private static final String TOKEN_TYPE = "typ";
    private static final String RND = "rnd";
    private final String timeInMinutes = System.getenv("TIME_IN_MINUTES");

    public String generateToken(String secretKey, String subject) throws NoSuchAlgorithmException, InvalidKeySpecException {
        int expiryMinutes = DEFAULT_TIME_IN_MINUTES;
        if (!com.amazonaws.util.StringUtils.isNullOrEmpty(timeInMinutes)) {
            expiryMinutes = Integer.parseInt(timeInMinutes);
        }
        KeyFactory keyFactory = KeyFactory.getInstance("RSA");
        PrivateKey privateKey = keyFactory.generatePrivate(new PKCS8EncodedKeySpec(Base64.decodeBase64(secretKey)));
        LocalDateTime now = LocalDateTime.now();
        return Jwts.builder()
                .setSubject(subject)
                .claim(TOKEN_TYPE, "service")
                .claim(RND, Date.from(now.atZone(ZoneId.systemDefault()).toInstant()))
                .setIssuedAt(Date.from(now.atZone(ZoneId.systemDefault()).toInstant()))
                .setExpiration(Date.from(now.plusMinutes(expiryMinutes).atZone(ZoneId.systemDefault()).toInstant()))
                .signWith(SignatureAlgorithm.RS512, privateKey)
                .compact();
    }
}

API Gateway

The API Gateway is just a very simple front door: it lets us build and test the function easily and use it in all environments. To set up the API Gateway:

Step 1: Create API

[Screenshot: Screen Shot 2018-08-10 at 4.33.02 pm.png]

Step 2: Link to Lambda


[Screenshot: Screen Shot 2018-08-10 at 4.33.45 pm.png]

Step 3: Deploy the API

[Screenshot: Screen Shot 2018-08-10 at 5.13.00 pm.png]
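Once deployed, any backend can obtain a token with a plain HTTPS call. A hedged sketch of such a client follows; the base URL, path and query parameters are assumptions, since the real request shape depends on how the gateway resource is configured.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client for the token endpoint. URL shape is illustrative.
public class TokenClient {

    // Builds the request URI for a given service and environment
    public static URI tokenUri(String baseUrl, String service, String environment) {
        return URI.create(baseUrl + "/token?service=" + service + "&environment=" + environment);
    }

    // Calls the gateway and returns the response body (the generated JWT)
    public static String fetchToken(String baseUrl, String service, String environment) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(tokenUri(baseUrl, service, environment)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

A caller would pass its gateway URL for the current environment and attach the returned JWT as a bearer token on service-to-service requests.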

They came, they saw, they Beamed

This year, Beam became the first app to let UAE residents pay for fuel from their car. 12 million litres later, here’s a little throwback to our Beam pioneers.

Based in Dubai, Maureen, Brenda, Hadi and Ahmed were the first in UAE to trial Beam at ENOC in July this year. Watch what they had to say or read the interview below.




It was all too easy. It was actually a little strange. Because I have the habit of having to get out of the car and pay. I wouldn’t mind pumping my own gas but it was extremely easy. I can never imagine using my credit card again to pay for gas.


“It was all too easy. It was actually a little strange.”



It’s actually been a great experience. It’s very easy and convenient to use, and hassle-free because I don’t have to get out of the car. It’s just easy, just press the button and it automatically detects how much you owe, and you pay via the app and you’re on your way. It’s very quick, easy and hassle-free.


“Just press the button… and you’re on your way”




The experience was really good. It was really fast. I didn’t have to get out of my car. I didn’t have to hand over my credit card. I just had to take my phone out, use Beam, and it was fast.


“I didn’t have to get out of my car. I didn’t have to hand over my credit card.”




It’s an amazing experience because I always wanted to pay with credit card at the gas station. I couldn’t earlier because there was an extra charge. And when it was available, you always had to step out of the car to pay with the PIN because the machines were not mobile.


Now I can sit in the car, pay with Beam, enjoy the rewards and not have to step out of the car especially with the hot weather.


“It’s an amazing experience… Now I can sit in the car, pay with Beam, and enjoy the rewards.”


Related posts

Like what you see? Beam at ENOC today to win fuel for a year: www.winfuel.ae

Mitch Williams won fuel for a year in October. See what he had to say here.



The story behind Beam at ENOC

They say Rome wasn’t built in a day. This year, Beam became the first app to let UAE residents pay for fuel from their car. The path was not without challenges. Our product team gives us a behind-the-scenes look at the journey to creating a world-first.


Where did the idea come from?

Testing in the lab


ENOC approached us to do a pilot in late 2015. Their goal was to let UAE residents be the first in the world to pay with their phones at the pump.


How did Beam execute this?

It was a long journey due to the nature of the petrol station infrastructure. We had to do a lot of cross-border testing of the solution, step by step, between Sydney (where our product team is based) and Dubai.


What challenges did Beam face?

The primary challenge was designing a solution where the series of events (sessions) between Beam's and ENOC's systems is managed seamlessly, without the user feeling all the steps involved. From the time the customer arrives at the petrol station to payment, the number of scenarios that could go wrong was quite high.


“ENOC’s goal was to let UAE residents be the first in the world to pay with their phone at the pump.”


What issues did the team solve in the process?

We solved:

  • The network connectivity issues between Beam's (cloud-based) and ENOC's (closed network/VPN) systems.

  • The mapping out of events between Beam and ENOC's systems accurately and reliably. One of our developers said this felt like "stacking peas with a boxing glove".

  • Real life scenarios that could not be encountered in a lab environment. We continuously monitored the logs for such scenarios and proactively addressed them on a daily basis.


“Instead of providing the same payment method at all places, we optimise the experience based on the environment you are in.”


What are the key achievements with this particular solution?

We created a world-first experience in the UAE and the growth rate was beyond our expectations. We have 50,000 Beamers transacting at ENOC, 12 million litres of fuel pumped, and 210,000 tanks filled so far, and this is growing as we speak.


“One of our developers said this felt like stacking peas with a boxing glove.”


Why is this solution unique?

This solution is unique because:

  • We use BLE beacons at the pump to make the pump selection easy through the app

  • Instead of providing the same payment method at all places, we optimise the experience based on the environment you are in.

  • Beam is a wallet you can use at over 2,600 stores in the UAE, not just at ENOC stations, which makes it convenient for our customers.


      “The growth rate was beyond our expectations.”

The first Beamers to test the fuel experience

What does this mean for the future of Beam?

  • We’ve always believed that the payment experience should be optimised based on the environment instead of giving customers one-size-fits-all solutions.

  • Our experience at ENOC proved that you can alter the way we pay to make it easier for the customers, and when you do this, people will switch to pay with Beam.


“Beam is a wallet you can use at over 2,600 stores in the UAE, not just at ENOC stations.”


What does this mean for the future of Beam? (cont…)

  • We will continue to improve our current experiences at ENOC, restaurants (tipping, splitting bills etc.) and tap to pay at the checkout.

  • We are continually on the lookout for new opportunities where we can make the payment method easier and more convenient for our customers.

  • Our long term vision is that payments should be an entirely seamless and rewarding experience for buyers and sellers.


“The payment experience should be optimised based on the environment instead of giving customers one-size-fits-all solutions.”


Related posts

Like what you see? Beam at ENOC today to win fuel for a year: www.winfuel.ae

Maureen, Brenda, Hadi and Ahmed were the pioneers of the Beam at ENOC experience.

See what they had to say here.



Visualising Invite a Friend

We love Beam. We want our customers to love Beam. We want them to tell their friends how much they love Beam, and we want their friends to love Beam. Like many other sites and services on the web, we encourage this with a refer-a-friend program.

The Beam referral program works in the usual way: You have a code. You give your code to a friend. They sign up with it and Beam says thank you by giving you both some cash to spend.

These sorts of rewards are open to abuse, although the incidence of this is rare. We have fraud detection and protection methods in place, but one of the best ways to check on these is to expose the data and get a feel for it. So we set about visualising our refer-a-friend network. What we found was so cool that we just have to share it:

It looks almost organic. Like the view through a microscope. You have to zoom in for the best effect.

Surprisingly nobody on our team can recall ever seeing an invite a friend network presented like this. Hopefully you find it as engrossing as we do.

To answer the common questions:

  1. Yes, this is a small sample of the data; there is too much to practically display at once, but the individual networks are complete.
  2. The arrows point from the inviter to the invitee.
  3. The visualisation is built using http://visjs.org/

I have measured out my life in coffee spoons

“I have measured out my life in coffee spoons” wrote T. S. Eliot in The Love Song of J. Alfred Prufrock, a poem ostensibly about time and not much at all about coffee. Here, by contrast, we have a heatmap of time and space that is very much about coffee. It plots coffee purchases made with Beam by location against the time of day, and it clearly shows that Dubai wakes up with coffee on its mind, and that it wakes up at 7am. It also shows that all that coffee is having some effect, because the caffeination doesn’t stop for the rest of the day.


We hope all that coffee is helping people to meet and talk and fall in love and create masterpieces and we hope that you enjoy this bit of visualisation eye candy.

The Beam Engineering Team will leave you with another coffee quote by the late, great Terry Pratchett: “Coffee is a way of stealing time that should by rights belong to your older self.”

Spring Cloud & AWS ElastiCache

Spring Cloud AWS (http://cloud.spring.io/spring-cloud-aws/) provides a nice caching abstraction, allowing objects to be cached in AWS ElastiCache (https://aws.amazon.com/elasticache/) via the @Cacheable annotation. There is some documentation on this, but limited real-world examples. For example, how do you run locally or in tests when you can’t connect to ElastiCache?

Here is how we did it at Beam…

We set up the Spring AWS caching largely as per the standard instructions (http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_caching). This is what is in the applicationContext.xml:

 <cache:annotation-driven proxy-target-class="true"/>
 <aws-cache:cache-manager id="cacheManager">
     <aws-cache:cache-ref ref="beamCache"/>
 </aws-cache:cache-manager>

 <aws-context:simple-credentials access-key="${aws.key}" secret-key="${aws.secret}"/>

The major departure here is that we use a cache-ref to a bean called “beamCache”. This lets us choose at runtime between an embedded memcached server and the ElastiCache cluster.

The beamCache bean is configured like so:

package com.beam.spring;

import com.amazonaws.regions.RegionUtils;
import com.amazonaws.services.elasticache.AmazonElastiCacheClient;
import com.google.common.collect.Lists;
import de.flapdoodle.embed.memcached.Command;
import de.flapdoodle.embed.memcached.MemcachedExecutable;
import de.flapdoodle.embed.memcached.MemcachedProcess;
import de.flapdoodle.embed.memcached.MemcachedStarter;
import de.flapdoodle.embed.memcached.config.ArtifactStoreBuilder;
import de.flapdoodle.embed.memcached.config.DownloadConfigBuilder;
import de.flapdoodle.embed.memcached.config.MemcachedConfig;
import de.flapdoodle.embed.memcached.config.RuntimeConfigBuilder;
import de.flapdoodle.embed.memcached.distribution.Version;
import de.flapdoodle.embed.process.config.IRuntimeConfig;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cache.Cache;
import org.springframework.cloud.aws.cache.CacheFactory;
import org.springframework.cloud.aws.cache.ElastiCacheFactoryBean;
import org.springframework.cloud.aws.cache.memcached.MemcachedCacheFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;

import java.io.IOException;
import java.util.List;

@Configuration
public class CacheFactoryBean {
    private static final Log LOG = LogFactory.getLog(CacheFactoryBean.class);

    private static final int EMBEDDED_PORT = 11211;

    @Value("${cacheClusterId}")
    String cacheClusterId;
    @Value("${cacheRegionName}")
    String regionName;
    @Value("${cacheExpirySeconds}")
    int cacheExpirySeconds;

    @Lazy
    @Bean(name = "beamCache")
    Cache getCache() throws Exception {
        final MemcachedCacheFactory memcachedCacheFactory = new MemcachedCacheFactory();
        memcachedCacheFactory.setExpiryTime(cacheExpirySeconds);

        if (cacheClusterId.equals("localhost")) {
            // Connect a memcached client to the embedded server
            LOG.info("Connecting to local memcached on port " + EMBEDDED_PORT);
            return memcachedCacheFactory.createCache(cacheClusterId, "localhost", EMBEDDED_PORT);
        } else {
            LOG.info("Connecting to ElastiCache cluster: " + regionName + " :: " + cacheClusterId);

            // We are only interested in memcached
            List<CacheFactory> cacheFactories = Lists.newArrayList(memcachedCacheFactory);

            // Set up the AWS ElastiCache client
            AmazonElastiCacheClient amazonElastiCacheClient = new AmazonElastiCacheClient();
            amazonElastiCacheClient.setRegion(RegionUtils.getRegion(regionName));

            // Use the factory to produce the cache
            ElastiCacheFactoryBean elastiCacheFactoryBean =
                    new ElastiCacheFactoryBean(amazonElastiCacheClient, cacheClusterId, cacheFactories);
            elastiCacheFactoryBean.afterPropertiesSet();
            return elastiCacheFactoryBean.getObject();
        }
    }

    @Lazy
    @Bean(destroyMethod = "stop")
    MemcachedExecutable embeddedMemcachedExe() {
        final Command command = Command.MemcacheD;
        // Download config per the flapdoodle defaults; see:
        // https://github.com/flapdoodle-oss/de.flapdoodle.embed.memcached/blob/master/server.properties
        IRuntimeConfig runtimeConfig = new RuntimeConfigBuilder()
                .defaults(command)
                .artifactStore(new ArtifactStoreBuilder()
                        .defaults(command)
                        .download(new DownloadConfigBuilder()
                                .defaultsForCommand(command)
                                .build()))
                .build();

        MemcachedStarter runtime = MemcachedStarter.getInstance(runtimeConfig);
        return runtime.prepare(new MemcachedConfig(Version.Main.V1_4, EMBEDDED_PORT));
    }

    @Lazy
    @Bean(destroyMethod = "stop")
    MemcachedProcess embeddedMemcachedProcess(MemcachedExecutable embeddedMemcachedExe) throws IOException {
        return embeddedMemcachedExe.start();
    }
}

The @Lazy annotation ensures that the embedded beans and the cache itself are only instantiated in those environments where they are required.

This is dependent on a few properties, which are the only things that change between local, production, and everything in between. If the cluster id is set to “localhost” then a local embedded memcached server is automatically started (and stopped), allowing use in local and test environments. For production, staging, etc., the appropriate ElastiCache cluster id is specified instead.





For the embedded server we use the flapdoodle embedded memcached server, see: https://github.com/flapdoodle-oss/de.flapdoodle.embed.memcached

The next issue is the need for runtime configuration of the cache name, as we can’t use constants in the @Cacheable annotation with this config. We implement a CacheResolver that points everything at the “beamCache” bean configured in the factory outlined above. Switching to Scala now:

/** Cache resolver that points everything at the same cache, which is typically the ElastiCache cluster */
class FixedCacheResolver @Autowired()(@Qualifier("beamCache") cache: Cache) extends CacheResolver {

  override def resolveCaches(context: CacheOperationInvocationContext[_]): util.Collection[_ <: Cache] =
    util.Collections.singletonList(cache)
}

We can then use this annotation to enable caching on methods in a service or cache wrapper component:

@CacheConfig(cacheNames = Array("beamCache"), cacheResolver = "fixedCacheResolver", keyGenerator = "beamKeyGenerator")

With methods along these lines to do the caching:

@Cacheable
override def findStoreGroupsForStore(storeId: Long): Set[StoreGroup] = {
  log.debug(s"Cache miss: findStoreGroupsForStore($storeId)")
  ... expensive evaluation ...
}

@CacheEvict
override def evictStore(storeId: Long) {
  // no-op - the eviction is taken care of by the annotation and the magic of BeamKeyGenerator
  log.debug(s"Evicted Store($storeId) from the cache")
}

And the magic of “beamKeyGenerator” is that it generates consistent cache keys across methods and parameters, and also takes the environment into account, so we can have, for example, the uat and stage environments pointing at the same cache cluster.

class BeamKeyGenerator @Autowired()(@Value("${beam.environment}") environment: String) extends KeyGenerator {

  private val log = LoggerFactory.getLogger(classOf[BeamKeyGenerator])

  private val storeGroupsForStore = "storeGroupsForStore"

  override def generate(target: scala.Any, method: Method, params: AnyRef*): AnyRef = {
    val m = method.getName match {
      case "findStoreGroupsForStore" => storeGroupsForStore + ":" + params.head
      case "evictStore" => storeGroupsForStore + ":" + params.head
      case _ => method.getName + ":" + params.mkString(",")
    }

    // We prefix with the environment so we can run with the same uber cache in multiple environments
    val key = environment + ":" + m
    log.debug(s"Generated cache key: $key for ${method.getName}")
    key
  }
}

Building an external DSL in Scala

It’s a requirement we have run into many times, and the general case goes something like this: the marketing team want to set up ad-hoc promotions and don’t want to be beholden to the developers to do so. Similarly, the developers would prefer to focus on developing rather than responding to urgent “can you set up this campaign” requests from marketing.

So there is just that pesky problem of how you actually give marketing the ability to create promotions with the flexibility they want. Our answer is to build a DSL, and Scala’s StandardTokenParsers makes it a pleasure to do so.

Say, for example, we want to allow marketing to write English-like business rules such as:

total reward budget is 100000 and
(maximum cycles per day is 2 and
maximum cycles per week is 3 and
per customer total reward is 500)

We start with some modelling. We will call each of these lines a “limit”, beginning with a marker trait:

trait Limit

We want each of our limits to be represented by a case class so we have the following:

case class TotalRewardBudgetLimit(budget: BigDecimal) extends Limit

case class MaximumCyclesLimit(timePeriod: TimePeriod, value: Int) extends Limit

case class CustomerTotalRewardLimit(amount: BigDecimal) extends Limit

For completeness we model TimePeriod as such:

sealed trait TimePeriod

case class Day() extends TimePeriod

case class Week() extends TimePeriod

We want our language to support conjunctions, disjunctions and precedence, by which we mean the ability to join sentences with “and” or “or” and to group them with brackets. So we have these additional limits:

case class ConjunctionLimit(left: Limit, right: Limit) extends Limit

case class DisjunctionLimit(left: Limit, right: Limit) extends Limit

We can then parse our DSL simply by leveraging Scala’s StandardTokenParsers like so:

object LimitDsl extends StandardTokenParsers with PackratParsers {

  lexical.delimiters += ("(", ")")

  lexical.reserved += ("total", "reward", "budget", "customer", "maximum", "cycles", "per", "day", "week", "is", "and", "or")

  private lazy val limit: PackratParser[Limit] = conjunction | disjunction | totalBudget | perCustomerTotalReward | maximumCyclesPer

  private lazy val conjunction: PackratParser[ConjunctionLimit] = "(" ~ limit ~ "and" ~ limit ~ rep("and" ~> limit) ~ ")" ^^ {
    case "(" ~ l ~ "and" ~ r ~ c ~ ")" => c.foldLeft(new ConjunctionLimit(l, r))(new ConjunctionLimit(_, _))
  }

  private lazy val disjunction: PackratParser[DisjunctionLimit] = "(" ~ limit ~ "or" ~ limit ~ rep("or" ~> limit) ~ ")" ^^ {
    case "(" ~ l ~ "or" ~ r ~ d ~ ")" => d.foldLeft(new DisjunctionLimit(l, r))(new DisjunctionLimit(_, _))
  }

  private lazy val totalBudget: PackratParser[TotalRewardBudgetLimit] = "total" ~> "reward" ~> "budget" ~> "is" ~> numericLit ^^ {
    case a => new TotalRewardBudgetLimit(BigDecimal(a))
  }

  private lazy val perCustomerTotalReward: PackratParser[CustomerTotalRewardLimit] = "per" ~> "customer" ~> "total" ~> "reward" ~> "is" ~> numericLit ^^ {
    case a => new CustomerTotalRewardLimit(BigDecimal(a))
  }

  private lazy val maximumCyclesPer: PackratParser[MaximumCyclesLimit] = "maximum" ~> "cycles" ~> "per" ~> timePeriod ~ "is" ~ numericLit ^^ {
    case t ~ "is" ~ a => new MaximumCyclesLimit(t, a.toInt)
  }

  private lazy val timePeriod: PackratParser[TimePeriod] = ("day" | "week") ^^ {
    case "day" => new Day()
    case "week" => new Week()
  }

  def parse(s: String) = {
    val tokens = new PackratReader(new lexical.Scanner(s))
    phrase(limit)(tokens)
  }
}

The use of lazy val rather than def, together with mixing in PackratParsers, offers a significant performance improvement.

We can then parse any string by calling LimitDsl.parse(…); this returns a Parsers.ParseResult that we can pattern match on like so:

LimitDsl.parse(value) match {
  case LimitDsl.Success(l, _) => l
  case LimitDsl.Failure(msg, _) => … failure handling
  case LimitDsl.Error(msg, _) => … error handling
}

If parsing succeeds, the result contains a single Limit that we can recurse down to evaluate. Assuming we want a boolean response, we can do something like this:

def evaluate(limit: Limit): Boolean = {
  limit match {
    case l: CustomerTotalRewardLimit => … custom logic
    case l: ConjunctionLimit => evaluate(l.left) && evaluate(l.right)
    case l: DisjunctionLimit => evaluate(l.left) || evaluate(l.right)
    case l: MaximumCyclesLimit => … custom logic
    case l: TotalRewardBudgetLimit => … custom logic
    case _ => throw new IllegalArgumentException("Unhandled limit: " + limit)
  }
}
As for the front end, we often start simple and just use a text box to allow rules to be entered as text. This is quick to implement but isn’t particularly user friendly or overly sexy. Luckily we can get as crazy sexy as we want on the front end, as long as we produce our nice plain DSL text as the output.