Visualising Invite a Friend

We love Beam. We want our customers to love Beam. We want them to tell their friends how much they love Beam, and we want their friends to love Beam. Like many other sites and services on the web, we encourage this with a refer-a-friend program.

The Beam referral program works in the usual way: you have a code, you give your code to a friend, they sign up with it, and Beam says thank you by giving you both some cash to spend.

These sorts of rewards are open to abuse, although the incidence of this is rare. We have fraud detection and protection methods in place, but one of the best ways to check on these is to expose the data and get a feel for it. So we started out by visualising our refer-a-friend network. What we found was so cool that we just have to share it:

It looks almost organic, like the view through a microscope. You have to zoom in for the best effect.

Surprisingly, nobody on our team can recall ever seeing an invite-a-friend network presented like this. Hopefully you find it as engrossing as we do.

To answer the common questions:

  1. Yes, this is a small sample of the data (there is too much to practically display at once), but the individual networks are complete.
  2. The arrows point from the inviter to the invitee.
  3. The visualisation is built using
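To make point 1 concrete, here is a minimal sketch (in Java, with hypothetical names — this is not the actual Beam pipeline) of how a complete individual network can be sampled: treat each referral as an edge from inviter to invitee, then walk the edges in both directions from a starting customer until nothing new is reachable.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: extract the complete referral network containing one customer.
// Names (ReferralSampler, networkOf) are illustrative.
class ReferralSampler {

    // invites: inviter -> list of invitees
    static Set<String> networkOf(String start, Map<String, List<String>> invites) {
        // Build an undirected adjacency view so we can walk invites both ways
        Map<String, Set<String>> adj = new HashMap<>();
        invites.forEach((inviter, invitees) -> invitees.forEach(invitee -> {
            adj.computeIfAbsent(inviter, k -> new HashSet<>()).add(invitee);
            adj.computeIfAbsent(invitee, k -> new HashSet<>()).add(inviter);
        }));

        // Standard breadth-first traversal from the starting customer
        Set<String> seen = new HashSet<>(Set.of(start));
        Deque<String> queue = new ArrayDeque<>(List.of(start));
        while (!queue.isEmpty()) {
            for (String next : adj.getOrDefault(queue.poll(), Set.of())) {
                if (seen.add(next)) {
                    queue.add(next);
                }
            }
        }
        return seen;
    }
}
```

Because the traversal ignores edge direction, the sampled network is a whole connected component — every inviter and invitee linked to the starting customer, however indirectly.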

I have measured out my life in coffee spoons

“I have measured out my life in coffee spoons,” wrote T. S. Eliot in The Love Song of J. Alfred Prufrock, a poem ostensibly about time and not much at all about coffee. Here, by contrast, we have a heatmap about time and space and very much about coffee. It plots coffee purchases made with Beam by where the purchase was made against the time of day, and it clearly shows that Dubai wakes up with coffee on its mind, and it wakes up at 7am. It also shows that all that coffee is having some effect, because the caffeination doesn’t stop for the rest of the day.
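The data shaping behind a heatmap like this is simple to sketch. Assuming each purchase carries a city and a timestamp (the record and field names below are illustrative, not our actual schema), the heatmap just needs counts per (city, hour-of-day):

```java
import java.time.LocalTime;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: bin purchases into a city -> 24-hour count array,
// which is exactly the matrix a time-of-day heatmap renders.
class CoffeeHeatmap {

    // Illustrative purchase record; not Beam's real schema
    record Purchase(String city, LocalTime time) {}

    // Returns city -> array of purchase counts, one slot per hour of the day
    static Map<String, int[]> binByHour(List<Purchase> purchases) {
        Map<String, int[]> bins = new HashMap<>();
        for (Purchase p : purchases) {
            bins.computeIfAbsent(p.city(), c -> new int[24])[p.time().getHour()]++;
        }
        return bins;
    }
}
```

Each row of the resulting map is one city's 24-hour profile; colour the cells by count and the 7am spike falls straight out of the data.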


We hope all that coffee is helping people to meet and talk and fall in love and create masterpieces, and we hope that you enjoy this bit of visualisation eye candy.

The Beam Engineering Team will leave you with another coffee quote, by the late, great Terry Pratchett: “Coffee is a way of stealing time that should by rights belong to your older self.”

Spring Cloud & AWS ElastiCache

Spring Cloud AWS provides a nice caching abstraction allowing objects to be cached in AWS ElastiCache via the @Cacheable annotation. There is some documentation on this, but limited real-world examples. For example, how do you run locally or in tests when you can’t connect to ElastiCache?

Here is how we did it at Beam…

We set up the Spring AWS caching largely as per the standard instructions. This is what is in applicationContext.xml:

 <cache:annotation-driven proxy-target-class="true"/>

 <aws-cache:cache-manager id="cacheManager">
     <aws-cache:cache-ref ref="beamCache"/>
 </aws-cache:cache-manager>

 <aws-context:simple-credentials access-key="${aws.key}" secret-key="${aws.secret}"/>

The major departure here is that we use a cache-ref to a bean called “beamCache”. This lets us choose at runtime between an embedded memcached server and the ElastiCache cluster.

The beamCache bean is configured like so:

package com.beam.spring;

import com.amazonaws.regions.RegionUtils;
import com.amazonaws.services.elasticache.AmazonElastiCacheClient;
import com.google.common.collect.Lists;
import de.flapdoodle.embed.memcached.Command;
import de.flapdoodle.embed.memcached.MemcachedExecutable;
import de.flapdoodle.embed.memcached.MemcachedProcess;
import de.flapdoodle.embed.memcached.MemcachedStarter;
import de.flapdoodle.embed.memcached.config.ArtifactStoreBuilder;
import de.flapdoodle.embed.memcached.config.DownloadConfigBuilder;
import de.flapdoodle.embed.memcached.config.MemcachedConfig;
import de.flapdoodle.embed.memcached.config.RuntimeConfigBuilder;
import de.flapdoodle.embed.memcached.distribution.Version;
import de.flapdoodle.embed.process.config.IRuntimeConfig;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cache.Cache;
import org.springframework.cloud.aws.cache.CacheFactory;
import org.springframework.cloud.aws.cache.ElastiCacheFactoryBean;
import org.springframework.cloud.aws.cache.memcached.MemcachedCacheFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;

import java.io.IOException;
import java.util.List;

@Configuration
public class CacheFactoryBean {

    private static final Log LOG = LogFactory.getLog(CacheFactoryBean.class);

    private static final int EMBEDDED_PORT = 11211;

    // Property names here are illustrative
    @Value("${cache.cluster.id}")
    String cacheClusterId;

    @Value("${cache.region.name}")
    String regionName;

    @Value("${cache.expiry.seconds}")
    int cacheExpirySeconds;

    @Bean(name = "beamCache")
    @Lazy
    Cache getCache() throws Exception {
        final MemcachedCacheFactory memcachedCacheFactory = new MemcachedCacheFactory();

        if (cacheClusterId.equals("localhost")) {
            LOG.info("Starting embedded memcached server on port " + EMBEDDED_PORT);
            embeddedMemcachedProcess(embeddedMemcachedExe());

            // Connect a memcache client to the embedded server
            LOG.info("Connecting to local memcached");
            return memcachedCacheFactory.createCache(cacheClusterId, "localhost", EMBEDDED_PORT);
        } else {
            LOG.info("Connecting to ElastiCache cluster: " + regionName + " :: " + cacheClusterId);

            // We are only interested in memcached
            List<CacheFactory> cacheFactories = Lists.newArrayList(memcachedCacheFactory);

            // Set up the AWS ElastiCache client
            AmazonElastiCacheClient amazonElastiCacheClient = new AmazonElastiCacheClient();
            amazonElastiCacheClient.setRegion(RegionUtils.getRegion(regionName));

            // Use the factory to produce the cache
            ElastiCacheFactoryBean elastiCacheFactoryBean = new ElastiCacheFactoryBean(amazonElastiCacheClient, cacheClusterId, cacheFactories);
            elastiCacheFactoryBean.afterPropertiesSet();
            return elastiCacheFactoryBean.getObject();
        }
    }

    @Bean(destroyMethod = "stop")
    @Lazy
    MemcachedExecutable embeddedMemcachedExe() {
        final Command command = Command.MemcacheD;
        // Hardcode the download url. Using the value from:
        IRuntimeConfig runtimeConfig = new RuntimeConfigBuilder()
                .defaults(command)
                .artifactStore(new ArtifactStoreBuilder()
                        .defaults(command)
                        .download(new DownloadConfigBuilder()
                                .defaultsForCommand(command)))
                .build();

        MemcachedStarter runtime = MemcachedStarter.getInstance(runtimeConfig);
        return runtime.prepare(new MemcachedConfig(Version.Main.V1_4, EMBEDDED_PORT));
    }

    @Bean(destroyMethod = "stop")
    @Lazy
    MemcachedProcess embeddedMemcachedProcess(MemcachedExecutable embeddedMemcachedExe) throws IOException {
        return embeddedMemcachedExe.start();
    }
}


The @Lazy annotation ensures that the embedded beans and the cache itself are only instantiated in those environments where they are required.

This is dependent on a few properties, which are the only things to change between local and production and everything in between. If the cluster id is set to “localhost” then a local embedded memcached server is automatically started (and stopped), allowing use in local and test environments. For production, stage, etc. environments the appropriate ElastiCache cluster can be specified, for example:



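As a sketch, such properties might look like this — the property keys and cluster names here are illustrative assumptions, not our actual values:

```properties
# Local development and tests: start the embedded memcached server
cache.cluster.id=localhost

# Production / stage: connect to the named ElastiCache cluster instead
# cache.cluster.id=beam-prod-memcached
# cache.region.name=eu-west-1
# cache.expiry.seconds=3600
```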
For the embedded server we use the flapdoodle embedded memcached server, see:

The next issue is the need for runtime configuration of the cache name, as we can’t use constants in the @Cacheable annotation with this config. We implement a CacheResolver that points everything at the “beamCache” bean configured in the factory outlined above. Switching to Scala now:

import java.util

import org.springframework.beans.factory.annotation.{Autowired, Qualifier}
import org.springframework.cache.Cache
import org.springframework.cache.interceptor.{CacheOperationInvocationContext, CacheResolver}

/**
 * Cache resolver that points everything at the same cache, which is typically the ElastiCache cluster.
 */
class FixedCacheResolver @Autowired()(@Qualifier("beamCache") cache: Cache) extends CacheResolver {

  override def resolveCaches(context: CacheOperationInvocationContext[_]): util.Collection[_ <: Cache] =
    util.Collections.singletonList(cache)
}


We can then use the @CacheConfig annotation to enable caching on methods in a service or cache wrapper component:

@CacheConfig(cacheNames = Array("beamCache"), cacheResolver = "fixedCacheResolver", keyGenerator = "beamKeyGenerator")

With methods along these lines to do the caching:

@Cacheable
override def findStoreGroupsForStore(storeId: Long): Set[StoreGroup] = {
  log.debug(s"Cache miss: findStoreGroupsForStore($storeId)")
  ... expensive evaluation ...
}

@CacheEvict
override def evictStore(storeId: Long) {
  // no-op - taken care of by the annotation and the magic of BeamKeyGenerator
  log.debug(s"Evicted Store($storeId) from the cache")
}

And the magic of “beamKeyGenerator” is to generate consistent cache keys across different methods and parameters, and also to take the environment into account, so we can have, for example, the uat and stage environments pointing at the same cache cluster.

import java.lang.reflect.Method

import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.{Autowired, Value}
import org.springframework.cache.interceptor.KeyGenerator

class BeamKeyGenerator @Autowired()(@Value("${beam.environment}") environment: String) extends KeyGenerator {

  private val log = LoggerFactory.getLogger(classOf[BeamKeyGenerator])

  private val storeGroupsForStore = "storeGroupsForStore"

  override def generate(target: scala.Any, method: Method, params: AnyRef*): AnyRef = {
    val m = method.getName match {
      case "findStoreGroupsForStore" => storeGroupsForStore + ":" + params.head
      case "evictStore" => storeGroupsForStore + ":" + params.head
      case _ => method.getName + ":" + params.mkString(",")
    }

    // We prefix with the environment so we can run with the same uber cache in multiple environments
    val key = environment + ":" + m
    log.debug(s"Generated cache key: $key for ${method.getName}")
    key
  }
}

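To make the effect of the key scheme concrete, here is a small Java rendering of the same idea (the names mirror the Scala above, and the exact key format is an assumption for illustration): the finder and the evictor deliberately collapse to the same logical key, so eviction removes exactly the entry the lookup cached, while the environment prefix keeps entries from different environments apart in a shared cluster.

```java
// Illustrative sketch of the environment-prefixed cache key scheme.
class BeamKeys {

    static String generate(String environment, String methodName, Object param) {
        // Both the finder and the evictor map to the same logical key,
        // so @CacheEvict removes exactly what @Cacheable stored
        String logical = switch (methodName) {
            case "findStoreGroupsForStore", "evictStore" -> "storeGroupsForStore:" + param;
            default -> methodName + ":" + param;
        };
        // The environment prefix isolates uat/stage entries sharing one cluster
        return environment + ":" + logical;
    }
}
```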

Building an external DSL in Scala

It’s a requirement we have run into many times, but the general case goes something like this: the marketing team want to set up ad hoc promotions and don’t want to be beholden to the developers to do so. Similarly, the developers would prefer to focus on developing rather than responding to urgent “can you set up this campaign” requests from marketing.

So there is just that pesky problem of how you actually give marketing the ability to create promotions with the flexibility they want. Our response is to build a DSL, and Scala’s StandardTokenParsers makes it a pleasure to do so.

Say, for example, we want to allow marketing to write English-like business rules such as:

total reward budget is 100000 and
(maximum cycles per day is 2 and
maximum cycles per week is 3 and
per customer total reward is 500)

We start with some modelling. We will call each of these lines a “limit”, so we start with a marker trait:

trait Limit

We want each of our limits to be represented by a case class so we have the following:

case class TotalRewardBudgetLimit(budget: BigDecimal) extends Limit

case class MaximumCyclesLimit(timePeriod: TimePeriod, value: Int) extends Limit

case class CustomerTotalRewardLimit(amount: BigDecimal) extends Limit

For completeness we model TimePeriod like so:

sealed trait TimePeriod

case class Day() extends TimePeriod

case class Week() extends TimePeriod

We want to allow our language to have conjunctions, disjunctions and precedence, by which we mean the ability to join sentences with and, or and to group them with brackets. So we have these additional limits:

case class ConjunctionLimit(left: Limit, right: Limit) extends Limit

case class DisjunctionLimit(left: Limit, right: Limit) extends Limit

We can then parse our DSL simply by leveraging Scala’s StandardTokenParsers like so:

object LimitDsl extends StandardTokenParsers with PackratParsers {

  lexical.delimiters += ("(", ")")

  lexical.reserved += ("total", "reward", "budget", "customer", "maximum", "cycles", "per", "day", "week", "is", "and", "or")

  private lazy val limit: Parser[Limit] = conjunction | disjunction | totalBudget | perCustomerTotalReward | maximumCyclesPer

  private lazy val conjunction: Parser[ConjunctionLimit] = "(" ~ limit ~ "and" ~ limit ~ rep("and" ~> limit) ~ ")" ^^ {
    case "(" ~ l ~ "and" ~ r ~ c ~ ")" => c.foldLeft(new ConjunctionLimit(l, r))(new ConjunctionLimit(_, _))
  }

  private lazy val disjunction: Parser[DisjunctionLimit] = "(" ~ limit ~ "or" ~ limit ~ rep("or" ~> limit) ~ ")" ^^ {
    case "(" ~ l ~ "or" ~ r ~ d ~ ")" => d.foldLeft(new DisjunctionLimit(l, r))(new DisjunctionLimit(_, _))
  }

  private lazy val totalBudget: Parser[TotalRewardBudgetLimit] = "total" ~> "reward" ~> "budget" ~> "is" ~> numericLit ^^ {
    case a => new TotalRewardBudgetLimit(BigDecimal(a))
  }

  private lazy val perCustomerTotalReward: Parser[CustomerTotalRewardLimit] = "per" ~> "customer" ~> "total" ~> "reward" ~> "is" ~> numericLit ^^ {
    case a => new CustomerTotalRewardLimit(BigDecimal(a))
  }

  private lazy val maximumCyclesPer: Parser[MaximumCyclesLimit] = "maximum" ~> "cycles" ~> "per" ~> timePeriod ~ "is" ~ numericLit ^^ {
    case t ~ "is" ~ a => new MaximumCyclesLimit(t, a.toInt)
  }

  private lazy val timePeriod: Parser[TimePeriod] = ("day" | "week") ^^ {
    case "day" => new Day()
    case "week" => new Week()
  }

  def parse(s: String) = {
    val tokens = new lexical.Scanner(s)
    phrase(limit)(new PackratReader(tokens))
  }
}


The use of lazy val rather than def, and the mixing in of PackratParsers, offer a significant performance improvement.

We can then parse any string by calling LimitDsl.parse(…). This returns a Parsers.ParseResult that we can pattern match on like so:

LimitDsl.parse(value) match {
  case LimitDsl.Success(l, _) => l
  case LimitDsl.Failure(msg, _) => … failure handling
  case LimitDsl.Error(msg, _) => … error handling
}

If successfully parsed, this contains a single limit that we can recurse down to evaluate. Assuming we want a boolean response, we can do something like this:

def evaluate(limit: Limit): Boolean = {
  limit match {
    case l: CustomerTotalRewardLimit => … custom logic
    case l: ConjunctionLimit => evaluate(l.left) && evaluate(l.right)
    case l: DisjunctionLimit => evaluate(l.left) || evaluate(l.right)
    case l: MaximumCyclesLimit => … custom logic
    case l: TotalRewardBudgetLimit => … custom logic
    case _ => throw new IllegalArgumentException("Unhandled limit: " + limit)
  }
}

As for the front end, we often start simple and just use a text box that allows rules to be entered as text. This is quick to implement but isn’t particularly user friendly or overly sexy. Luckily we can get as crazy sexy as we want on the front end, as long as we produce our nice plain DSL text as the output.