
Drupal as an IVR? Just add Twilio.

Before the web was used to give users data access and automated customer service, there were IVRs (interactive voice response systems). You know these ubiquitous technologies by experience if not by name: you use them to pay bills by phone, check account balances, or even register for classes.

If you have ever wanted to integrate telephony into your website, you should check out Twilio. Not only is the integration fairly straightforward, it's amazing how much you can do.

Using Twilio’s quick start tutorials, I put together a short demo to help you jump in.

Answering the call

To start you will need to install the Twilio module. You will also need a Twilio number. You can get a demo number for free.

Next we want to use Drupal to answer calls to that number. Start in Drupal by going to admin > configuration > twilio (admin/config/system/twilio). At the bottom of the page under Module callback, you will notice two URLs, the voice one ending in twilio/voice. If you point your Twilio number to this endpoint, the hook hook_twilio_voice_incoming will fire, enabling you to program exactly how you want the call handled.
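If you do want to go the custom-code route, a skeletal hook implementation might look like the sketch below. The parameter list and response mechanism here are assumptions on my part; consult the Twilio module's API documentation for the exact signature and for how to return TwiML to the caller.

  /**
   * Implements hook_twilio_voice_incoming().
   *
   * Skeletal sketch only: the parameters are assumptions; check the Twilio
   * module's API documentation for the real signature and response format.
   */
  function mymodule_twilio_voice_incoming($number, $message) {
    // Log the caller's number so we can confirm the hook is firing.
    watchdog('mymodule', 'Incoming voice call from @number', array('@number' => $number));
  }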

For our demo we are going to avoid custom programming, so we will use the TwiML feature of the Twilio module. TwiML is an XML-based markup language for providing call-handling instructions to Twilio. To create a TwiML, simply click the TwiML Manager tab on the Twilio configuration page (i.e. admin/config/system/twilio/twiml).

Note: don’t worry about inputting the Account SID and other required fields for this demo. They are only needed for outbound phone calls and SMS messages.

Create a "demo start" TwiML using the form at the bottom of the TwiML Manager and input:

Name: Demo start
Machine name: demo_start

  <Gather action="/twilio/twiml/demo_uid_check" method="POST">
    <Say>To proceed, enter your user I.D. followed by the pound sign.</Say>
  </Gather>
  <Say>We did not receive any input. Goodbye.</Say>

When you save the TwiML, a new endpoint URL is created and displayed in the TwiML Manager list table. It should follow the format: http://[yourdomain]/twilio/twiml/demo_start

Copy this URL and paste it into the voice request URL in Twilio. This setting can be found in your Twilio account by clicking the Numbers menu and then clicking your demo phone number.

Once you have saved the setting, call your demo number. If everything is set up correctly, the Say messages in the demo TwiML will be played back to you.

Looking at the code for the TwiML we just executed, the Say tag's function should be self-explanatory. The Gather tag allows us to do something pretty interesting: it requests input from the user's phone keypad. This tag instructs Twilio to gather the user's input and then submit the result to the action /twilio/twiml/demo_uid_check.

UID check TwiML

Next we want to create the "Demo uid check" TwiML to process the user input. From the TwiML Manager in Drupal add another TwiML using the settings:

Name: Demo uid check
Machine name: demo_uid_check

  <?php
  $account = user_load($_REQUEST['Digits']);
  if (empty($account->name)) {
    header("Location: /twilio/twiml/demo_uid_error");
    exit;
  }
  ?>
  <Say>Thank you, <?php echo $account->name; ?>. I will now transfer you.</Say>
  <Dial>+10015551212</Dial>
  <Say>The call failed or the remote party hung up. Goodbye.</Say>

(Replace +10015551212 with an actual number you want to forward to.)

Now call your demo number back and enter a valid user ID from your site. If everything is connected correctly, you should hear the Say tags from the UID check TwiML, and the call should be transferred to the number specified in the Dial tag.

This script contains some dynamic PHP elements. The PHP code looks up the user account in Drupal using the inputted digits ($_REQUEST['Digits']), both to validate the user and to play back the user's name.
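Note that the PHP redirects to /twilio/twiml/demo_uid_error when no matching account is found, so that TwiML needs to exist too. A hypothetical version can be as simple as:

  <Say>Sorry, we could not find an account for that user I.D. Goodbye.</Say>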

Summing up

This was just a quick example of how you can use Twilio with Drupal to provide data access and IVR functionality. Twilio offers an array of quickstart recipes that show you how to do all kinds of useful tricks with both phone calls and SMS texts. Following the recipes, you can use Drupal to implement call tracking, click-to-call features, phone number verification, call recording/voicemail, call routing, auto attendants, and more.

The use case I was working on was call tracking and SMS notifications upon receiving phone calls and form submissions. If you would like to see it in action, check out the Intelligence Twilio submodule in the latest dev release of Intelligence. I am hoping to get a release rolled out in the next couple of weeks.

Important note: for security reasons, I recommend putting any PHP code that returns dynamic data into a custom module using the Twilio hooks rather than the TwiML Manager. The TwiML Manager comes in quite handy for a quick demo; however, for a production app, the PHP eval feature of the TwiML output can create security vulnerabilities if you are not careful.

Are you using Twilio and Drupal to do some cool stuff? I would love to hear your comments.

photo by: pixle

Smarter Drupal projects-within-projects management with git-subtree

In order to properly manage a Drupal project you have to master the art of managing projects within projects. A Drupal site is really made up of numerous components such as modules, features and themes that in turn are their own projects.

The challenge becomes determining the best way to version control the code. The most straightforward way to manage a project is to make the entire site a single monolithic repo. However, this becomes tricky when you want to commit changes in modules back to their own repos, particularly if you want to maintain the commit history for that specific module.

Git is a fantastic version control system. However, it does not handle the project within a project scenario well. Git does have a submodule function that is designed to help solve this, however, the submodule approach often causes more problems than it solves.

In particular, the problem is that a submodule does not automatically become a part of the main repo. You have to remember to merge each submodule in, which creates a lot of room for errors and often leads to head-scratching discrepancies between the code on your local and target servers.

Luckily, there is a new(ish) git feature called git-subtree that solves this problem in a more elegant way. Subtree enables modules to be part of both the parent repo and their own repo at the same time. A subtree module's code is automatically merged in, allowing you to manage the main repo as a simple, homogeneous project, yet you can still commit/push/pull specific modules to their own repos. You can even keep a distinct history so the module maintains a log of all the changes that directly pertain to it.

Git subtree setup

There are several different ways to set up a subtree. In this example I will take the more robust route; while it involves more steps, it will prove more convenient down the road.

To start, I am assuming we have a Drupal project setup in Git that contains Drupal core. We want to add a module to that project that we are also maintaining as a separate project. For this example I will use the Intelligence module which I have been doing a lot of work on recently.

Step 1: Remote setup (optional)

Using Git bash from the root of my Drupal project, the first step is to set up a git remote for the module using the command:

git remote add -f [remote name] [repository url]

If you are a maintainer and want to push back to the repo, use your maintainer Git URL from the module's drupal.org project page:

git remote add -f intel [maintainer repository URL]

For anonymous checkouts, use the public Git URL:

git remote add -f intel [anonymous repository URL]

Setting up a remote is an optional step that essentially creates an alias, so that whenever we want to push or pull in the future we can just use the remote label "intel" instead of the full remote URL.

Note the -f option. This causes a fetch to be executed when the remote is set up, so that in the next step the local repo knows the available branches.

Step 2: Add the module using subtree

To add the module so that it can be managed as a subtree, use the command:

git subtree add --prefix=[path to module] [remote] [branch] --squash


git subtree add --prefix=sites/all/modules/intel intel 7.x-1.x --squash

This command is where the magic is. It adds the module where specified in the prefix value and in a way that it transparently acts the same as any other code in the main project repo. However, you can also push and pull the code in that module back to that module’s project using:

git subtree [push|pull] --prefix=[path to module] [remote] [branch] --squash


git subtree push --prefix=sites/all/modules/intel intel 7.x-1.x --squash

Note the use of the --squash flag. This is added so the commit history of the module is not merged into the main project's commit history. This is generally what you want for multi-purpose modules; however, you may not want this option if your module is closely coupled with your main site, e.g. a module implementing custom features for that specific site.

Step 3: Splitting commits (optional)

The last step enables the module to maintain a commit history specific to changes in its own code. While the command is called split, it really acts more like a filter: it creates a synthetic history for the module that includes only commits containing changes to the module's code.

The command to enable this functionality is:

git subtree split --prefix=[path to module] --annotate="(annotation prefix) " --branch [branch name]


git subtree split --prefix=sites/all/modules/intel --annotate="(split) " --branch intel

This command creates a "hanging" branch in your main project where a separate synthetic history can be maintained. Now when you do a git subtree push to your module, any commits made to the main repo that include changes to the module will be added to the module's repo.

Bottom line

So far the subtree feature seems to be a significant step in the right direction compared to submodules. There are a few pain points, though. One of the primary problems I kept running into is that it was sometimes tricky to set up the split so the history initializes properly.

Another issue is you have to be in the root of the main git repo to push/pull the subtree modules, and it gets tedious always typing out the long paths to Drupal’s modules.

The original author of the subtree feature was working on two very handy features that seem to be abandoned: a push/pull-all command, which would be a big time saver, and a .gitsubtrees file, somewhat similar to a Drush make file, providing a list of all the modules and pinned versions managed as subtrees. Hopefully these features will get worked out, as they would be very helpful.

One last note: there is not a ton of documentation about the subtree command. What makes things even more complex is that there is a somewhat similar concept called the Git subtree merge strategy. While similar in purpose, these techniques are different, which can make Googling for help a bit tricky.

*Cool tree image by sniffette.

My Top 5 Favorite Drupal Development Workflow Tricks

When you’ve been working with any platform for a long time, you start to build a relationship with that platform. For Drupal in particular, a lot of developers seem to take similar paths when it comes to building this relationship. At first, we’re struck with awe as we admire the sheer power of Drupal, and then we start marching up the roller coaster some of us would call the “learning curve”. We love it, we hate it, but most of all - we learn from it. After visiting the issue queue and StackOverflow for the millionth time, we start keeping a few tricks up our sleeves for the common “features” that Drupal sometimes throws at us.

These are some of the most common tricks and solutions I’ve discovered that I use when working with Drupal, whether it’s a module, drush command, or a code snippet that hopefully helps out those of you who are relatively new to Drupal.

Undefined index: distribution_name

If I had a nickel for every time I saw this error, I’d have some nickels. Chances are you installed a distribution and updated Drupal, and now you’re getting this error. It’s not anything serious, but it sure is annoying. There are two ways we can handle this:

Add the following check into includes/

if (!array_key_exists('distribution_name', $info)) {
  $info['distribution_name'] = 'Drupal';
}
If you did that, we just killed a kitten somewhere, because that's technically hacking core. Alternatively, we can solve the issue in the database. It's scary, I know, but it works: once it's done and the cache is cleared, problem solved.

Use the following MySQL command:

UPDATE `MYDATABASE`.`system` SET `status` = '1', `info` = '' WHERE `system`.`filename` = 'profiles/MYPROFILE/MYPROFILE.profile';

Easily backup your database

If you aren’t using Drush yet, now’s a good time to get it set up. Drush is a magical tool we use in Drupal that helps up significantly expedite our workflow. If you’re on Mac or Linux, there are multiple ways to install Drush, I prefer using Homebrew. (brew install drush). There is also a Windows installer available as well.

As far as backing up your database goes, there are a multitude of ways to do this in Drupal. If you have a small site, the easiest way is definitely the Backup & Migrate module. If you have a large Drupal site, Backup & Migrate might not work so well, so we can always fall back to the terminal.

mysqldump -u root -p[root_password] [database_name] | gzip > dumpfilename.sql

But if you don’t have your database credentials available, just use drush!

drush sql-dump --gzip --result-file

An advantage to this approach is that Drush likes to organize everything for you. All of our backups, including code backups during updates, are all put away into their own site directories in ~/drush-backups. Also, if you haven't heard of sql-sync-pipe, consider using it as an alternative to Drush's own sql-sync. It's amazing.

Working with files

One of the biggest pains when working between a live server and your local machine is dealing with broken images. You put all your content and images on the production server, and move the database down to your local while pushing code or features back up. Unfortunately, we also have to move the files directory down so we don't have broken images locally. We can use drush rsync to move the files down, but that could potentially mean a few hundred megabytes to a couple gigabytes of storage being taken up on your machine. Mark Carver published a post on this using Apache rewrites, but there's a Drupal way to handle it.

To ease this pain, download the Stage File Proxy module and add the required line to your settings.php file. Stage File Proxy by default transfers only the files you need from production down to your local. Stage File Proxy also has an additional mode in which it can serve a 301 redirect to files on the server, so it's possible to see all your images without having a local files directory at all. How convenient, right?

$conf['stage_file_proxy_origin'] = 'http://www.example.com';
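If you prefer the 301 redirect mode mentioned above, the module supports a hotlink flag as well (verify the variable name against your version's README):

$conf['stage_file_proxy_hotlink'] = TRUE;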

Local Settings.php file

If you work in a team development environment, this will definitely help you. We keep all of our client projects in a central repository and push and pull to local machines and development or production servers, but one file we don't sync across environments is settings.php, for a couple of reasons:

  1. We don’t want to keep database credentials in the repository, mostly for security reasons.
  2. Everyone is different. Some of us need different settings for things like varnish, error reporting, etc.

Drupal Dork has a more in-depth article on using local settings for development sites. Just add the following lines to your settings.php and add local.settings.php to your repository's .gitignore so everyone can have their own copy of database settings and other configurations.
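The snippet is the usual conditional-include pattern at the bottom of settings.php (the filename is your choice, as long as it matches your .gitignore entry):

if (file_exists(dirname(__FILE__) . '/local.settings.php')) {
  include dirname(__FILE__) . '/local.settings.php';
}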


Compare enabled modules

Let’s set the stage. You inherit a client site and take a snapshot of the database before making any changes. A few days down the road after committing a lot of updates, you finally push up to the server and realize something isn’t working the way it used to - but you have no idea what happened. Were you missing some modules? Was there something in the .gitignore that you missed? Maybe there was a git submodule that wasn’t included. An easy way to check is to restore the original database and compare a list of the modules enabled now and then using Drush.

drush pm-list --status=enabled --type=module --pipe > enabled_modules_now.txt

Run the same command for the old site and the new site, then just use an online diff checker to compare the two files. Now we know which modules were enabled then that aren't enabled now, and vice versa.


These are just some of my favorite Drupal workflow tricks that help in some of these general and specific situations. If you have any tips and tricks, share in the comments below!

Introducing the Intelligence System

One of the most inspirational books I have read over the last decade is The Fifth Discipline by Peter Senge. Senge espouses that in the long run the only sustainable competitive advantage an organization can create is the ability to learn faster than the competition. The book’s focus is around a single critical component for maximizing learning and innovation, systems thinking.

Ever since reading that book I have looked for opportunities to integrate systems thinking into web development. Websites are ideal for such an approach, which is about solving complex problems by understanding how the components in a system interact and affect outcomes in a holistic way.

As an engineer, I find the typical way website strategies are developed rather odd. Strategic planning is derived from the needs and knowledge of a few stakeholders. There is nothing wrong with planning around heuristics; it is fast and efficient. But it is only half of the process.

In a world as complex as the web, one that evolves in internet time, how valid are these mental models? As humans we know what we need (sort of), but how do we really know what will delight end users and what drives value for the organization? How much do experience, personal preferences, and context affect what we think is best? How do we really know what works and what doesn't?

The standard operating procedure is like a scientist creating a hypothesis and developing an experiment, then never gathering or analyzing the data to validate the hypothesis. (We all learned the proper way to do this in elementary school. That was the whole point of the science fair.)

The lack of systems thinking in web development is not just a problem of vision. It is also a problem of tools. You need a set of tools capable of quantifying all desired outputs of a website. It needs to be able to relate effects (outputs) to any number of causes (inputs). Data has to be transparent across the entire team to raise the collective intelligence of all decisions and create a shared vision. Finally, it needs to be simple, so it actually gets used.


Over the last few months we have been working on a framework for addressing these challenges using Drupal. The project started out as just a port of the dashboard from the SEO Tools module which integrated a set of tailored Google Analytics reports into Drupal.

I wanted the Drupal 7 version to be less SEO specific and more about content and engagement. As I dug into what was possible with Google Analytics, the project morphed into something else.

Monday the Intelligence system went into beta. Now the biggest challenge is explaining what it is and what it is really designed to do.

It’s maybe best described by its goal: A system for integrating data to support better decision-making (although that is rather high level).

Or maybe as a metaphor:  The Intelligence system is to websites what the sensory nervous system is to humans.

Or maybe even as my brother put it as I explained it over a few cervezas: “oh it’s like Google Analytics on steroids.” Well, he got part of it.

To extend the metaphor a bit: we can think of the websites we build today as being like a person with only half of a nervous system. There is motor control but no senses. The brain (the decision makers) tells the body (the website) what to do, but without eyes, ears and other sensory feedback there is no way to know if it is really doing what we ultimately want. We might tell the mouth to say the most amazing things, but without the eyes we may be talking to a blank wall, or to a crowd of people who aren't really listening.

What problem(s) are we trying to solve?

Maybe more insightful is to describe the problem, or more accurately the problems, we are trying to solve. The challenge is that the list is actually quite long and keeps growing:

  • Do people really want that new feature we just pushed?
  • What content resonates with our audience?
  • Which is more effective, Twitter or Facebook?
  • and the scores of other questions stakeholders have asked us over the years.

In reality there are three central problems this system is designed to help us solve.

Are we really getting better?

Leading websites are constantly being improved. New ideas are implemented based on new knowledge. By measuring the outcomes, these mental models are either validated or corrected before too much damage is done. In either case, understanding is gained, leading to better and more effective decision-making in the future.

How do we achieve more with less?

One of my favorite management quotes is "We know we are getting 80% of the results from 20% of the effort. The problem is we don't know which 20%." Productive websites require a lot of inputs: content, marketing campaigns, visitors, engagement, etc. Each adds to the output, but not all equally. A core goal of the Intelligence system is to help teams maximize efficiency by identifying and focusing on what provides the most value.

What is my return on investment?

Nearly as shocking as the lack of systems thinking in web development is the lack of capital budgeting principles. Innovation must be funded, yet most organizations treat web initiatives as a cost center rather than a profit center. Once again there is a lack of tools to quantify returns in a holistic way, so what is hard to measure simply is not measured. Intelligence can help organizations determine the real value their web efforts are generating.

Getting it

When people first see the system, the typical response is "Oh, look at the pretty reports." OK, the reports are cool, but that is not really the point.

The goal is to create actionable analysis. To give people data in a way that will drive meaningful change. Google Analytics has cool reports, but when was the last time you used them? When was the last time your designers, developers and content creators used them?

The goal is to help people understand the systems within systems that make up a website and how they interact to affect outcomes. To provide insight that facilitates team learning and more efficient innovation. To help people build websites with more intelligence.

Currently we are working on documenting the system. Most of the docs will be standard how-to tutorials, and I already have a few out (see the project page). But while writing the next tutorial on scoring, I realized I need to provide some bigger-picture background on what we are trying to achieve. So I am diverging from the more hands-on tutorials to do a few posts about applying systems thinking to websites.

photo by Ars Electronica

Creating Custom Google Analytics Events In Drupal

One of the more powerful but rarely used features in Google Analytics is event tracking. A standard Google Analytics install just tracks page views. Event tracking allows us to record many additional actions, such as when someone shares a page on a social network, comments on a blog, submits a form, or clicks on a call to action.

The challenge with event tracking is that it is not automatic like page views: your site has to tell Google Analytics when an event has occurred. The Intelligence module solves this by providing a flexible framework for managing and triggering events in Google Analytics.

Preset and Custom Events

The Intelligence module comes with several preset events that automatically track items site managers commonly want to know about. You can view the events in your site by going to Admin > Config > Services > LevelTen Intelligence > Events. The items in this list will vary based on which modules you have installed that implement preset events.

Intelligence events list

The events in the list after a fresh install are all preset events that modules have coded to track automatically. If you completed the previous tutorial on managing Google Analytics goals in Drupal, you will notice the example goal "Bingo" we created is included in the events list. This is because goals are actually just a special kind of event. In this tutorial we will look at creating custom events that you can use to track practically anything on your site without programming.

To create your first custom event, click the Add event inline link at the top of the event list.

The event add/edit form has many options that can be a little overwhelming at first. Don't worry, you don't have to understand all the fields to start creating custom events. In fact, our first custom event only needs two fields: key and title.

Adding event crossing attempt

The key is a unique identifier for the event. The title is the administrative name for the event. Enter the following text for your first event:

  • Key: crossing_attempt
  • Title: Crossing attempt

Leave all other fields as is and click Add event.

You have now created your first custom event. To trigger the event, we will use the intel event fields we implemented in the managing goals tutorial. If you edit a node/entity of a type that has the intel event field collection added, you can now trigger the custom event by selecting it in the event dropdown.

Go ahead and create a new basic page node titled "Bridge of Death" and add the Crossing attempt event to it. If you have not added the intel event fields to the basic page content type, you will need to do that first (see managing goals tutorial link above).

node edit adding crossing attempt event

Now open up the Google Analytics real time events report in another browser tab. When you view the new node page, you can now see the event triggering in the report.

Hint: there is a refresh filter applied to events, so you may have to navigate to another page then back to trigger the event.

Google Analytics real time events report showing crossing attempt

Understanding Google Analytics events 

Events in Google Analytics are similar to goals, but events offer some additional flexibility. Goals essentially have two parameters: a name and a value. Events let us use up to five parameters:

  • category
  • action
  • label (optional, defaults to empty string)
  • value (optional, defaults to 0)
  • non-interaction (optional, defaults to false)

While Google Analytics goals use a single name identifier, events use three. The category, action and label parameters work together to form a classification hierarchy. The default Google Analytics events report displays a list of all event categories. If you click a category, you will see a list of all actions under that category.

For example, if you click the Crossing attempt event in GA's real time event report, you will see the event action, which by default is set to the node/entity title, and the label, which defaults to the page's system path. Since we left the action and label fields blank for the Crossing attempt event, the Intelligence module automatically filled in the node title and system path for these parameters.

Google Analytics action and label under events

The value parameter is simply a numeric value. It might represent a dollar value similar to Google Analytics goals or might be some other type of number related to the event. For example, for a video play event the value could be the number of seconds the visitor watches the video.

Non-interaction is a boolean value (true or false). If set to true the event will not count as a hit for bounce rate calculations.
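For reference, these five parameters map directly onto the classic ga.js event call that is ultimately pushed to Google Analytics. The values below are purely illustrative:

// category, action, label (optional), value (optional), non-interaction (optional)
_gaq.push(['_trackEvent', 'Crossing attempt', 'Bridge of Death', 'node/123', 5, false]);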

You can set these values in Drupal under the Google Analytics event values fieldset on Intelligence's event edit form (select edit in ops from the event list at Admin > Config > Services > Intelligence > Events).

Custom Event triggers

The Intelligence system implements a wrapper, called an intel event, to enhance management of Google Analytics events. One of the main features of this wrapper is that it lets you customize when and how an event is triggered, leveraging the power of jQuery.

The Crossing attempt event we created earlier triggers whenever someone views the page. This is the default behavior for intel events. But let's say we want to trigger an event when someone does something on the page, such as clicking a particular link or submitting a form. We can accomplish this using the selector and event fields in the Event trigger fieldset on the event edit form.

To illustrate how this works, let's create a second custom event called Crossing success using the following inputs:

  • Key: crossing_success
  • Title: Crossing success
  • jQuery event: click
  • jQuery selector: .correct-answer 

Leave all other settings as their defaults and click the Add event button at the bottom. 

Adding intel event crossing success

The Crossing success event does not automatically trigger when the page loads. It will trigger only when a visitor clicks a link with the class correct-answer.
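Conceptually, the trigger settings boil down to a jQuery binding like this (illustrative only; the Intelligence module generates its own handler):

// When a .correct-answer link is clicked, fire the Crossing success event.
jQuery('.correct-answer').click(function () {
  // ... Intelligence pushes the intel event to Google Analytics here ...
});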

To see how this works, let's go back to our Bridge of Death node to enable the event. We need to do two things: first, add the Crossing success event to the node, and second, add some content to the node that contains the trigger link.

In the body of the node, enter content containing the answer links, such as the hypothetical example below. (Make sure to enter it as source HTML: turn off any WYSIWYG editor and select a text format that permits HTML.)
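The exact markup is up to you, as long as the correct answer is a link with the class correct-answer inside a choices wrapper (to match the selectors used in this tutorial). A hypothetical version of the gatekeeper's question:

<p>What is the air-speed velocity of an unladen swallow?</p>
<ul class="choices">
  <li><a href="#">African swallow</a></li>
  <li><a href="#" class="correct-answer">What do you mean? An African or European swallow?</a></li>
</ul>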

Under the Intelligence fieldset add the event by clicking the Add another item button then selecting the Crossing success event. In the value field next to the event selector, enter the number 5. This will set the value of this event to 5 in Google Analytics.

Node edit form implementing two crossing events

Now when you view the page, the Crossing attempt event still triggers when the page is viewed, but the Crossing success one does not. Only when a visitor clicks the link for the correct answer will the Crossing success event trigger. Using these two events, you could then calculate how many of the people who come to the Bridge of Death page know the answer to the gatekeeper's question.

You can combine custom events and various settings to devise a wide range of variants. For example, you might also want to track how many people clicked a wrong answer. Hint: create a Crossing fail event similar to Crossing success and change the jQuery selector to .choices a:not([class='correct-answer'])

Or you could set up several different pages with variants of the question and see if there is any difference in how people answer each variant. The Google Analytics events report, coupled with GA's filters, provides fairly flexible tools for analysis.

Google Analytics event report

Beyond the basics

This tutorial is meant as an introduction to using Intelligence events. Custom events enable site administrators to set up their own events without having to program modules. We covered the essentials here; there are more advanced options, such as tokens, event types and callbacks, which we will cover in future tutorials.

Even more sophisticated events are possible with module programming. The Intelligence system provides a set of preset events for things like conversion funnel and engagement tracking. It also provides a set of hooks for module developers to implement their own intel events. These subjects will be covered in future tutorials.


Managing Google Analytics Goals in Drupal

A vital step towards more meaningful analytics is the proper use of goals. One of the challenges of Google Analytics is keeping its goals in sync with changes in your Drupal site. The Intelligence module provides a much more flexible and integrated way of managing your Google Analytics goals from within Drupal's admin.

The goal of this tutorial is to walk through how to set up Intelligence, our Google Analytics integration module for Drupal, so you can trigger your analytics goals and assign custom values on any Drupal entity. Before starting this tutorial, make sure you have the latest version of the Intelligence module installed.

Setting up goals in Google Analytics

Before you can configure LevelTen Intelligence to manage your goals, you must set up corresponding Google Analytics goals. Once the goals have been set up the first time, you will be able to customize them within Drupal without having to make additional changes in Google Analytics.

To do the initial goal setup, go to the Google Analytics Admin for the property you are using with LevelTen Intelligence. If you are in the right place, you should see three columns labeled Account, Property and View (Profile).

Google Analytics adding view goals

Under the View (Profile) column click Goals. On the goal list page, click the Create a Goal button.

Google Analytics Goal setup templates page

With newer Google Analytics accounts the first step will be a Goal setup page that gives you several Templates to choose from. If you see this page, select the Custom radio button and then click Next step. If you don't see this page simply proceed to the next step described below.

Google Analytics adding a event triggered goal

Under Goal description, create a name for your goal. For the Type, select Event then click Next step.

Google Analytics using regular expression event goals

Under Goal details, set the drop-down box next to Category to Regular expression. The input for the text field should follow the pattern:
[Goal name]\+$

If, for example, you were setting up a goal named Contact for when someone submits your contact form, you would enter:

Contact\+$
Under Use the Event value as the Goal Value for the conversion, make sure the slider box is set to Yes.

Finally click Save Goal.

Repeat these steps to setup additional goals as needed.

Setting up goals in Drupal

Once you have set up your goals in Google Analytics, you need to let Drupal know what they are so they can be triggered properly by the site.

To setup your goals within Drupal go to Admin > Config > Services > LevelTen Intelligence > Scoring. Expand the Advanced settings fieldset towards the bottom of the form.

Adding goals to Intel scoring admin

There are two text boxes to enter your goals: (event driven) Goals and Submission goals. Event goals can be triggered on any page view. Submission goals are a special subset that use special logic to trigger only when a form is submitted. Submission goals will be covered in a later post about conversion tracking. For now it is fine to add all your goals to the standard event driven goals text box.

Enter one goal per line as | (pipe) separated elements. The first element should be the Goal ID set by Google Analytics, which can be found in the goals list in Google Analytics. The second element is the goal name (title). The third is an optional description for administrative purposes.
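For example, two hypothetical goals might be entered as:

1|Contact|Triggered when the contact form is submitted
2|Newsletter signup|Triggered when someone subscribes to the newsletter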

Finding Google Analytics goal ids

Triggering goals with fields

Now that Drupal knows how to sync with your Google Analytics goals, we can trigger them from nodes and other entities. The Intelligence module uses the fields system to attach goals to nodes.

Node level goal settings use a pair of fields: one to select which goal to trigger and another to assign the value for that goal. Since the two fields are coupled together and multiple goals can be triggered per page, field collections are used to help organize the user interface.

Field collections are provided by the Field collection module. Download and install this module if it is not already installed on your site.

Once field collections are enabled, go to the content type you want to add the goal fields to: Admin > Structure > Content types > [content type] > manage fields.

Adding intel event col field to basic page

On the manage fields page, add a field with the following values:

  • Label: Events
  • Machine name: field_intel_event_col
  • Type of data: Field collection
  • Form element: Embedded

Click Save at the bottom of the page. On the next page, Field settings, leave the defaults and click Save field settings. On the field instance form, change the Number of values under field settings to Unlimited, then click Save settings.

Intel events col field settings

Now that we have the collection created, we need to add the actual fields to it. Do this by going to Admin > Structure > Field collections, and for the field_intel_event_col field click manage fields under Operations.

Field collections list page

On the Manage fields page, add a field for selecting goals using the following settings:

  • Label: Event
  • Machine name: field_intel_event
  • Type of data: List (text)
  • Form element: Select list

Click Save.

Intel event goal selection field

On the next page, Field settings, you don't need to change anything, so simply click Save field settings. The next page presents the full instance field settings. You don't need to change anything here either, so simply click Save settings. Note: you want to leave the Number of values setting at 1.

Once you are done creating the select field, you should be back at the Manage fields list for the field collection. Now we want to add a field to set the value for our goals. Add another field using the following settings:

  • Label: Value
  • Machine name: field_intel_event_value
  • Type of data: text
  • Form element: Text field

Click Save to create the field.

Intel event goal value field

On the next two field settings pages, the default settings will work fine, so simply click the Save field settings button. When the field collection setup is complete, you should see the two fields in the list on the Manage fields page.

Completed intel event field collection


Testing your setup

Now that your selection fields are set up, you can go back to the content type you originally added the field collection to, and you should see the intel event field collection on the node form.

Intel events node fieldset

Simply select one of the goals and optionally enter a value. Save the node then view the page.

If everything is setup properly, you should be able to see the goal triggered in Google Analytics in the Real-Time Conversions report.

Google Analytics real time conversion report

How to Install the Intelligence Module

This post covers step-by-step instructions for getting the core of the Intelligence module up and running. Once the steps are complete, your Drupal site will start tracking expanded Google Analytics data and you will be able to generate many of the Intelligence reports. Additional posts will explain how to customize the system and unlock more advanced features.

Install the LevelTen Intelligence module

Download the Intelligence module and follow the standard process for installing modules. To get started, only enable the base LevelTen Intelligence module without any of the submodules. We will cover submodules later on.

Also download and install the dependent modules if you don't already have them installed; the steps below rely on the Libraries, Google Analytics, and Google Analytics Reports modules.

Setup an Intelligence Google Analytics account

LevelTen Intelligence works directly with Google Analytics. You will want to set up a dedicated Google Analytics tracking ID to use with LevelTen Intelligence.

Most Drupal sites already use Google Analytics via the Google Analytics module and thus have an existing tracking ID. If you already have one, set up a second one specifically for LevelTen Intelligence. If you don't currently have a Google Analytics tracking ID, set up two: one to track standard data via the Google Analytics module and one to track enhanced data via the Intelligence module.

You can create tracking IDs in the Google Analytics admin. Tracking IDs are associated with a Property, so you can create two tracking IDs under the same account by setting up two properties. Navigate to the account you want to use in the Google Analytics admin, then use the select box under Property and select Create new property. Make sure to select Classic Analytics as the tracking method on the New Property setup page.

Get a LevelTen Intelligence API key

Next you will need an API key from the LevelTen Intelligence API. Create an account on the API site using the registration page.

Once logged in, click My Properties in the right-hand menu, then Add new property. Select a name for your property to help you identify which property is associated with this key. Input the Google Analytics tracking ID you created for LevelTen Intelligence in the previous step.

Install Libraries

You will need to install the LevelTen Intelligence libraries from the API site into Drupal. To do this, click the Downloads link in the right-hand user menu while logged into the API site. Download your preferred archive format (zip for Windows, tar.gz for Mac & Unix) and unarchive the files into your libraries directory in Drupal. This is the directory used by the Libraries module. It is usually:

[Drupal root]/sites/all/libraries/

Configure LevelTen Intelligence settings

LevelTen Intelligence Drupal admin configuration screenshot

Log in to your Drupal site and go to admin > config > services > LevelTen Intelligence. Input the tracking ID and API key generated on the API site into the form and click Save configuration. If everything is set up properly, you will get the status message "LevelTen Intelligence API connected."

Next you will need to configure the Google Analytics API by going to admin > config > system > Google Analytics reports. Follow the directions on the configuration page to connect to the Google Analytics API. Make sure you connect with the Google Analytics property (tracking ID) that you set up for working with LevelTen Intelligence.

LevelTen Intelligence tracking data will follow the parameters set in Drupal's standard Google Analytics module. If you have not previously configured the Google Analytics module, do that next by going to admin > config > system > Google Analytics.

Verify install

Everything should now be set up. If configured properly, your page view analytics should be tracking. To verify this is working, go to your Google Analytics account and view the Reports for the property associated with LevelTen Intelligence. Under Real-Time in the Standard Reports menu, select Content. Now click through a few pages on your website and you should see page views registering in the real time report.

To verify your Google Analytics API connection is set up correctly, go to admin > reports > LevelTen Intelligence > dashboard in your Drupal site. You should see counts under the Traffic section. Note that most of the other reports are set to report from the previous day back, so they will not start showing data until a day after installation.


The LevelTen Intelligence tracking follows the tracking rules set up in Drupal's standard Google Analytics module. These may be configured to not track authenticated users and designated pages. When testing the system, make sure page views are not failing to register simply because the Google Analytics module is not set to track your page views.

When configured properly, your website should have two tracking IDs: one for the standard Google Analytics module and one for LevelTen Intelligence. Verify that each module is set to use a unique tracking code.

JavaScript errors may interfere with tracking. Use Firebug or another browser developer tool to check if there are any JavaScript errors on your site. If so, fix them and then retest.

E-Commerce Sites and PCI Compliance

I'm a pretty paranoid person in general, so I'm hypersensitive to topics related to risk and prevention. I always wear my seatbelt. I tear up my unused checks. I cut the cords on our window blinds to prevent choking. Being in the web design field, one of my greatest fears is messing up a client's site in a way that causes loss of security and/or loss of money. 

So when I recently listened to a podcast on financial security and the fines for non-compliance, I was all ears. The subject of the podcast was PCI compliance. PCI (Payment Card Industry) compliance affects anyone who processes credit card transactions on their website (even if you are using PayPal... so keep reading).

When consumers make a purchase through a website using a credit card, they assume that their data is protected.  

We see in the news that hacks happen and credit card numbers get stolen, but we expect companies to make every effort to protect our information. Without PCI compliance, credit card information is vulnerable. What happens if consumer data is stolen and PCI compliance is not up to par? There can be thousands of dollars in fines, future audit requirements to confirm PCI compliance ($$expensive$$), and loss of brand trust for the company (not to mention the web company that built the site).

When creating an e-commerce site, who should be taking the initial steps to protect consumer data? Is it the responsibility of the web design company constructing the site? Or the responsibility of the company to whom the site belongs? The answer is both. The worst case scenario is neither.

Why a Company Should Know PCI Compliance:

Essentially, if anything is missed or found to be negligent, you open yourself to liability that results in a lot of fines. Even if a breach is the result of a poorly constructed site, it's the company to whom the site belongs that is liable for those fines.

If you use a third-party service to process your card transactions (like PayPal or Square), this can take care of a lot of your compliance needs, but you need to make sure that the service is PCI compliant as well. You also need to make sure that your site is PCI compliant, even if it's not technically e-commerce: just having a link to PayPal makes you subject to PCI compliance, so know what you are doing. Check out the PCI SAQs (Self-Assessment Questionnaires) to find out specifically what compliance areas you are responsible for.

Why a Web Firm Should Know PCI Compliance: 

When a client comes to a web firm, they often assume that the firm knows every aspect of their project and the security risks associated with it. They may not even mention (or know) that they need a site that is PCI compliant. Make sure you are asking the questions. While Drupal has a ton of security features that make it a great option for e-commerce, using Drupal alone does not guarantee PCI compliance.

PCI compliance is not just about technical security but also about how information is processed. If a company is not familiar with PCI compliance, it may create a procedure that is inherently vulnerable, regardless of the great security features of its site. If the site is hacked, the firm that built it may suffer reputation damage (regardless of whether or not it was the reason for the breach). Even hosting options (like the cloud) can fall outside of PCI guidelines.

PCI compliance is not a sexy topic for anyone, so finding an expert in the Drupal community is not easy. Rick Manelius is the guy to listen to. A great resource Rick created specifically to help Drupal developers working with e-commerce sites is his Drupal PCI Compliance White Paper. A very readable resource, it covers the basics of PCI compliance that companies and developers need to know to protect themselves from liability. Also check out his post on Ubercart and Drupal Commerce; he has a great section entitled Myths About PCI Compliance within Drupal.

In short, ignoring PCI compliance is a lot like ignoring your seatbelt. You may never get into a crash, but if you do, it is sure to mess you up.


Drupal PCI Compliance White Paper

Download a Quick Reference Guide from PCI Security Standards

10 Tips for eCommerce on Drupal

Photo credit goes to Mr. Tallahassee.

The Golden Toolbox - Drupal Hosting Platform Features

In my last post, I gave an unbiased comparison of some of the most popular Drupal hosting platforms from Acquia, Pantheon, OpenShift, and AWS, with a series of load tests and a look at performance and hosting features, to help you get the best bang for your buck with free Drupal hosting. In this article, I'd like to go a bit further and explore the tools and features each platform also offers for free, and why. Of course, this is going to require more criticism than the last article, where we focused mostly on analytics.
But who am I kidding, this is the internet. Let's take a look!

Amazon Web Services Free Tier

AWS gives you the ability to set up literally anything you want in terms of a hosting environment, up to certain limits on various features, beyond which you'll have to pay for more. When you sign up for the free tier, you get a year for free, the equivalent of 750 hours of computing per month. You can spin up an EC2 instance with whatever flavor of OS you'd like (including Windows and Red Hat) and start installing away to your heart's content. It's hard to compare AWS to the other platforms because it's an IaaS, not something made specifically for building with Drupal. However, since it is an IaaS, we have a lot of flexibility in how we set up our server, including adding an S3 bucket for storage, setting up an Elastic IP, and building a LAMP or LEMP stack however we see fit. This also means we have to set up our own development tools, such as installing git and drush, hosting a repo on GitHub or Bitbucket, and whatever other tools we need.
You can see from the screenshot that we have a GUI with pretty fine-grained control of our server, such as security groups and port blocking... but having that very fine-grained control also means we have to know what we're doing. In the case of AWS, you are your own system admin, which may or may not be a good thing.
Amazon Web Services Free Tier
*Editor's note: I attempted setting up Barracuda on an EC2 a couple of times and unfortunately could never get it working on a free account. And since I ran the script a couple of times, I actually got charged $0.30 for the couple hundred thousand calls it made.

OpenShift by RedHat

When I was testing the free Drupal hosting platforms, it was honestly the first time I had ever used OpenShift, and I have to admit I was really impressed with what they have done and how easy it was to set up. Since OpenShift isn't geared specifically towards Drupal, the install process wasn't as seamless as Acquia's or Pantheon's. You can either build it manually or run the prepackaged Drupal install to get up and running. With OpenShift, you have running instances called gears, and you have a gear for each application with multiple cartridges. A cartridge might be MySQL, phpMyAdmin, a load balancer, or even some kind of metrics. One of the coolest features is that you can set your gear to scale based on web traffic: OpenShift will set up a load balancer that dynamically uses your remaining gears to handle the traffic. A downside to OpenShift is that each gear is only 1 GB in size on the free plan.
OpenShift has an intuitive interface that allows you to add more applications and add/remove cartridges on those apps. In addition to the web GUI, OpenShift also provides a set of command line tools (RHC Client Tools) that lets you manage your app directly from the terminal. This is great for those of you who are more technically oriented and prefer working in the terminal. In addition to all these goodies, OpenShift also gives you the ability to host your app under a custom domain name, which means... your free dev site just turned into a free production site. So technically, on OpenShift you can host your Drupal site for free. If you're curious, just see how easy it is to deploy your Drupal site in a couple of minutes.
OpenShift by RedHat

Acquia Free Cloud

At LevelTen, we have long been consumers of Acquia's platform, as we manage multiple client sites with them. After hearing that they were going to start offering a free tier for developers, I knew I had to test it out. For those of you who haven't worked with Acquia before, the administration GUI you get on Free Cloud is the same one you'll see on paid Acquia accounts. Acquia Cloud Free has support for Git, a very intuitive GUI, SSH access, extended capabilities with Cloud API and Cloud Hooks, and an automated developer workflow. The workflow is inherently built to enable developers to follow continuous integration best practices with designated dev and stage environments. One can simply drag and drop to deploy code and content between environments. Under a free development account, you get 500 MB of disk space, 2 PHP processes and Drush integration.
Acquia Cloud Free comes equipped with additional free developer tools such as Apache Solr-powered search, a performance monitoring tool in Acquia Insight, and a content moderation tool in Mollom. If you've never used Acquia Insight, it performs automated testing by scanning the site every 30 minutes and assigns a score based on where the site stands on performance, security and best practices. It even performs a thorough code analysis of the site, alerting users to updates to their site modules and even providing a diff of a module if the module has been forked. Apart from the developer tools, the free tier also allows access to the Acquia library as a learning/knowledge resource.

Acquia Free Dev Cloud

Acquia Insight

Acquia distributions available


Pantheon Free Sandbox

In San Francisco, a new and different approach to Drupal hosting has been brewed up by Pantheon. Pantheon is built on a set of distributed network services: Varnish for edge caching, Valhalla for file storage, Redis for object caching, Styx for page request routing, Solr for search, and DROPs for running the Drupal application itself. They've designed all of these to act in concert out of the box, which gives us a development site.
Beyond their innovative hosting platform, Pantheon provides a lot of tools and features for developers building a site. Pantheon offers dev and test environments, Git and SFTP support (make commits through the UI), error reporting, a great-looking admin interface that is very snappy and responsive, a multitude of Drupal distributions to start from (CiviCRM, Commerce Kickstart, Pushtape, and more), and other standard hosting features, but I'd like to point out a couple that I think are very convenient.
You can create a backup of your site very quickly in three different categories: code, database, and files. This makes it convenient to restore individual pieces of your site, or move the site to another host if need be. Don't want to create a whole backup? No problem, just export the database or the files individually. Another convenient feature is easily locking your dev site from the public. Generally we would install something like Shield, but this functionality is built right in. SSH keys are a breeze, as they're tied to your user account rather than to projects: as long as you have an SSH key on your account, you can join any project and get to work. Pantheon doesn't give developers SSH access like Acquia does, but they have been working on a suite of Drush-based command line tools to help manage your site. Overall, I think Pantheon has a very clean, fast, and intuitive administrative interface that any developer would enjoy.

Pantheon Dashboard

Pantheon Backups

Pantheon Password Protect

Distributions available on Pantheon



Of course, every developer is going to have their own preferences. Some want fine-grained control of every aspect of their system, which is where Amazon Web Services or Rackspace Cloud comes in. Others want an optimized Drupal platform that has all the tools they would ever need without having to be a system administrator, which of course would be Acquia or Pantheon. The rest might want the flexibility of a hybrid system like OpenShift: a GUI to pick and choose the parts of the system, while still keeping a bit of control over other tools, features, and code bases.

Subdomain Multisite Part 2

This is a continuation of exploring Drupal subdomain multisites. Part 1 is here.


Sharing user accounts and roles

Getting user accounts and roles to share between subdomains is very important. We want users to experience a single sign-on (SSO) solution and for the site to appear to be one site. Luckily, we have some great tools to support this, the same ones used on drupal.org!

First we'll need the following modules:

  • bakery
  • role_export

Bakery manages the user accounts and sign-ins for all the subsites on drupal.org, so it is pretty well tested.

To set up bakery, we put its settings in the global.settings.php file we created in part 1.

// Set bakery settings.
$parts = explode('.', $_SERVER['HTTP_HOST']);
$subdomain = '';
if (count($parts) < 2) { // This is for drush.
  drupal_set_message('Could not determine domain name. Add --uri=http://mydomain.local to drush to specify your domain name.');
}
if (count($parts) > 2) {
  $subdomain = array_shift($parts);
}
$base = implode('.', $parts);

if ($subdomain == 'accounts') {
  $conf['bakery_is_master'] = TRUE;
  $conf['bakery_slaves'] = array(
    'http://kb.' . $base . '/',
    'http://forum.' . $base . '/',
    'http://' . $base . '/',
  );
}
else {
  $conf['bakery_is_master'] = FALSE;
}
$cookie_domain = '.' . $base; // Set cookie domain for logins.

// Common bakery configuration.
$conf['bakery_master'] = 'http://accounts.' . $base . '/';
$conf['bakery_key'] = 'PRIVATEKEYHERE';
$conf['bakery_domain'] = '.' . $base;
$conf['bakery_supported_fields'] = array(
  'name' => 'name',
  'mail' => 'mail',
  'status' => 0,
  'picture' => 0,
  'language' => 0,
  'signature' => 0,
);
You can tweak this as needed, but it will set up Bakery for sandboxes, dev, test and production without any changes at all. So you can have forum.client.localhost and kb.client.localhost and they will still use Bakery properly.

Next we'll need to set up the feature which contains the user fields and roles. Role export assigns a role ID to each role and allows us to export each of the roles to a global feature. We define all the roles for the whole site and export them to a global feature placed in sites/all/modules; we typically name this client_user. We also put any user fields in this feature, for example field_firstname and field_lastname. Then we make sure this feature is enabled on all the subsites, so that all the sites have roles with the same IDs and the same fields on the user object.

The reason this is important is that it allows us to transfer fields and roles across all the sites with minimal effort. To do this, we need to patch Bakery with a patch from the Bakery issue queue.

This patch creates two new hooks for Bakery: hook_bakery_transmit and hook_bakery_receive. These send and receive additional information between the subdomains. Here are the hooks we typically use; we put them in the global feature we created with all the user and role information.

/**
 * Implements hook_bakery_transmit().
 */
function featurename_bakery_transmit($edit, $account, $category) {
  $info = array();
  // First populate with account values.
  foreach ($account as $key => $value) {
    if (substr($key, 0, 6) == 'field_') {
      $info[$key] = $value;
    }
  }
  // Then override with edit values.
  foreach ($edit as $key => $value) {
    if (substr($key, 0, 6) == 'field_') {
      $info[$key] = $value;
    }
  }
  // Save off roles as well. Because of role_export all sites will have the same role ids.
  if (isset($edit['roles'])) {
    $info['roles'] = array_filter($edit['roles']);
  }
  else {
    $info['roles'] = array_filter($account->roles);
  }
  return array('featurename' => $info);
}

/**
 * Implements hook_bakery_receive().
 */
function featurename_bakery_receive($account, $edit) {
  if (isset($edit['data']['featurename'])) {
    return user_save($account, $edit['data']['featurename']);
  }
}
That pretty much does it. Just make sure that Bakery and your client_user feature are enabled on all of the subsites, and user accounts and sessions will be transferred to each of the subsites automatically. You will still need to set permissions for each role on each of the subsites, but since they have different modules their permissions will differ, so this makes sense.

I'll write up the next section soon.
