Learn SQL by Calculating Customer Lifetime Value Part 2: GROUP BY and JOIN

This is the second installment of our SQL tutorial blog series. In the first part, we set up the data source with SQLite and learned how to filter and sort data. This time, we will learn two other key concepts in SQL: GROUP BY and JOIN.

GROUP BY: SQL’s Pivot Table

The simplest way to describe GROUP BY is “SQL’s pivot table, except not as powerful.” To explain what I mean by this, let’s review the data from the previous installment.

sqlite> .tables
payments  users

Ah, yes. We had two tables, “users” and “payments.” The “payments” table stored each transaction (let’s say this is data from an e-commerce website), and the “users” table stored what day the user signed up and which campaign source they came from.

A natural question to ask here is, how much money did each of the 10 users (id=1~10) pay to the website?

If you were using Excel, this is as simple as creating a pivot table:

Here is the equivalent operation in SQL, using GROUP BY:
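As a runnable sketch of this query, here is a version using Python’s built-in sqlite3 module; the table contents are made-up sample data, not the article’s actual “payments” table:

```python
import sqlite3

# Made-up sample data -- not the article's actual "payments" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user_id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [(1, 200), (1, 300), (2, 150), (3, 100), (3, 250)])

# The pivot-table equivalent: total payment amount per user.
query = "SELECT user_id, SUM(amount) FROM payments GROUP BY user_id"
for user_id, total in conn.execute(query):
    print(user_id, total)
```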

As the name suggests, the GROUP BY operation groups the table’s rows into different groups based on the column name that follows the “GROUP BY” keyword. In the above query, we have “GROUP BY user_id” so we are grouping the “payments” table based on its “user_id” column.

When you do this, you might get multiple rows in one group. For example, let’s look at which rows belong to user_id = 1:
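As a concrete sketch (again with made-up data via Python’s sqlite3 module):

```python
import sqlite3

# Made-up sample data -- user 1 has two payments, user 2 has one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user_id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [(1, 200), (1, 300), (2, 150)])

# All rows that fall into the user_id = 1 group:
rows = conn.execute("SELECT * FROM payments WHERE user_id = 1").fetchall()
print(rows)  # user 1's two payment rows
```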

But here we are not interested in all the columns of each group; we are only interested in the “amount” column. More specifically, we are only interested in the sum of the “amount” column per user_id. And this is exactly what “SUM(amount)” does. So, if we were to break the query apart:

Needless to say, GROUP BY can be combined with WHERE (filtering) and ORDER BY (sorting), which are covered in Part 1. Syntactically, WHERE comes before GROUP BY, which in turn comes before ORDER BY.

For example, here is how you can calculate the total for user_id > 5, sorted by the total amount:
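A runnable sketch of that combination, again with made-up data in Python’s sqlite3 module (note the order: WHERE, then GROUP BY, then ORDER BY):

```python
import sqlite3

# Made-up sample data -- not the article's actual "payments" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user_id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [(4, 100), (6, 50), (6, 70), (7, 300), (8, 200)])

# WHERE filters rows first, GROUP BY aggregates them,
# and ORDER BY sorts the resulting totals.
query = """
SELECT user_id, SUM(amount)
FROM payments
WHERE user_id > 5
GROUP BY user_id
ORDER BY SUM(amount) DESC
"""
for user_id, total in conn.execute(query):
    print(user_id, total)
```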

But Where is the Customer Lifetime Value?

At this point, you might be wondering when we are going to discuss customer lifetime value (CLV). In fact, we have already calculated it in the previous section. For our simple model of an e-commerce website, we’ll consider CLV to be the sum of all the purchases a customer has made to date, which is precisely what “SELECT user_id, SUM(amount) FROM payments GROUP BY user_id” does!

So what now?

JOIN: Connecting Multiple Sources of Information

Remember that we began with two tables: “payments” and “users.” We just used the “payments” table to calculate CLV, and previously, we used the “users” table to learn how to use WHERE and ORDER BY.

Let’s also recall that the “users” table includes a “campaign” column indicating which campaign a given user responded to.

Let’s say you wish to determine which campaign (for example, organic/Facebook/Twitter) yields the highest CLV.

If you were using Excel, this is where VLOOKUP comes in. Namely, you VLOOKUP the “campaign” column from the “users” table into the CLV pivot table that we just computed.
Seeing is believing, so here is a screenshot of what it looks like in Excel:

This is all well and good (I am a big VLOOKUP user myself), but what if you have more than 10 users? What if, say, you have 100,000 users? Excel may not be able to handle that much data, and even if it can, the UI begins to lag. And if you have 10 million users (which happens with decently sized e-commerce websites), Excel is definitely not going to be sufficient.

This is where SQL’s JOIN comes in handy. SQL databases (e.g., MySQL, PostgreSQL, etc.) are far more scalable than Excel – even more so with proper indices – and can perform more complex computations in a more automated manner. (Note: Indexing is a fascinating and deep topic in databases, but it’s beyond the scope of this blog series. Just remember that grouping by indexed columns is much faster than grouping by unindexed columns.) Here is the same operation in SQL. Note that we are first computing the CLV table as before:

Wow, that’s a lot to unpack, so let me reformat the SQL:
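Reconstructed from the line-by-line walkthrough, the reformatted query would look roughly like the following runnable sketch (Python’s sqlite3 module; the table contents and the “signup_date” column name are assumptions):

```python
import sqlite3

# Made-up tables -- the column name "signup_date" is an assumption.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, signup_date TEXT, campaign TEXT);
CREATE TABLE payments (user_id INTEGER, amount INTEGER);
INSERT INTO users VALUES (1, '2014-01-01', 'Twitter'), (2, '2014-01-02', 'Facebook');
INSERT INTO payments VALUES (1, 300), (1, 200), (2, 150);
""")

# The inner SELECT computes per-user CLV; the JOIN then attaches each
# user's campaign by matching cltv.user_id against users.id.
query = """
SELECT users.campaign, cltv.user_id, cltv.cltv
FROM (SELECT user_id, SUM(amount) AS cltv
      FROM payments
      GROUP BY user_id) cltv
JOIN users
ON cltv.user_id = users.id
"""
for campaign, user_id, total in conn.execute(query):
    print(campaign, user_id, total)
```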

The first line chooses columns from the joined table. However, we don’t know how JOINs work yet, so let’s look at the rest of the lines first.

The second line is simply the original CLV calculation. Note that the “SUM(amount)” is aliased as “cltv” as is the resulting “intermediate” table.

The third and fourth lines show that we are joining the “users” table onto the “cltv” table that we just aliased. But how do you join two tables?

This is answered on the last line: it matches the rows of the “cltv” table with the rows of the “users” table so that the “cltv” table’s “user_id” field equals the “users” table’s “id” field. There is no strict equivalent in VLOOKUP, because VLOOKUP forces you to look up by the leftmost column. In SQL, you can JOIN by any columns you like!

The Other CLV: Campaign Lifetime Value

Now that we have a single view of the user IDs, campaign sources and CLVs, we can calculate which campaign has the highest return thus far. To do so, we simply run one more GROUP BY, grouping per-user CLV by campaign:
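A runnable sketch of that final step (Python’s sqlite3 module; the data is made up, chosen so that, as in the article, Twitter comes out on top and organic is comparable to Facebook; the ORDER BY is added for readability):

```python
import sqlite3

# Made-up tables; amounts chosen to mirror the article's outcome.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, signup_date TEXT, campaign TEXT);
CREATE TABLE payments (user_id INTEGER, amount INTEGER);
INSERT INTO users VALUES
  (1, '2014-01-01', 'Twitter'),
  (2, '2014-01-02', 'Facebook'),
  (3, '2014-01-03', 'organic');
INSERT INTO payments VALUES (1, 300), (1, 200), (2, 200), (3, 100), (3, 80);
""")

# Group the per-user CLVs by campaign to get campaign lifetime value.
query = """
SELECT users.campaign, SUM(cltv.cltv)
FROM (SELECT user_id, SUM(amount) AS cltv
      FROM payments
      GROUP BY user_id) cltv
JOIN users ON cltv.user_id = users.id
GROUP BY users.campaign
ORDER BY SUM(cltv.cltv) DESC
"""
for campaign, total in conn.execute(query):
    print(campaign, total)
```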

From this we discover that Twitter is the winner! It looks like there is a healthy amount of organic traffic, comparable to Facebook. Note that this data only accounts for how much revenue is generated for each campaign. Different campaigns have different costs, so if you wish to calculate the ROI of different campaigns (and I hope you do!), you need the data for how much money is spent on each campaign.


  • GROUP BY in SQL is like Pivot Table in Excel, except it scales better with larger datasets (especially with proper indices).
  • JOIN in SQL is like VLOOKUP in Excel, except JOIN is more flexible.
  • You can query against an output of another query to ask more complex questions against your data.

If you want to process massive datasets using SQL, check out Treasure Data, and if you’re interested in use-case specific SQL query templates, check out our library!

Contact me on Twitter @kiyototamura or leave a comment if you have any questions about this.

Kiyoto Tamura
Kiyoto began his career in quantitative finance before making a transition into the startup world. A math nerd turned software engineer turned developer marketer, he enjoys postmodern literature, statistics, and a good cup of coffee.