Cookpad Case Study

Cookpad (TYO:2193) is Japan’s No.1 recipe website. With more than fifteen million users and one million online recipes, Cookpad dominates the market for recipe search and sharing. In fact, according to Cookpad’s internal research, 50% of Japanese women in their 20s and 30s use Cookpad as their primary source of recipes and cooking advice.


So, how does Cookpad serve up the right recipes at the right time? The short answer is “because they collect all sorts of data about their users.” The long answer is the rest of this article.

Sharded MySQL for Analytics

Cookpad’s web application is written in Ruby on Rails, the popular open-source framework. They run more than one hundred Rails application instances to serve 20 million monthly unique users.


Rails logs to the local file system by default, and Cookpad used to consolidate these logs into a cluster of MySQL servers every night via rsync, a standard Unix utility.
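A nightly consolidation job of this kind is typically just a cron entry on each application server. The following is an illustrative sketch only; the schedule, paths, and hostname are hypothetical, not Cookpad’s actual configuration:

```shell
# Illustrative crontab entry: every night at 02:00, push the day's Rails
# logs from this application server to a log-consolidation host.
# Paths and hostnames are hypothetical.
0 2 * * * rsync -az --remove-source-files /var/www/app/log/ loghost:/data/rails-logs/myhost/
```

With a hundred-plus application servers, each runs the same job, and the MySQL cluster picks up the consolidated files once they arrive.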

The cluster of MySQL servers then computed various key metrics, such as page views per recipe, and the computed metrics were copied over to another MySQL instance daily.
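The kind of rollup involved can be sketched with a toy example. The table and column names below are hypothetical, and SQLite stands in for MySQL so the snippet is self-contained:

```python
import sqlite3

# In-memory stand-in for the MySQL metrics cluster; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_log (recipe_id INTEGER, viewed_at TEXT)")
conn.executemany(
    "INSERT INTO access_log VALUES (?, ?)",
    [(1, "2013-06-01"), (1, "2013-06-01"), (2, "2013-06-01")],
)

# Nightly rollup: page views per recipe, the sort of key metric
# that was copied to the reporting MySQL instance each day.
rows = conn.execute(
    "SELECT recipe_id, COUNT(*) AS pv FROM access_log "
    "GROUP BY recipe_id ORDER BY recipe_id"
).fetchall()
print(rows)  # → [(1, 2), (2, 1)]
```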


This last MySQL instance was responsible for answering questions about internal key metrics. It served as a data mart for Cookpad’s in-house dashboard and the Google Spreadsheets on which various non-engineering teams relied every day for decision-making.

The Problems

Cookpad’s original architecture was based on sharded MySQL servers and had severe scaling and maintainability limitations.

  • Limited Scalability: MySQL is great software. It’s been deployed extensively for more than a decade, and its support community is active and mature. However, MySQL was not designed to support petabytes of data. To make their MySQL servers more scalable, Cookpad, like most other large-scale MySQL users, aggressively sharded their databases to alleviate the load on each instance. But this is not a robust solution, and it usually results in systems that are brittle and hard to scale.
  • Rigid Schema: Relational databases, including MySQL, require a well-defined schema upfront. While a rigid schema can help you organize and document data, it is not well-suited to a fast-moving, data-driven company like Cookpad, where the underlying data can change weekly if not daily.
  • Up to 24 Hours of Delay: Because the first step of the ETL process (copying log files from Rails server to MySQL servers) was run once a day, data refreshes could take up to 24 hours. Because of this delay, they couldn’t evaluate the effectiveness of new features or the popularity of new content in a timely fashion. The long feedback loop meant slower product development.
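Sharding of the sort described in the first point typically routes each row to a server by a hash of its key. A toy sketch, with a hypothetical shard count and DSN naming scheme, shows why this gets brittle:

```python
# Toy shard router: map a user id to one of N MySQL shards by hash.
NUM_SHARDS = 8

def shard_for(user_id: int) -> str:
    # Hypothetical DSN naming scheme. Real deployments also need
    # resharding, rebalancing, and failover logic on top of this,
    # which is what makes hand-rolled sharding hard to maintain.
    return f"mysql-shard-{user_id % NUM_SHARDS}.internal"

print(shard_for(1234))  # → mysql-shard-2.internal
```

Every query that spans shards (a site-wide page-view count, say) has to be fanned out and merged in application code, and changing `NUM_SHARDS` means migrating data.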

The Solution: Treasure Data
By introducing Treasure Data [1], Cookpad transformed their infrastructure in two fundamental ways.


Treasure Data’s Cloud Data Warehouse replaced the cluster of MySQL servers.
Now, they run scheduled jobs on Treasure Data that update the MySQL aggregation server that powers their in-house dashboard.
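With the td command-line tool, a scheduled rollup that writes its result back into a MySQL aggregation server looks roughly like the following sketch. The schedule, database, table, and connection details are all hypothetical:

```shell
# Run a page-views-per-recipe rollup every hour and write the result
# into the MySQL server behind the dashboard. All names are illustrative.
td sched:create hourly_pv "0 * * * *" \
  -d rails_logs \
  --result 'mysql://user:password@aggdb.example.com/metrics/recipe_pv' \
  'SELECT recipe_id, COUNT(1) AS pv FROM access_log GROUP BY recipe_id'
```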

Instead of relying on the default file-based logging, Cookpad installed td-agent on each Rails server to forward logs to Treasure Data automatically.
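A minimal td-agent configuration for this setup might look like the following. The log path, tag, and API key are placeholders, not Cookpad’s actual settings:

```
# /etc/td-agent/td-agent.conf (illustrative)
<source>
  type tail                        # follow the Rails log as it is written
  path /var/www/app/log/access.log
  format json
  tag td.rails_logs.access_log     # maps to database.table on Treasure Data
</source>

<match td.*.*>
  type tdlog                       # output plugin that ships to Treasure Data
  apikey YOUR_API_KEY
  auto_create_table
  use_ssl true
  flush_interval 5m                # upload window; lower it for fresher data
</match>
```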

Those changes essentially solved all three of the problems Cookpad was facing. Let’s look at them in detail.

  • Scalability Is No Longer an Issue: unlike MySQL, Treasure Data was designed from the ground up to scale. Adding more storage or CPU is only a few keystrokes away.
  • Flexible Schema: Treasure Data’s proprietary columnar database implements a flexible schema model that lets you change the schema at any time. This means Cookpad no longer has to worry about changes in the underlying data model breaking their ETL process.
  • Updates Every 5 Minutes, Not Every 24 Hours: td-agent is a versatile, robust logger that can handle up to 17,000 messages per second per instance. It is also far closer to real time than nightly batch uploads: by default, td-agent buffers data locally for reliability and transfers it to Treasure Data every 5 minutes, so the data is never more than 5 minutes behind. The size of the buffer window is configurable, so you can bring it as close to real time as you need.

Cookpad now has a scalable, robust, high-performance data warehousing and analytics solution, and all of this was achieved in less than three weeks. You could say that Treasure Data is Cookpad’s recipe for future success.

You might have questions about uploading access logs to a third-party service like Treasure Data. Don’t fret: we worked closely with Cookpad’s data infrastructure team to ensure that identity-related information is anonymized or filtered out, keeping the pipeline compliant with data privacy guidelines.

Treasure Data
Treasure Data is the only enterprise Customer Data Platform (CDP) that harmonizes an organization’s data, insights, and engagement technology stacks to drive relevant, real-time customer experiences throughout the entire customer journey. Treasure Data helps brands give millions of customers and prospects the feeling that each is the one and only. With its ability to create true, unified views of each individual, Treasure Data CDP is central for enterprises who want to know who is ready to buy, plus when and how to drive them to convert. Flexible, tech-agnostic, and infinitely scalable, Treasure Data provides fast time to value even in the most complex environments.