Discovery of a MySQL bug

Recently, one of our MySQL instances crashed. We responded to the alert, but by the time we logged into the server, the MySQL instance had already restarted and recovered. So, we started to investigate the cause of the crash. Our MySQL data volume is NFS-mounted on a NetApp filer in order to allow for reliable crash recovery and data replication. Our monitoring system showed a spike of near-gigabit-line-rate output from the MySQL server to the NetApp filer, and it quickly became apparent that MySQL crashed because it ran out of disk space. However, the MySQL disk usage did not change. Instead, the space was used by the NetApp snapshots, each on the order of 20GB. Snapshots, much like source control, keep track of changes made to files, allowing you to view past versions. The NetApp is configured to automatically create snapshots and garbage collect old ones. However, as the rate of change grows, the size of the snapshots grows as well. The writing of gigabytes of data caused enormous snapshots to be created. This meant that whatever wrote those massive files in a short time also deleted them in that same short time.

Additional metrics in our monitoring system pointed us towards MySQL's internal temporary tables. MySQL uses these internal temporary tables to satisfy certain queries. When an internal temporary table becomes too big for memory, MySQL flushes it to disk. Once the query finishes, the internal temporary table is deleted. So, in this case, MySQL was processing a query and needed a 20GB internal temporary table!? Naturally, we needed to find out which query was causing this and on which data set (each customer is stored in a separate database, following the multi-tenancy principle).
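
For reference, MySQL exposes counters and thresholds around these internal temporary tables. A quick check along these lines (the variable names are standard MySQL; the interpretation in the comments is ours) shows how often queries spill to disk and at what size the spill happens:

SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';   -- in-memory vs. on-disk internal temporary tables created so far
SHOW VARIABLES LIKE 'tmp_table_size';           -- in-memory size limit for internal temporary tables
SHOW VARIABLES LIKE 'max_heap_table_size';      -- the effective in-memory limit is the smaller of the two values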

Based on mytop and the MySQL slow query log, we were able to single out one suspicious query. It was a query executed by Solr's data import handler during a delta reindex. The application knows which models and which attributes are stored in Solr. Whenever any of those models and attributes change, the application sends a request to Solr asking it to delta reindex. We have an extension in Solr which waits 60 seconds before actually delta reindexing, which effectively coalesces multiple delta reindex requests into a single delta reindex and lightens the load on MySQL.
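
As an aside, the slow query log we relied on here can be switched on at runtime; a minimal sketch (the threshold below is illustrative, not necessarily the value we use):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;              -- log statements that take longer than 1 second
SHOW VARIABLES LIKE 'slow_query_log_file';   -- where the log lines end up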

It turned out that this suspicious query was changed at the beginning of December but, due to the Christmas / New Year holidays, had only been in production for a couple of weeks. The change was rather unusual: it added a join from table A to table X to table B to a query that was already joining from table A to table Y to table B. This was a deliberate change allowing us to migrate from table X to table Y over a few development cycles, and the fact that we now had a double join (and a Cartesian product) from table A to table B was discussed at development time and deemed okay. We assumed fewer than 50 rows resulting from the join of table A to table B, so in the worst case we'd end up with 2,500 (50 x 50) rows because of the double join.
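
Schematically, the resulting query shape is something like the sketch below, where the table and column names are hypothetical stand-ins rather than our actual schema. If n rows link table A to table B, joining through both X and Y yields on the order of n x n rows:

SELECT a.id, b_via_x.id, b_via_y.id
FROM a
JOIN x ON x.a_id = a.id
JOIN b AS b_via_x ON b_via_x.id = x.b_id
JOIN y ON y.a_id = a.id
JOIN b AS b_via_y ON b_via_y.id = y.b_id;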

We found the internal temporary table that MySQL flushed to disk in one of the NetApp's snapshots. We then matched the data in that file to a particular data set. In this data set, the join from table A to table B had 500 rows, which would produce 250,000 (500 x 500) rows because of the double join. Clearly 250,000 rows is significantly larger than the expected 2,500 rows, but still nothing that MySQL should not be able to handle.

We manually explained and executed this query and we saw no issues whatsoever. Hmmm??? Perhaps this query was not the cause after all?

Nevertheless, we proceeded to copy the data set to our staging servers to see whether we could reproduce the issue there. Based on the suspicious delta reindex query, we knew what to edit in the UI in order to trigger that query. We made the edit and waited. The application asked Solr to delta reindex. Solr waited 60 seconds and started to delta reindex. After a few seconds, Solr started to execute our suspicious query, and after another 30 seconds, MySQL dumped a 20GB internal temporary table to disk. Bingo!

So again we tried to manually explain and execute that query and again we saw no issues whatsoever. Wat!?

We speculated there must be some difference between Solr running the query and us running the query manually. We enabled the MySQL full query log and repeated the edit in the UI. This is what Solr executed:

711723 Query /* mysql-connector-java-5.1.21 ( Revision: ${bzr.revision-id} ) */SELECT @@session.auto_increment_increment
711723 Query SHOW COLLATION
711723 Query SET NAMES utf8mb4
711723 Query SET character_set_results = NULL
711723 Query SET autocommit=1
711723 Query SET sql_mode='STRICT_TRANS_TABLES'
711723 Query SET autocommit=0
711723 Query <delta reindex query>
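
For completeness, the "full query log" here is MySQL's general query log, which can be switched on at runtime roughly like this (the file path is illustrative). It records every statement, so we only kept it on long enough to reproduce the problem:

SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- reproduce the problem, then:
SET GLOBAL general_log = 'OFF';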

We quickly focused on SET NAMES utf8mb4. Running the delta reindex query from the MySQL client by itself did not cause the problem, but running it after SET NAMES utf8mb4 produced the 20GB file. In fact, we tried other encodings, including utf8, and all of them seemed to cause the 20GB file. The only one that seemed to work was latin1.
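
In other words, the minimal manual reproduction from the MySQL client boils down to the following, where <delta reindex query> stands for the actual statement, just as in the log above:

-- fine with the client's default session encoding (latin1 in our setup):
<delta reindex query>

-- triggers the 20GB on-disk internal temporary table:
SET NAMES utf8mb4;
<delta reindex query>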

This led us to realize that by default our MySQL client uses latin1, whereas our application and Solr use utf8 and utf8mb4, respectively. So, in theory, if Solr used the latin1 encoding it would avoid the 20GB file. We quickly reconfigured Solr to use latin1 and, sure enough, it worked.
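
The encoding a given connection actually ends up with can be checked from inside the session itself, which is how the difference between our manual client sessions and Solr's connections shows up:

SHOW SESSION VARIABLES LIKE 'character_set%';
-- SET NAMES sets character_set_client, character_set_connection and character_set_results in one go;
-- our command-line client defaulted to latin1 for all three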

Our attention now turned to understanding why that particular delta reindex query, on that particular data set, with the utf8mb4 encoding, was causing MySQL to write a 20GB file. It turns out MySQL has a bug that is triggered when a GROUP_CONCAT(DISTINCT ...) is too big to fit in memory and requires MySQL to flush it to disk. In our case, the bug is triggered because utf8 / utf8mb4 use multiple bytes per character, whereas latin1 uses one byte per character. You can read more details about this bug here.
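
For readers unfamiliar with the construct, a GROUP_CONCAT(DISTINCT ...) aggregation has roughly this shape (the table and column names below are hypothetical, purely to illustrate the kind of expression involved):

SELECT a.id,
       GROUP_CONCAT(DISTINCT t.tag SEPARATOR ' ') AS tags
FROM a
JOIN t ON t.a_id = a.id
GROUP BY a.id;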