Tracking SQL Slowness in MySQL : OTRS

Well, I am new to MySQL, but I have had my fair share of experience with other databases like Oracle, PostgreSQL and DB2. What I am going to share with you here was a bit new to me compared to those other RDBMSs.

I work for a company that builds software solutions for banks, and we have a ticketing system for customers to log issues. We use an open-source product called OTRS for this, and it uses MySQL as its database.

As reported to me, when a user performs a particular function, the CPU goes to 100% and the entire server freezes for a good 10 minutes.

I used a simple `top` command on the Linux machine to identify the process and figured out that it was `mysqld` (the process name for the MySQL database server).

Then I had to identify the SQL that was running on the DB at that particular time.

Steps: log in as the mysql user from the command line and execute `SHOW PROCESSLIST`.
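For a busier server, the same information can be queried and filtered; a minimal sketch of what I mean (the `information_schema.processlist` table is a standard MySQL feature, the 60-second threshold is just an example):

```sql
-- Show what every connection is currently executing (Time is in seconds)
SHOW FULL PROCESSLIST;

-- Or filter down to statements that have been running for over a minute
SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 60
ORDER BY time DESC;
```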

As seen at the bottom of the image above, the SQL responsible was the insert-on-select given below:

insert into sla_table (tn) select tn from ticket where not exists (select tn from sla_table where sla_table.tn = ticket.tn);

(My test tables have 'new' prefixed to their names.)

By the looks of it, the SQL does look a bit clumsy, but when you think about the functionality of OTRS it makes sense. Therefore, fine-tuning or changing the SQL was my last option.

I ran the SQL manually on a test instance and it took almost 2 minutes to complete.

Initially I suspected an index-level issue (the tn columns were already indexed), then memory-level, buffer-level and other configuration-level issues; I even changed all the tables to InnoDB. Believe me, I tried every possible thing for 2 days, but with minimal impact. All I managed was an improvement of about 10 seconds.

Then, as the final option, I went down the path of optimizing the SQL. Like other RDBMS products, MySQL has an EXPLAIN plan for SQL statements. I meddled with this for a few hours. (Note that I ran EXPLAIN only on the SELECT part; by this time I had figured out that the SELECT query is the one which takes the time.)
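For reference, the EXPLAIN run would look roughly like this — a sketch using the `new`-prefixed test tables mentioned later in this post:

```sql
-- Explain only the SELECT part of the insert-on-select
EXPLAIN
SELECT tn FROM newticket
WHERE NOT EXISTS (SELECT tn FROM newsla_table
                  WHERE newsla_table.tn = newticket.tn);
```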

Luckily, MySQL has good documentation on this, specifically on how to read the EXPLAIN output. After going through it I figured out the following:

  1. TYPE: this column had the value `index`, which means a full index scan is happening: the entire index is read, and data rows are looked up in index order.
  2. POSSIBLE_KEYS: NULL means there are no relevant indexes. In this case, you may be able to improve the performance of the query by examining the WHERE clause to check whether it refers to a column or columns that would be suitable for indexing.
  3. REF: this was also NULL. The ref column shows which columns or constants are compared to the index named in the key column to select rows from the table.

In summary, I figured out that even though indexes already existed, they were not being utilized. So I dropped the indexes and recreated them. But still the same.

I tried to read more about the SQL's behavior and understand how it would behave during execution. I figured out that the SQL actually running on the server had a slight difference from the one I was passing via the client. (The rewritten form is printed in the MySQL log and also appears as a warning message in the EXPLAIN plan output.)

This new SQL looks like below,

select `otrs`.`ticket`.`tn` AS `tn` from `otrs`.`ticket` where (not(exists(select `otrs`.`sla_table`.`tn` from `otrs`.`sla_table` where (convert(`otrs`.`sla_table`.`tn` using utf8) = `otrs`.`ticket`.`tn`))));

When you compare it with the original, there is a difference in the inner WHERE clause: it tries to convert the entire indexed column into the character set utf8. I did not get this instantly, but after a few minutes of reading I figured out that this could be the reason the indexes were being ignored in the EXPLAIN plan. So I thought of taking a deeper look at the two table structures:
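Before digging through the full DDL, the charsets can also be compared directly from the data dictionary; a sketch, assuming the schema name `otrs` as seen in the rewritten query above:

```sql
-- Compare the default collation (and hence charset) of the two tables
SELECT table_name, table_collation
FROM information_schema.tables
WHERE table_schema = 'otrs'
  AND table_name IN ('ticket', 'sla_table');

-- Column-level check for the join column itself
SELECT table_name, column_name, character_set_name, collation_name
FROM information_schema.columns
WHERE table_schema = 'otrs'
  AND table_name IN ('ticket', 'sla_table')
  AND column_name = 'tn';
```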

CREATE TABLE `sla_table` (
 `tn` varchar(50) NOT NULL,
 `sla_state` varchar(45) DEFAULT '2',
 `exception` smallint(6) DEFAULT '0',
 `queue_id` smallint(6) DEFAULT NULL,
 `1st_esc` smallint(6) DEFAULT '0',
 `2nd_esc` smallint(6) DEFAULT '0',
 `breached_time` date DEFAULT NULL,
 `Remarks` mediumtext,
 `updated_by` int(11) DEFAULT NULL,
 `update_time` datetime DEFAULT NULL,
 PRIMARY KEY (`tn`),
 KEY `FK_sla_table_user` (`updated_by`)
) ………………………………….. engine/charset clause skipped ………………..

CREATE TABLE `ticket` (
 `id` bigint(20) NOT NULL AUTO_INCREMENT,
 `tn` varchar(40) DEFAULT NULL,
 `title` varchar(255) DEFAULT NULL,
………………………………….. middle part skipped ………………..
 KEY `ticket_user_id` (`user_id`),
 KEY `FK_ticket_service_id_id` (`service_id`),
 KEY `FK_ticket_sla_id_id` (`sla_id`),
 KEY `FK_ticket_create_by_id` (`create_by`),
 KEY `FK_ticket_change_by_id` (`change_by`),
 KEY `ticket_queue_view` (`ticket_state_id`,`ticket_lock_id`),
 KEY `tn` (`tn`)
) ………………………………….. engine/charset clause skipped ………………..

Bam.. the CHARSETs were different.

I changed the charset of the sla_table with the command below, and then ran the EXPLAIN plan again:

ALTER TABLE sla_table CONVERT TO CHARACTER SET utf8;

:) It had the impact I wanted.

See, the TYPE column has changed to eq_ref, which means the SQL now has healthy index usage. The POSSIBLE_KEYS column shows that the 'tn' index is a candidate, the REF column shows that 'tn' is referenced correctly, and most importantly, ROWS (the number of rows MySQL believes it must examine to execute the query) now shows 1.

Finally, let's run the original insert statement and find out how much this change has affected performance.

Amazing, huh? Now it takes only 0.05 seconds.

Well worth the effort, and I am satisfied. Our support team head had been planning to upgrade the hardware, thinking it was a HW/OS issue. Seems that's not required at all. :)

Note that ticket/sla_table and newticket/newsla_table are identical in structure and row counts; I created the latter pair for testing the issue.

On an extra note, I learned that there is a performance gain when the columns compared in the WHERE clause have the same length. E.g., the tn column in the ticket table has length 40 while the sla_table's tn column has length 50; changing both to 40 seems to give a gain, according to the MySQL docs.
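A sketch of that length change, using the column definitions from the DDL above. Note this is just illustrative: MODIFY rewrites the table, and shortening a varchar will truncate any values longer than the new length, so verify on a copy first.

```sql
-- Align sla_table.tn (varchar(50)) with ticket.tn (varchar(40));
-- tn is the primary key of sla_table, so NOT NULL must be kept
ALTER TABLE sla_table MODIFY tn varchar(40) NOT NULL;
```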