Using memory instead of TMPDIR files (no replies)
Is there any way to eliminate the use of TMPDIR files (created during JOINs) by giving MySQL more memory? A co-worker maintains that we should increase the server's memory from 48 GB to 96 GB, because the I/O to the TMPDIR directory indicates that MySQL does not have enough memory. I have not been able to find anything in the manuals that supports this, but then, I am new to MySQL.
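A hedged starting point: disk activity in TMPDIR usually means implicit temporary tables are spilling to disk, which is governed by tmp_table_size and max_heap_table_size rather than by total server RAM (and any temporary table containing TEXT/BLOB columns goes to disk regardless of size). The statements below show how often that happens and how to experiment with larger in-memory limits; the 256M figure is purely illustrative.
-- How many implicit temporary tables were created, and how many of them went to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
-- Current in-memory limits (the smaller of the two applies)
SHOW GLOBAL VARIABLES WHERE Variable_name IN ('tmp_table_size', 'max_heap_table_size');
-- Example: raise both for new connections (illustrative value, not a recommendation)
SET GLOBAL tmp_table_size = 256 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;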
Using file_per_table, ibdata1 file still grew to 77GB (no replies)
With a master and slave running MySQL 5.5.28 on Linux, I noticed the ibdata1 file on the slave is 77GB, three times the size of the master's. Since we're using file_per_table, that seems surprising. The slave is used primarily for reporting, so I'm wondering what type of use could cause the ibdata1 file to get so big.
Any ideas?
Below are the innodb settings used on the slave.
innodb_additional_mem_pool_size = 100M
innodb_buffer_pool_size = 90G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_read_io_threads = 4
innodb_write_io_threads = 4
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 120
innodb_log_buffer_size = 8M
innodb_log_files_in_group = 3
innodb_log_file_size = 1024M
innodb_max_dirty_pages_pct = 70
innodb_strict_mode = on
innodb_thread_concurrency = 16
innodb_file_per_table = 1
innodb_support_xa = 1
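Worth noting as a hedged hypothesis: even with innodb_file_per_table, the undo logs (rollback segments) still live in ibdata1, and a reporting slave that holds transactions open for a long time can make them, and therefore ibdata1, grow; the file never shrinks afterwards. Two quick checks for that pattern:
-- Look at "History list length" under TRANSACTIONS; a very large value means purge is lagging
SHOW ENGINE INNODB STATUS\G
-- Any long-running transactions (often idle reporting sessions holding a snapshot open)?
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.innodb_trx
ORDER BY trx_started
LIMIT 10;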
unable to apply an SQL query update on table (1 reply)
Hi,
This is peculiar, as it worked properly before; I am not certain what I might be missing in the syntax.
I am trying to update a table directly from phpMyAdmin as follows (the database is for a Zen Cart installation):
UPDATE `zenproducts` SET `product_is_call`= 1 WHERE `products_price`=0
I received the following:
0 rows affected. ( Query took 0.0125 sec )
There are about 6,000 rows in that table, and the condition exists.
I also tried the following syntax, but received the same result:
UPDATE zenproducts
SET product_is_call=1
WHERE products_price=0;
Any suggestions?
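One hedged way to narrow this down (table and column names are taken from the post as written; a real Zen Cart schema may use different prefixes): check whether any rows actually match the WHERE clause right now, and whether the target rows aren't already set. "0 rows affected" also occurs when the matched rows already have the value being assigned.
-- Does anything match the condition at the moment the UPDATE runs?
SELECT COUNT(*) FROM zenproducts WHERE products_price = 0;
-- Rows that already have product_is_call = 1 count as "0 rows affected"
SELECT COUNT(*) FROM zenproducts WHERE products_price = 0 AND product_is_call <> 1;
-- Confirm the exact column names and types (a price stored as e.g. 0.0001 would silently match nothing)
SHOW COLUMNS FROM zenproducts LIKE 'product%';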
max_open_connections (1 reply)
On MySQL 4.x on Linux (RHEL 4.x), the parameter max_open_connections=1500.
The OS open_files limit was 1024 (the default).
When an application kept opening more connections, after some time MySQL hung and stopped responding.
When checking the MySQL processes with the Linux ps -ef command, we found that one of the mysql processes had gone into a DEFUNCT (zombie) state.
When trying to shut down mysqld we got a timeout message. Even after several attempts, we were unable to shut down the MySQL service.
We were also unable to kill the defunct MySQL process with kill -9.
Finally we had to restart the Linux OS in order to clear the defunct process and bring MySQL back up.
If the value of max_open_connections is greater than the OS open_files limit, is there a possibility that a MySQL process might go into DEFUNCT status?
Can anyone help me?
Also, could you help me with what the values of max_open_connections and open_files should be for a server with 8 GB RAM and 4 CPU cores running 64-bit Linux, to accommodate 1,500 MySQL connections from the application?
Thanks,
Muthu
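For what it's worth, the server-side variable is normally max_connections, and each connection (plus every open table) consumes file descriptors, so the OS limit does matter. A hedged sketch of the usual checks and settings; the numbers are illustrations, and the right values depend on the storage engines and table count:
# OS-level descriptor limit for the user running mysqld (raise it in /etc/security/limits.conf if needed)
ulimit -n
# How close the server actually gets to its connection limit
mysql -e "SHOW STATUS LIKE 'Max_used_connections'"
# my.cnf, [mysqld] section - illustrative values for ~1500 connections
max_connections  = 1500
open_files_limit = 8192
table_cache      = 2048   # called table_open_cache in 5.1 and later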
restore from mysqldump --all-databases fails with "cannot add foreign key constraint" (3 replies)
Hello,
I'm testing my backup & restore strategy, and I'm actually really glad I did ...
I'm using mysqldump to back up all our databases:
mysqldump -u root --all-databases > full.sql
And when I try to restore a single database from within the full dump file:
mysql -D bts -o -u root < full.sql
ERROR 1215 (HY000) at line 58: Cannot add foreign key constraint
And this is true for every DB in the file:
mysql -D rde -o -u root < full.sql
ERROR 1215 (HY000) at line 2571: Cannot add foreign key constraint
etc
The restore fails for every DB in the file with foreign key problems.
If I use mysqldump to back up a single DB and restore it from a file containing only that one DB, the restore completes without any errors.
The server I'm dumping from is:
Server version 5.5.28-log
Protocol version 10
I'm trying to restore the dump on a 5.5.30 or 5.6.30 server ... no luck so far.
Am I missing something here?
Thanks a lot!!!
Didier
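A hedged workaround, assuming the dump contains the standard "-- Current Database:" marker comments that mysqldump writes with --all-databases: extract just the one database's section and feed it in with foreign key checks disabled for the session, so that creation order (or constraints referencing tables in other schemas) cannot trip ERROR 1215. The database name `bts` is taken from the post; adjust to taste.
# Keep only the section for `bts` (each section starts at its "-- Current Database:" marker)
awk '/^-- Current Database: `/{use = ($0 ~ /`bts`/)} use' full.sql > bts_only.sql
# The extracted section carries its own CREATE DATABASE/USE statements, but not the dump header
# that normally disables FK checks, so prepend that ourselves:
( echo "SET FOREIGN_KEY_CHECKS=0;"; cat bts_only.sql ) | mysql -u root -p
Restoring the whole dump in one go (mysql -u root -p < full.sql, without -D/-o) also usually avoids the problem, since every referenced database is then re-created in order.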
Would like to ignore specific table/databases when logging slow_query (1 reply)
Hi,
I'd like to ignore specific tables/databases when logging slow queries. Is it possible to do something like this?
log_slow_ignore_database=information_schema
log_slow_ignore_table=mydatabase.canbeignoredtbl
(This is pseudo-configuration for my use case.)
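As far as I know, stock MySQL has no log_slow_ignore_* options (some forks add slow-log filtering, but not per-database/table ignores like these), so the usual approach is to filter when analyzing the log rather than when writing it. For example, if Percona Toolkit is available (treat the event attribute name as a sketch and check the tool's docs):
# Summarize the slow log while skipping events against information_schema
pt-query-digest slow.log --filter '$event->{db} !~ m/information_schema/'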
working with slow or unpredictable storage (1 reply)
Hello listmates,
Let's say I am running a MySQL installation on a Linux virtual host whose storage is at times unpredictable in terms of performance. What tips could you give me on tuning my MySQL engine so as to minimize the effects of this unpredictable storage on my DB engine's performance?
Most importantly, I would like to minimize the accumulation of unfinished threads that threaten the performance and usability of the host.
Thanks.
Boris.
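A hedged sketch of the knobs people usually reach for when the underlying storage is slow or jittery; every one of them trades durability or freshness for fewer and smoother writes, so treat the values as illustrations rather than recommendations:
[mysqld]
# Flush the redo log once per second instead of at every commit (can lose ~1s of commits on a crash)
innodb_flush_log_at_trx_commit = 2
# Let the OS batch binlog syncs (riskier; only relevant if binary logging is enabled)
sync_binlog = 0
# Tell InnoDB the storage cannot sustain many IOPS, so background flushing stays gentle
innodb_io_capacity = 200
# Bound how many queries run inside InnoDB at once, so stalls do not pile up threads
innodb_thread_concurrency = 8
# Cap client connections so a storage stall does not turn into thousands of stuck threads
max_connections = 200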
Maximum number of InnoDB tables in schema (3 replies)
I have a database schema which is growing by up to 600 tables a day because of work currently being done. Each of these tables has up to 400 columns, and each has a unique 'signature' of columns; there is no referential integrity involved.
Each table holds a permanent set of at most about 1,400 records and is probably no more than a handful of megabytes in size.
Is there any limit at all to the number of tables a schema can have?
Apologies for any double posting.
Thanks
Martin O'Shea.
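For reference, MySQL itself imposes no per-schema table limit and InnoDB's internal cap is in the billions; in practice the limits are the filesystem (one .frm, and with file_per_table one .ibd, per table in the schema directory) and the table caches. A hedged way to keep an eye on growth:
-- How many tables each schema currently holds
SELECT table_schema, COUNT(*) AS tables
FROM information_schema.tables
GROUP BY table_schema
ORDER BY tables DESC;
-- Cache settings worth raising as the count grows
SHOW GLOBAL VARIABLES LIKE 'table%cache%';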
mysql 5.6 host_cache COUNT_HANDSHAKE_ERRORS is growing for one host (no replies)
It's a new 5.6.12 DB.
The COUNT_HANDSHAKE_ERRORS field value (from the performance_schema.host_cache table) is growing for one given host. This causes the value of the SUM_CONNECT_ERRORS field to grow as well.
A few questions, please:
1. What could cause that behavior?
2. How can I investigate it?
3. How can I resolve or work around it?
Best regards, and looking forward to your assistance,
Avi Vainshtein
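A hedged starting point: handshake errors are usually a client that connects and drops the connection before completing authentication (port monitors, load-balancer health checks, network resets), and once SUM_CONNECT_ERRORS reaches max_connect_errors the host gets blocked. Some things to look at:
-- Per-host error breakdown for the suspect client
SELECT IP, HOST, COUNT_HANDSHAKE_ERRORS, SUM_CONNECT_ERRORS, FIRST_SEEN, LAST_SEEN
FROM performance_schema.host_cache\G
-- The blocking threshold
SHOW GLOBAL VARIABLES LIKE 'max_connect_errors';
-- Clears the host cache (and un-blocks hosts) if you need a clean slate
FLUSH HOSTS;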
Reset root password (no replies)
I am trying to reset the password for root. It is a small db so I am not worried about security while resetting.
1. Stop MySQL from the admin console.
2. Start mysqld with the option --skip-grant-tables.
3. Run the mysql client at the command prompt and connect to MySQL.
4. Run the following statements:
mysql> UPDATE mysql.user SET Password=PASSWORD('MyNewPass')
-> WHERE User='root';
mysql> FLUSH PRIVILEGES;
5. Connect to MySQL as root from MySQL Workbench.
At this point I can successfully connect to mysql db.
However, the issue is that even though I started mysqld from the command prompt, the admin console does not show the status as Started.
When I start it from the admin console, or when I reboot the machine, I can no longer connect as root using the password I reset before.
Can someone tell me what the issue is here and how to fix it?
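The likely explanation is that the mysqld started by hand with --skip-grant-tables is a separate process from the instance the admin console manages as a service, so the service instance (possibly even using a different data directory) never sees the change. A hedged alternative that works through the service itself is the documented --init-file method; the file path below is a placeholder:
# 1. Put the password change in a plain-text file, e.g. C:\mysql-init.txt, containing:
#      SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPass');
# 2. Stop the service, then start mysqld once with:
mysqld --init-file=C:\mysql-init.txt
# 3. Once the password is set, stop it, start the service normally, and delete the file
#    (it contains the password in clear text).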
Not all MySQL slow queries are being logged: why? (2 replies)
Hello!
We have the MySQL server running with:
slow_query_log = 1
slow_query_log_file = slow.log
long_query_time = 1
min_examined_row_limit = 0
Logging seems to work fine: we get new entries in slow.log just as the "Slow queries" value (from the server status) grows. However, today we noticed some extraordinary activity in MySQL. We saw a processlist with a huge number of very slow UPDATE queries, like this:
...
| 29738657 | db1 | backend1:36300 | db1 | Query | 12 | Updating | UPDATE menu_links SET menu_name = 'navigation', plid = 0, link_path = 'ratings',
router_path = ' | 0 | 0 | 0 |
| 29738808 | db1 | backend1:36394 | db1 | Query | 12 | Updating | UPDATE menu_links SET menu_name = 'navigation', plid = 0, link_path = 'ratings',
router_path = ' | 0 | 0 | 0 |
| 29739011 | db1 | backend1:36482 | db1 | Query | 12 | Updating | UPDATE menu_links SET menu_name = 'navigation', plid = 0, link_path = 'ratings',
router_path = ' | 0 | 0 | 0 |
| 29739229 | db1 | backend1:36564 | db1 | Query | 11 | Updating | UPDATE menu_links SET menu_name = 'navigation', plid = 0, link_path = 'ratings',
router_path = ' | 0 | 0 | 0 |
| 29739565 | db1 | backend1:36755 | db1 | Query | 45 | Updating | UPDATE menu_links SET has_children = 1 WHERE mlid = 2 | 0 | 0 | 0 |
| 29739599 | db1 | backend2:61156 | db1 | Query | 12 | Updating | UPDATE menu_links SET menu_name = 'navigation', plid = 0, link_path = 'ratings',
router_path = ' | 0 | 0 | 0 |
| 29739652 | db1 | backend1:36831 | db1 | Query | 12 | Updating | UPDATE menu_links SET menu_name = 'navigation', plid = 0, link_path = 'ratings',
router_path = ' | 0 | 0 | 0 |
...
As you can see, these queries took much more than 1 second to execute. However, there is nothing about them in slow.log. We do have some slow entries in slow.log for the same time period, but no UPDATE queries at all.
How can we find out why these queries are not being logged, so that we can deal properly with this situation in the future?
P.S. We also have replication here, but the server where the UPDATEs were performed is the master, so log-slow-slave-statements shouldn't matter, right? In any case, it's the same story on the slave server (no mention of the UPDATEs).
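A few things worth confirming before digging deeper, since all of these settings are dynamic and a long-lived connection can still be running with older session values; also remember a statement is only written to the slow log after it finishes, so anything still running (or killed mid-flight) never appears there. A hedged checklist:
-- Effective values right now (long_query_time also has a per-session copy)
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('slow_query_log', 'slow_query_log_file', 'long_query_time',
   'min_examined_row_limit', 'log_output', 'log_slow_slave_statements');
-- If log_output includes TABLE, entries go to mysql.slow_log instead of the file
SELECT start_time, query_time, db, LEFT(sql_text, 80)
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 20;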
Updating Grants after Table Rename (no replies)
We're tidying up our production database and in the process need to rename a number of tables.
Am I right that when I rename a table, the GRANTs on the table are not updated to point to the new table name? That is what appears to be the case when looking at the information_schema.table_privileges table.
If so is there an easy way to move the grants across to the new table name?
If not what am I missing?
MySQL Server version is 5.0
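That matches what I recall from the manual: privileges granted specifically for a renamed table are not migrated to the new name and must be changed manually. A hedged sketch with placeholder database, table, and account names:
-- See what exists for the old name
SELECT grantee, privilege_type
FROM information_schema.table_privileges
WHERE table_schema = 'mydb' AND table_name = 'old_table';
-- Re-create the grant on the new name, then drop the stale entry for the old one
GRANT SELECT, INSERT ON mydb.new_table TO 'app_user'@'%';
REVOKE SELECT, INSERT ON mydb.old_table FROM 'app_user'@'%';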
mysql master/slave synchronisation failed and restored, now ? (no replies)
Hi,
I have 2 windows servers (one master, one slave) running MySQL 5.6.13 with MySQL replication.
At some point a few days ago the replication failed due to the following bug:
http://bugs.mysql.com/bug.php?id=68892
(Invalid use of GRANT command breaks replication)
It was indeed an invalid GRANT command that I issued at some point.
I issued the following commands on the slave to restore synchronization between the two servers:
set global sql_slave_skip_counter=1
start slave
The slave is now running again, and SHOW MASTER STATUS / SHOW SLAVE STATUS both look OK.
How can I be sure that both servers are 100% in sync?
Are there any free Windows tools that make it possible to compare both databases, to be sure they are correctly in sync?
(I want to be sure I didn't skip or miss something ....)
Thank you very much!!!!
Didier
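For the "are they really in sync" question, the usual free option is Percona Toolkit's pt-table-checksum, which runs against the master and writes checksums that replicate to the slave for comparison. It is a Perl tool normally run from a Linux/Unix host that can reach both servers rather than on Windows itself, so treat the commands below (host, user, password, database are placeholders) as a sketch:
# Run against the master; results replicate and are compared on the slave
pt-table-checksum h=master_host,u=checksum_user,p=secret --databases=mydb
# If differences are reported, pt-table-sync can print the statements needed to fix them
pt-table-sync --print --replicate=percona.checksums h=master_host,u=checksum_user,p=secret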
Is this the right place to ask this? I have two tables throwing duplicate entry errors. (no replies)
I have a bash script (I know, I know) trying to update a number of tables in a big database and two tables throw error 1062.
The error is, for example, "Duplicate entry 1199-0 for key 'blahtable_uix'". 1199 is the correct foreign key id value for the first field with a MUL key in the table, but there IS a value being written into the second MUL key field, and it IS the correct value. What the heck is happening here? I've tried creative Googling but found no help.
My OS is Ubuntu Server and my MySQL version is 5.5.30-cll.
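The "1199-0" in the message suggests 'blahtable_uix' is a multi-column UNIQUE index and that a row with that exact combination already exists (or the same pair appears twice in what the script writes). A hedged way to see what the index covers and what is already there; the table name comes from the error and the column names are placeholders:
-- Which columns make up the 'blahtable_uix' unique key?
SHOW CREATE TABLE blahtable\G
-- Is there already a row with the same pair? (placeholder column names)
SELECT * FROM blahtable WHERE first_key_col = 1199 AND second_key_col = 0;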
new user has permission to existing database (3 replies)
- I created a new schema test1 and a new user test1 to access schema test1:
mysql> create schema test1;
Query OK, 1 row affected (0.00 sec)
mysql> create user test1@'localhost' identified by 'test1';
Query OK, 0 rows affected (0.00 sec)
mysql> create user test1@'%' identified by 'test1';
Query OK, 0 rows affected (0.00 sec)
mysql> grant select,update,insert,delete on test1.* to test1@'localhost' ;
Query OK, 0 rows affected (0.00 sec)
mysql> grant select,update,insert,delete on test1.* to test1@'%' ;
Query OK, 0 rows affected (0.00 sec)
- I verified that user test1 has no permissions on other schemas:
select * from mysql.db where user='test1' and db != 'test1' ;
- But when I log in as test1, this user can go to other schemas and create and drop tables, although it cannot use the mysql schema.
Why is that, and how can I solve this problem?
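One classic cause, assuming the schemas the user can write to are named test or start with test_: a default MySQL install ships rows in mysql.db that give every account full rights on those databases. A hedged check and cleanup:
-- Default wide-open grants on test/test\_% have an empty User column
SELECT host, db, user FROM mysql.db WHERE user = '';
-- Also check for anonymous accounts and what test1 actually holds
SELECT user, host FROM mysql.user WHERE user = '';
SHOW GRANTS FOR 'test1'@'localhost';
-- If those default rows are the culprit, mysql_secure_installation removes them with roughly:
DELETE FROM mysql.db WHERE Db = 'test' OR Db = 'test\\_%';
FLUSH PRIVILEGES;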
Amazon RDS - Can't connect to MySQL database server? (no replies)
Hello,
So I've been working on this issue for a while now, and I'll explain what I'm trying to do. I'm trying to connect to a MySQL database server, hosted by Amazon Web Services in the Amazon RDS service. The database is up and running, and I've created a security group, as well as an EC2 security group. Within the EC2 security group, I've opened up port 3306 (the MySQL port) and added it as a rule. I've then applied this rule, connected the security group to the EC2 security group, and connected the database to the security group. Thus the database should have port 3306 open (and for all I know it does!).
So here's the issue. I'm trying to connect to the MySQL server based on the IP of the database I'm given by Amazon Web Services. However, when I attempt to connect via the Terminal command shell, the connection times out, saying that the MySQL connection failed.
Here's what I type in: "mysql -h ***********.rds.amazonaws.com" where the asterisks are part of the IP address.
The error is as follows: "ERROR 2003 (HY000): Can't connect to MySQL server on '***********.rds.amazonaws.com' (60)"
I've also tried specifying a username and password to the command, but the same error occurs.
Any thoughts as to why the connection is failing?
Thanks, any help appreciated!
Jake
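Error 2003 with (60) is an OS-level "Operation timed out", which almost always means the TCP connection never reaches the instance (security group source range, the instance not being publicly accessible, or a local firewall) rather than a MySQL authentication problem; note also that the RDS endpoint is a DNS name, not an IP address. A couple of hedged checks, with a placeholder endpoint:
# Is port 3306 reachable at all from this machine?
nc -vz mydbinstance.abc123.us-east-1.rds.amazonaws.com 3306
# Connect with the port and the RDS master user spelled out explicitly
mysql -h mydbinstance.abc123.us-east-1.rds.amazonaws.com -P 3306 -u masteruser -p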
MySQL PROCESS TAKES MORE THAN 70% OF CPU (1 reply)
Hello to the team.
I'm facing an issue on my CentOS server: mysql overloads the CPU.
This is what I have in cPanel:
/usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/a173.amber.fastwebserver.de.err --open-files-limit=99000 --pid-file=/var/lib/mysql/a173.amber.fastwebserver.de.pid
And after I trace the PID:
setsockopt(509, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
clone(child_stack=0x7f79c87d6ff0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f79c87d79d0, tls=0x7f79c87d7700, child_tidptr=0x7f79c87d79d0) = 16842
poll([{fd=12, events=POLLIN}, {fd=14, events=POLLIN}], 2, -1) = 1 ([{fd=14, revents=POLLIN}])
fcntl(14, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(14, F_SETFL, O_RDWR|O_NONBLOCK) = 0
accept(14, {sa_family=AF_FILE, NULL}, [2]) = 1013
fcntl(14, F_SETFL, O_RDWR) = 0
getsockname(1013, {sa_family=AF_FILE, path="/var/lib/mysql/mysql.sock"}, [28]) = 0
fcntl(1013, F_SETFL, O_RDONLY) = 0
fcntl(1013, F_GETFL) = 0x2 (flags O_RDWR)
setsockopt(1013, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
setsockopt(1013, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
fcntl(1013, F_SETFL, O_RDWR|O_NONBLOCK) = 0
and it keeps looping.
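The strace output above is mostly normal accept/poll socket handling, so the CPU time is more likely going into query execution. A hedged first pass from inside MySQL (values illustrative):
-- What is actually running, and for how long?
SHOW FULL PROCESSLIST;
-- How many threads are executing right now versus merely connected?
SHOW GLOBAL STATUS LIKE 'Threads_%';
-- Turn on the slow log temporarily to catch the offenders
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;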
RoundRobin to connect to MySQL Cluster (no replies)
Hi, I hope someone with experience can help me with this.
I want to set up a cluster of 3 MySQL servers (Ubuntu 12.04). I'm connecting from multiple (Ubuntu 12.04) machines to the cluster.
Can I use DNS Round Robin to connect to them?
If a server goes down and the client connects to a dead node, will it automatically switch to the next one?
Will the first command get executed or will it result in an error?
Any help is greatly appreciated!
MySQL 5.6.10 - Slave I/O thread dies often with message: Error reading packet from server: bogus data in log event (no replies)
I have production databases running MySQL 5.6.10 on Linux (Red Hat distribution) with a simple setup: a master with two slaves and very low traffic (supporting a new application rolled out in production recently).
Both slaves have frequent replication failures with the following message, which leaves the slave I/O thread in the stopped state while the SQL thread is still running. Replication can be fixed by issuing START SLAVE or START SLAVE IO_THREAD, and it then runs without any issues until it stops again some time later with the same message but different coordinates.
[ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'bogus data in log event; the first event 'XXX-mysql-bin.000018' at 29257479, the last event read from './XXX-mysql-bin.000018' at 29529116, the last byte read from './XXX-mysql-bin.000018' at 29529135.', Error_code: 1236
[ERROR] Error reading packet from server: bogus data in log event; the first event 'XXX-mysql-bin.000018' at 29257479, the last event read from './XXX-mysql-bin.000018' at 29529116, the last byte read from './XXX-mysql-bin.000018' at 29529135. ( server_errno=1236)
Research done so far:
1) Binary log corruption is ruled out because the same issue repeats irrespective of the database host and I have tried setting up a brand new instance with replication etc. and still find the same issue
2) The failure doesn't seem to be associated with any particular query being executed by the application
3) pt-checksum tool revealed no missing data
4) pt-variable advisor report didn't report anything bad
5) A TCP dump revealed that the master host sends a FIN flag (no more data from sender) followed by an RST flag (reset connection). No packet loss.
6) SAR reports don't indicate any concern about system resources.
7) Confirmed the server UUID is unique on all database hosts.
8) max_allowed_packet is set to 128M, which I assume is plenty.
9) slave_net_timeout, net_read_timeout and net_write_timeout values are set appropriately.
10) max_binlog_size and max_relay_log_size are currently set to 32MB.
Additional information:
a) OS: Linux version 2.6.32-71.el6.x86_64 (Red Hat 4.4.4-13)
b) MySQL: 5.6.10 Community Server
c) The database host is using NAS storage (NFS)
d) MySQL configuration parameters:
* innodb_flush_log_at_trx_commit=1
* innodb_buffer_pool_instances=8
* innodb_flush_log_at_trx_commit=1
* innodb_log_buffer_size=8M
* innodb_flush_method=O_DIRECT
* sync_binlog=1
* net_read_timeout=300
* net_write_timeout=600
* slave_net_timeout=28800
* max_allowed_packet=128M
* sync_relay_log=1 (Changed from default value of 10000 to experiment but didn't fix the issue)
* sync_relay_log_info=1 (Changed from default value of 10000 to experiment but didn't fix the issue)
* sync_master_info=1 (Changed from default value of 10000 to experiment but didn't fix the issue)
* transaction_isolation=READ-COMMITTED
I would greatly appreciate it if someone on this forum could help me resolve this issue or give me pointers on what I should do to investigate it further.
Appreciate your help in advance!
Thanks,
Gayathri
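One hedged way to split the problem: read the master's binary log directly at the offsets from the error message, to see whether the file itself is damaged or only the copy sent over the wire is (binary logs living on NFS are worth suspecting either way). File name and position below are taken from the error:
# On the master: can mysqlbinlog read past the failing position cleanly?
mysqlbinlog --start-position=29257479 XXX-mysql-bin.000018 > /dev/null && echo "binlog reads OK"
# 5.6 writes CRC32 event checksums by default; verify them too, if your mysqlbinlog supports the option
mysqlbinlog --verify-binlog-checksum --start-position=29257479 XXX-mysql-bin.000018 > /dev/null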
Aborted connection 'number' to db: 'database' user: 'dbuser' host: 'localhost' (Got an error writing communication packets) (no replies)
Hi,
I would be grateful for any hint to solve my problem. Recently I saw these warnings in the MySQL error log:
131021 21:48:52 [Warning] Aborted connection 318 to db: 'db_north' user: 'north' host: 'localhost' (Got an error writing communication packets)
Not many, but a few per day. Sometimes it happens, for example, in phpMyAdmin (I get error #2013 - Lost connection to MySQL server during query) while importing a database backup (5 MB gzipped, 77 MB unzipped). Every time, it imports a few tables and breaks in exactly the same place (the same backup works on other servers). I tried removing a few lines, but that does not help; it just breaks a few lines later. The same problem occurs with other backup files (different databases, etc.).
I double-checked the configuration and followed the instructions in:
http://dev.mysql.com/doc/refman/5.6/en/gone-away.html
http://dev.mysql.com/doc/refman/5.6/en/communication-errors.html
Everything looks OK, at least I think so.
My server is a Hetzner dedicated server:
Debian Wheezy, MySQL 5.5.31
Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz, 12 GB RAM
I have a similar server (different i7 version, 32 GB RAM) with exactly the same configuration, and the problem does not occur there. I compared the /etc config files and everything looks exactly the same, including the MySQL version. Same APT source list.
Using bash, e.g. mysql -u north -p db_north < db_backup.sql, works OK.
I have tried to find a solution but have exhausted my ideas (I even checked the drives for bad blocks). It's not wait_timeout or max_allowed_packet. I would be grateful for any hint. Sorry for my poor English.
Best Regards,
Andrzej
my.cnf:
[mysql]
# CLIENT #
port = 3306
default-character-set =utf8
socket = /var/run/mysqld/mysqld.sock
[mysqld]
# GENERAL #
bind-address = 127.0.0.1
skip-name-resolve
user = mysql
default_storage_engine = InnoDB
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
# MyISAM #
key_buffer_size = 1G
myisam_recover = FORCE,BACKUP
# SAFETY #
max_allowed_packet = 128M
max_connect_errors = 1000000
# DATA STORAGE #
datadir = /var/lib/mysql/
# BINARY LOGGING #
log_bin = /var/log/mysql/mysql-bin
binlog-format = mixed
expire_logs_days = 14
sync_binlog = 1
# CACHES AND LIMITS #
tmp_table_size = 50M
max_heap_table_size = 50M
query_cache_type = 1
query_cache_size = 512M
query_cache_limit = 1M
max_connections = 500
thread_cache_size = 50
open_files_limit = 65535
table_definition_cache = 10240
table_open_cache = 10240
# INNODB #
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_buffer_pool_size = 6G
# LOGGING #
log_error = /var/log/mysql/mysql-error.log
log_queries_not_using_indexes = 1
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
log-warnings=2
#general_log=1
#log = /var/log/mysql/query.log
!includedir /etc/mysql/conf.d/
===============================SOLVED========================================
Problem solved. It turned out that the cause was a daemon that imposes a limit on CPU usage (using cpulimit) on user processes running for more than a specified amount of time. So it wasn't a MySQL problem.
=============================================================================