Optimization | Performance gain (x) | Cost | Notes |
---|---|---|---|
Reduce data access (skip the access entirely where possible: less disk I/O) | 1~1000 | Low | Cache MySQL data in a caching server such as Redis or memcached |
Return less data (less network transfer and disk I/O) | 1~100 | Low | Propagation delay is physical: request distance / speed of light = response time |
Reduce round trips (less network transfer) | 1~20 | Low | Stored procedures can do this but lack extensibility and are rarely used, so batch at the application layer instead |
Request queue | Dispatch order |
---|---|
Original queue | 100 500 101 10 56 1000 |
NOOP queue | 100 101 500 10 56 1000 |
CFQ queue | 100 101, 500 1000, 10 56 (56 may be starved) |
[root@docker02 queue]# dmesg |grep -i scheduler
[ 0.796082] io scheduler noop registered
[ 0.796084] io scheduler deadline registered (default)
[ 0.796103] io scheduler cfq registered
[root@docker02 queue]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
The default is deadline:
[root@docker02 queue]# echo noop > /sys/block/sda/queue/scheduler
[root@docker02 queue]# cat scheduler
[noop] deadline cfq
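The bracketed entry in the scheduler file marks the active scheduler. A small sketch of pulling it out in a script; the sample string below mirrors the output above so it runs without root, while on a real host you would read /sys/block/sda/queue/scheduler:

```shell
# Extract the active I/O scheduler: it is the bracketed entry in
# /sys/block/<dev>/queue/scheduler.
line="noop [deadline] cfq"     # e.g. $(cat /sys/block/sda/queue/scheduler)
current=$(echo "$line" | sed 's/.*\[\(.*\)\].*/\1/')
echo "active scheduler: $current"
```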
https://www.cnblogs.com/zhenjing/archive/2012/06/20/linux_writeback.html
Server resources come down to four things: CPU, memory, disk I/O, and network.
On Windows (cmd), run wmic and then:
wmic
cpu get *
Name NumberOfCores NumberOfEnabledCore
... 2.20GHz 2 2
#NumberOfCores: physical core count
#NumberOfEnabledCore: enabled core count (the logical/thread count is NumberOfLogicalProcessors)
#Example laptop: 1 physical CPU, 2 cores, 4 threads
[root@docker02 queue]# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
4 Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
[root@docker02 queue]# cat /proc/cpuinfo|grep "physic"
physical id : 0
address sizes : 40 bits physical, 48 bits virtual
physical id : 0
address sizes : 40 bits physical, 48 bits virtual
physical id : 1
address sizes : 40 bits physical, 48 bits virtual
physical id : 1
address sizes : 40 bits physical, 48 bits virtual
[root@docker02 queue]# cat /proc/cpuinfo| grep "physical id"| sort| uniq| wc -l
2
[root@docker02 queue]# cat /proc/cpuinfo| grep "cpu cores"| uniq
cpu cores : 2
[root@docker02 queue]# cat /proc/cpuinfo| grep "processor"| wc -l
4
The core count found here can guide MySQL's thread setting: thread_concurrency
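The three greps above can be combined into one topology summary. A sketch, run here against an inline sample in the same format as /proc/cpuinfo so it is self-contained; on a live system point it at the real file:

```shell
# Summarize CPU topology: sockets, cores per socket, logical CPUs.
cpuinfo=$(cat <<'EOF'
processor : 0
physical id : 0
cpu cores : 2
processor : 1
physical id : 0
cpu cores : 2
processor : 2
physical id : 1
cpu cores : 2
processor : 3
physical id : 1
cpu cores : 2
EOF
)
sockets=$(echo "$cpuinfo" | grep "physical id" | sort -u | wc -l | tr -d ' ')
cores=$(echo "$cpuinfo" | grep -m1 "cpu cores" | awk '{print $NF}')
threads=$(echo "$cpuinfo" | grep -c "^processor")
echo "${sockets} socket(s) x ${cores} core(s)/socket = ${threads} logical CPUs"
```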
[root@docker02 ~]# sysctl -a |grep swappiness
vm.swappiness = 30
[root@docker02 ~]# cat /proc/sys/vm/swappiness
30
[root@docker02 ~]# sysctl vm.swappiness=10
vm.swappiness = 10
[root@docker02 ~]# vi /etc/sysctl.conf
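sysctl -w only changes the running kernel; to survive a reboot the setting must also go into /etc/sysctl.conf (then sysctl -p to reload). A sketch of the update-or-append pattern, edited here on a temp file so it runs anywhere without touching the real config:

```shell
# Persist vm.swappiness = 10. A temp file stands in for /etc/sysctl.conf.
conf=$(mktemp)
if grep -q '^vm.swappiness' "$conf"; then
    # setting already present: rewrite it in place
    sed -i 's/^vm.swappiness.*/vm.swappiness = 10/' "$conf"
else
    # setting absent: append it
    echo 'vm.swappiness = 10' >> "$conf"
fi
cat "$conf"
# on a real host: sysctl -p to apply
```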
https://www.cnblogs.com/yubo/archive/2010/04/23/1718810.html
The numactl tool is required: yum -y install numactl
[root@docker02 grub2]# dmesg | grep -i numa
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x0fffffff] -> [mem 0x00000000-0x0fffffff]
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0x0fffffff] + [mem 0x10000000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x13fffffff] -> [mem 0x00000000-0x13fffffff]
[root@docker02 device]# grep -i numa /var/log/dmesg
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x0fffffff] -> [mem 0x00000000-0x0fffffff]
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0x0fffffff] + [mem 0x10000000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x13fffffff] -> [mem 0x00000000-0x13fffffff]
This shows NUMA is enabled.
[root@docker02 grub2]# numastat
node0
numa_hit 10677303
numa_miss 0
numa_foreign 0
interleave_hit 14453
local_node 10677303
other_node 0
[root@docker02 grub2]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 4095 MB
node 0 free: 1769 MB
node distances:
node 0
0: 10
Question: the VM was allocated 2 x 2 CPUs on ESXi, yet only 1 node with 4 CPUs is visible here. Does that mean the 4 CPUs actually sit inside a single physical CPU?
If cores per socket (cpuid.coresPerSocket) is greater than 1 and the number of virtual cores in the virtual machine is greater than 8, the virtual NUMA node size matches the virtual socket size. If cores per socket is less than or equal to 1, virtual NUMA nodes are created to match the topology of the first physical host on which the virtual machine is powered on.
http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.resmgmt.doc%2FGUID-17B629DE-75DF-4C23-B831-08107007FBB9.html
[root@docker02 grub2]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
Stepping: 4
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-3
To check how busy each logical CPU is, install sysstat: yum -y install sysstat
[root@docker02 grub2]# mpstat -P ALL
Linux 3.10.0-229.el7.x86_64 (docker02.bluemoon.com.cn) 03/01/2018 _x86_64_ (4 CPU)
01:37:49 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
01:37:49 PM all 0.12 0.00 0.11 0.02 0.00 0.00 0.00 0.00 0.00 99.75
01:37:49 PM 0 0.11 0.00 0.10 0.02 0.00 0.00 0.00 0.00 0.00 99.77
01:37:49 PM 1 0.13 0.00 0.11 0.02 0.00 0.00 0.00 0.00 0.00 99.73
01:37:49 PM 2 0.12 0.00 0.10 0.02 0.00 0.00 0.00 0.00 0.00 99.75
01:37:49 PM 3 0.12 0.00 0.11 0.01 0.00 0.01 0.00 0.00 0.00 99.75
[root@docker02 device]# cat /sys/class/net/eth0/device/numa_node
-1
(A value of -1 means the kernel reports no NUMA node affinity for this device.)
http://blog.csdn.net/shaoyunzhe/article/details/53606584
https://www.cnblogs.com/wjoyxt/p/4804081.html
[root@docker01 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 2047 MB
node 0 free: 1405 MB
node 1 cpus: 8 9 10 11 12 13 14 15
node 1 size: 2047 MB
node 1 free: 1642 MB
node distances:
node 0 1
0: 10 20
1: 20 10
[root@docker01 ~]# numastat
node0 node1
numa_hit 202582 254284
numa_miss 0 0
numa_foreign 0 0
interleave_hit 7283 7172
local_node 200189 245831
other_node 2393 8453
[root@docker01 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 2047 MB
node 0 free: 1405 MB
node 1 cpus: 8 9 10 11 12 13 14 15
node 1 size: 2047 MB
node 1 free: 1642 MB
node distances:
node 0 1
0: 10 20
1: 20 10
[root@docker01 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
Stepping: 4
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.resmgmt.doc%2FGUID-3E956FB5-8ACB-42C3-B068-664989C3FF44.html
[root@docker01 grub2]# vi /boot/grub2/grub.cfg
linux16 /vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/centos-root ro net.ifnames=0 biosdevname=0 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap numa=off rhgb quiet
[root@docker01 ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/centos-root ro net.ifnames=0 biosdevname=0 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap numa=off rhgb quiet
[root@docker01 grub2]# dmesg |grep -i numa
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/centos-root ro net.ifnames=0 biosdevname=0 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap numa=off rhgb quiet
[ 0.000000] NUMA turned off
[ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/centos-root ro net.ifnames=0 biosdevname=0 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap numa=off rhgb quiet
With numa=off, lscpu now shows a single NUMA node as well:
[root@docker01 grub2]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
Stepping: 4
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-15
[root@docker01 grub2]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 size: 4095 MB
node 0 free: 3111 MB
node distances:
node 0
0: 10
Alternatively, start mysqld with --innodb-numa-interleave:
[root@docker01 ~]# mysqld --verbose --help|grep numa
2018-03-01 14:23:29 0 [Note] mysqld (mysqld 5.6.39-log) starting as process 1950 ...
2018-03-01 14:23:29 1950 [Note] Plugin 'FEDERATED' is disabled.
--innodb-numa-interleave
innodb-numa-interleave FALSE
2018-03-01 14:23:29 1950 [Note] Binlog end
2018-03-01 14:23:29 1950 [Note] Shutting down plugin 'MyISAM'
2018-03-01 14:23:29 1950 [Note] Shutting down plugin 'CSV'
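Rather than passing the flag on every start, the option can be set persistently in my.cnf. It is exposed as the innodb_numa_interleave system variable from MySQL 5.6.27 on; verify against your version before relying on it:

```ini
# my.cnf: allocate the InnoDB buffer pool interleaved across NUMA nodes
[mysqld]
innodb_numa_interleave = ON
```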
https://www.cnblogs.com/conanwang/p/6180894.html
[root@docker01 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
[root@docker01 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
NAME=eth1
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
[root@docker01 network-scripts]# cat ifcfg-bond0
TYPE=Ethernet
BOOTPROTO=none
NAME=bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=172.16.3.153
NETMASK=255.255.255.0
GATEWAY=172.16.3.254
DNS1=172.16.13.13
[root@docker01 network-scripts]# cat /etc/modprobe.d/mlx4.conf
alias bond0 bonding
options bond0 miimon=100 mode=1
[root@docker01 network-scripts]# modprobe bonding
[root@docker01 network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 00:50:56:a3:30:bb brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 00:50:56:a3:30:bb brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 00:50:56:a3:30:bb brd ff:ff:ff:ff:ff:ff
inet 172.16.3.153/24 brd 172.16.3.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fea3:30bb/64 scope link
valid_lft forever preferred_lft forever
eth0, eth1, and bond0 all show the same MAC address.
[root@docker01 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:a3:30:bb
Slave queue ID: 0
Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:a3:8a:c2
Slave queue ID: 0
netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
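What the one-liner does: awk buckets every tcp line by its last field (the connection state) and prints a count per state. Demonstrated here on inline sample lines so the logic is verifiable without live connections:

```shell
# Tally TCP connections by state ($NF = last column of `netstat -n`).
sample='tcp        0      0 10.0.0.1:3306  10.0.0.2:55440  ESTABLISHED
tcp        0      0 10.0.0.1:3306  10.0.0.3:55441  TIME_WAIT
tcp        0      0 10.0.0.1:3306  10.0.0.4:55442  ESTABLISHED'
out=$(printf '%s\n' "$sample" \
    | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}' \
    | sort)
echo "$out"
```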
[root@docker02 device]# sar -n DEV
Linux 3.10.0-229.el7.x86_64 (docker02) 03/01/2018 _x86_64_ (4 CPU)
01:30:01 PM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:40:01 PM eth0 4.96 3.11 0.34 0.22 0.00 0.00 0.00
01:40:01 PM lo 5.96 5.96 0.34 0.34 0.00 0.00 0.00
01:50:01 PM eth0 5.20 3.25 0.37 0.25 0.00 0.00 0.00
01:50:01 PM lo 5.96 5.96 0.34 0.34 0.00 0.00 0.00
02:00:01 PM eth0 1.72 0.12 0.11 0.01 0.00 0.00 0.00
02:00:01 PM lo 6.04 6.04 0.34 0.34 0.00 0.00 0.00
02:10:01 PM eth0 2.06 0.08 0.13 0.00 0.00 0.00 0.00
02:10:01 PM lo 5.96 5.96 0.34 0.34 0.00 0.00 0.00
02:20:01 PM eth0 2.47 0.07 0.15 0.00 0.00 0.00 0.00
02:20:01 PM lo 5.95 5.95 0.34 0.34 0.00 0.00 0.00
02:30:01 PM eth0 2.66 0.68 0.20 0.30 0.00 0.00 0.00
02:30:01 PM lo 6.13 6.13 0.35 0.35 0.00 0.00 0.00
02:40:01 PM eth0 1.90 0.07 0.13 0.00 0.00 0.00 0.00
02:40:01 PM lo 5.95 5.95 0.34 0.34 0.00 0.00 0.00
02:50:01 PM eth0 1.92 0.08 0.12 0.00 0.00 0.00 0.00
02:50:01 PM lo 5.96 5.96 0.34 0.34 0.00 0.00 0.00
03:00:01 PM eth0 1.74 0.07 0.11 0.00 0.00 0.00 0.00
03:00:01 PM lo 6.00 6.00 0.34 0.34 0.00 0.00 0.00
Average: eth0 2.74 0.84 0.19 0.09 0.00 0.00 0.00
Average: lo 5.99 5.99 0.34 0.34 0.00 0.00 0.00
[root@docker02 device]# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 1000baseT/Full
10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
MDI-X: Unknown
Supports Wake-on: uag
Wake-on: d
Link detected: yes
https://www.cnblogs.com/digdeep/p/4869010.html
[root@docker02 device]# ifconfig eth0 mtu 9000 up
[root@docker02 device]# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
/proc/sys/net/ipv4/tcp_mem: system-wide limit on memory used by all TCP buffers, in pages (4 KB each); packets may be dropped once the upper limit is exceeded
[root@docker02 device]# sysctl -a |grep tcp_mem
net.ipv4.tcp_mem = 92451 123271 184902
/proc/sys/net/ipv4/tcp_rmem: minimum, default, and maximum size in bytes of the TCP receive buffer; the default here takes precedence over rmem_default for TCP sockets
[root@docker02 device]# sysctl -a |grep tcp_rmem
net.ipv4.tcp_rmem = 4096 87380 6291456
/proc/sys/net/ipv4/tcp_wmem: the same three values as above, but for the send buffer
[root@docker02 device]# sysctl -a |grep tcp_wmem
net.ipv4.tcp_wmem = 4096 16384 4194304
/proc/sys/net/core/wmem_default: default socket send buffer size in bytes
[root@docker02 device]# sysctl -a |grep wmem_default
net.core.wmem_default = 212992
/proc/sys/net/core/wmem_max: maximum socket send buffer size in bytes
[root@docker02 device]# sysctl -a |grep wmem_max
net.core.wmem_max = 212992
/proc/sys/net/core/rmem_default: default socket receive buffer size in bytes
[root@docker02 device]# sysctl -a |grep rmem_default
net.core.rmem_default = 212992
/proc/sys/net/core/rmem_max: maximum socket receive buffer size in bytes
[root@docker02 device]# sysctl -a |grep rmem_max
net.core.rmem_max = 212992
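If the defaults above prove too small for high-bandwidth links, these four are commonly raised together in /etc/sysctl.conf. The sizes below are illustrative placeholders, not values taken from this host; size the maximums to roughly bandwidth x round-trip time for your network:

```ini
# /etc/sysctl.conf (illustrative values, apply with sysctl -p)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```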
Use XFS or EXT4 as the filesystem, and consider these mount options:
noatime
Do not update inode access times on this filesystem (e.g., for faster
access on the news spool to speed up news servers).
barrier|nobarrier
Enables/disables the use of block layer write barriers for writes into
the journal and for data integrity operations. This allows for drive
level write caching to be enabled, for devices that support write bar‐
riers.
total memory size = max_threads (current active connections) * (
    read_buffer_size +        # sequential read buffer, speeds up sequential scans
    read_rnd_buffer_size +    # random read buffer, speeds up random reads
    sort_buffer_size +        # sort buffer, speeds up sorting
    join_buffer_size +        # join buffer, speeds up table joins
    binlog_cache_size +       # binary log cache, speeds up binlog writes
    tmp_table_size +          # in-memory temporary tables
    thread_stack +            # per-thread stack holding the running SQL statement or stored procedure
    thread_cache_size +       # thread cache, avoids repeatedly creating and destroying threads
    net_buffer_length +       # connection and result-set buffers
    bulk_insert_buffer_size   # MyISAM bulk-insert buffer
)
global buffer size =
    innodb_buffer_pool_size +          # InnoDB cache: row data, indexes, transaction locks, adaptive hash index
    innodb_additional_mem_pool_size +  # extra InnoDB data dictionary memory, caches all table dictionaries
    innodb_log_buffer_size +           # InnoDB redo log buffer, speeds up redo writes
    key_buffer_size +                  # MyISAM index cache, speeds up MyISAM index reads and writes
    query_cache_size +                 # query cache, stores result sets to speed up repeated identical queries
    table_cache +                      # table file descriptor cache, speeds up opening tables
    table_definition_cache             # table definition cache (table structures), speeds up opening tables
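Plugging numbers into the two formulas above shows how quickly per-connection buffers dominate at high connection counts. All sizes below are made-up examples in MB, not recommendations:

```shell
# Worst-case memory estimate: global buffers + max_threads * per-thread buffers.
max_threads=200                    # assumed active connections
# per-connection buffers, MB: read + read_rnd + sort + join + binlog_cache
#                             + tmp_table + thread_stack + net_buffer
per_thread_mb=$((1 + 1 + 2 + 2 + 1 + 16 + 1 + 1))
# global buffers, MB: buffer_pool + log_buffer + key_buffer + query_cache
global_mb=$((4096 + 16 + 128 + 64))
total_mb=$((global_mb + max_threads * per_thread_mb))
echo "worst case: ~${total_mb} MB"
```

On these numbers a host with only 8 GB of RAM could already swap under full load, which is why max_connections and tmp_table_size deserve as much attention as the buffer pool.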
http://blog.csdn.net/gua___gua/article/details/44916207
Patterns that defeat indexes or the optimizer:
- like '%keyvalue%' or like '%keyvalue': a leading wildcard prevents index use
- not in, !=, not like: negative conditions generally cannot use an index
- or: when or conditions coexist, the MySQL optimizer does not produce a good execution plan; rewriting with union all (or union) usually performs better, e.g. select id from t1 where a=1 or b=2; -> select id from t1 where a=1 union all select id from t1 where b=2;
- Prefer union all to union: union merges the result sets and then runs a deduplication pass, which union all skips
Minimize computation inside the database:
select id * 10.2 from t1 where id > 0; -> select id from t1 where id > 0; (do the multiplication in the application)
Paginate by key instead of a large offset:
select id,name from t1 limit 100000,10; -> select id,name from t1 where id > 100000 limit 10;
The first form scans 100,000 rows before returning 10; the second jumps straight to the 10 rows after id 100000.
MySQL beginner series 09: key points of MySQL performance optimization