A full walkthrough of debugging slow disk IO reads
While testing disk read speed on two machines of the same model (snap1 and snap3), we found a large difference in read throughput between them:

#dd if=/dev/dm-93 of=/dev/null bs=4M count=1024

711MB/s on snap1.
178MB/s on snap3.
Next we compared the following sysfs fields for the dm-93 device (a RAID volume) on snap1 and snap3; the output was identical on both machines:

/sys/block/<device>/queue/max_sectors_kb
/sys/block/<device>/queue/nomerges
/sys/block/<device>/queue/rq_affinity
/sys/block/<device>/queue/scheduler

The meaning of these fields is documented at:

https://www.kernel.org/doc/Documentation/block/queue-sysfs.txt
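To make the comparison repeatable, the fields can be dumped with a small loop and the output diffed between the two machines. A minimal sketch, assuming the device shows up as dm-93 under /sys/block as it does in the dd test above:

#!/bin/bash
# dump_queue.sh <device> - print the block-queue tunables compared above
dev=${1:-dm-93}
for f in max_sectors_kb nomerges rq_affinity scheduler; do
        printf '%s: %s\n' "$f" "$(cat /sys/block/$dev/queue/$f)"
done
# Run on both snap1 and snap3, then diff the two outputs.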
Then we used blktrace to monitor how IO to the disk is handled:

#blktrace /dev/dm-93

and blkparse to read the log that blktrace collected:
253,108 1 1 7.263881407 21072 Q R 128 + 128 [dd]

On snap3, dd queues a read request for 128 sectors (64 KB).

253,108 1 2 7.263883907 21072 G R 128 + 128 [dd]
253,108 1 3 7.263885017 21072 I R 128 + 128 [dd]
253,108 1 4 7.263886077 21072 D R 128 + 128 [dd]

The IO is issued to the disk.

253,108 0 1 7.264883548 3 C R 128 + 128 [0]

About 1 ms later the IO completes.

253,108 1 5 7.264907601 21072 Q R 256 + 128 [dd]

Only after the disk has finished the previous IO does dd start the next one.

253,108 1 6 7.264908587 21072 G R 256 + 128 [dd]
253,108 1 7 7.264908937 21072 I R 256 + 128 [dd]
253,108 1 8 7.264909470 21072 D R 256 + 128 [dd]
253,108 0 2 7.265757903 3 C R 256 + 128 [0]

On snap1 the behaviour is completely different: dd issues the next IO before the previous one has completed, so consecutive requests get merged (the M events; see the counting sketch after this trace).

253,108 17 1 5.020623706 23837 Q R 128 + 128 [dd]
253,108 17 2 5.020625075 23837 G R 128 + 128 [dd]
253,108 17 3 5.020625309 23837 P N [dd]
253,108 17 4 5.020626991 23837 Q R 256 + 128 [dd]
253,108 17 5 5.020627454 23837 M R 256 + 128 [dd]
253,108 17 6 5.020628526 23837 Q R 384 + 128 [dd]
253,108 17 7 5.020628704 23837 M R 384 + 128 [dd]
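A quick way to quantify the difference is to count the blkparse event types on each machine: on snap1 a large share of the queued requests should show up as merges (M), while on snap3 almost every request goes through its own full Q/G/I/D/C cycle. A minimal sketch, assuming blktrace was left at its default output name so the per-CPU trace files start with dm-93:

#blkparse -i dm-93 | awk '{count[$6]++} END {for (ev in count) print ev, count[ev]}'

Here $6 is the event-type column of blkparse's default trace output (Q = queued, M = merged into an existing request, D = dispatched to the device, C = completed).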
We now suspected that snap3 was not doing any readahead, but read_ahead_kb had the same value, 512, on both machines:

#cat /sys/block/<device>/queue/read_ahead_kb
512
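Another cross-check along the same line of thought: bypassing the page cache with O_DIRECT takes readahead out of the picture entirely, so if the two machines perform similarly with direct IO but differ with buffered reads, the device itself is fine and the problem sits in the readahead path. A sketch reusing the dd parameters from above (iflag=direct assumes GNU dd):

#dd if=/dev/dm-93 of=/dev/null bs=4M count=1024 iflag=direct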
With nothing else left to try, we brought out the big gun: use kprobes to trace the arguments of the readahead-related functions:

#ra_trace.sh
#!/bin/bash
if [ "$#" != 1 ]; then
        echo "Usage: ra_trace.sh <device>"
        exit
fi
# Probe the readahead entry points and decode the file_ra_state argument.
echo 'p:do_readahead __do_page_cache_readahead mapping=%di offset=%dx pages=%cx' >/sys/kernel/debug/tracing/kprobe_events
echo 'p:submit_ra ra_submit mapping=%si ra=%di rastart=+0(%di) rasize=+8(%di):u32 rapages=+16(%di):u32' >>/sys/kernel/debug/tracing/kprobe_events
echo 'p:sync_ra page_cache_sync_readahead mapping=%di ra=%si rastart=+0(%si) rasize=+8(%si):u32 rapages=+16(%si):u32' >>/sys/kernel/debug/tracing/kprobe_events
echo 'p:async_ra page_cache_async_readahead mapping=%di ra=%si rastart=+0(%si) rasize=+8(%si):u32 rapages=+16(%si):u32' >>/sys/kernel/debug/tracing/kprobe_events
# Enable the probes, repeat the dd test, then disable them again.
echo 1 >/sys/kernel/debug/tracing/events/kprobes/enable
dd if="$1" of=/dev/null bs=4M count=1024
echo 0 >/sys/kernel/debug/tracing/events/kprobes/enable
# Dump whatever the probes recorded for a few seconds.
cat /sys/kernel/debug/tracing/trace_pipe &
CATPID=$!
sleep 3
kill $CATPID
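Run it against the device under test, for example:

#./ra_trace.sh /dev/dm-93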
And indeed, on snap3 rasize=0 while reading: no readahead is being performed at all.

<...>-35748 [009] 2507549.022375: submit_ra: (.ra_submit+0x0/0x38) mapping=c0000001bbd17728 ra=c000000191a261f0 rastart=df0b rasize=0 rapages=8
<...>-35748 [009] 2507549.022376: do_readahead: (.__do_page_cache_readahead+0x0/0x208) mapping=c0000001bbd17728 offset=df0b pages=0
<...>-35748 [009] 2507549.022694: sync_ra: (.page_cache_sync_readahead+0x0/0x50) mapping=c0000001bbd17728 ra=c000000191a261f0 rastart=df0b rasize=0 rapages=8
<...>-35748 [009] 2507549.022695: submit_ra: (.ra_submit+0x0/0x38) mapping=c0000001bbd17728 ra=c000000191a261f0 rastart=df0c rasize=0 rapages=8
Next we read through the readahead code carefully and found that the readahead size is limited by the memory available on the current NUMA node:

unsigned long max_sane_readahead(unsigned long nr)
{
        return min(nr, (node_page_state(numa_node_id(), NR_INACTIVE_FILE)
                        + node_page_state(numa_node_id(), NR_FREE_PAGES)) / 2);
}
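max_sane_readahead() caps readahead at half of the inactive file pages plus free pages on the node of the CPU doing the read. Both counters can be inspected per node from userspace; a minimal sketch using the standard sysfs node meminfo files:

# Show the per-node counters that max_sane_readahead() depends on.
for n in /sys/devices/system/node/node*/meminfo; do
        echo "== $n =="
        grep -E 'MemFree|Inactive\(file\)' "$n"
done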
Comparing per-node memory on snap1 and snap3 shows that on snap3 node 0 has no memory at all, and therefore no free memory (root cause found :-):

snap1:
# /usr/bin/numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
node 0 size: 8192 MB
node 0 free: 529 MB
node distances:
node   0
  0:  10

snap3:
# /usr/bin/numactl --hardware
available: 2 nodes (0,2)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
node 0 size: 0 MB
node 0 free: 0 MB
node 2 cpus:
node 2 size: 8192 MB
node 2 free: 888 MB
node distances:
node   0   2
  0:  10  40
  2:  40  10

All of snap3's CPUs belong to node 0, which is memoryless, so max_sane_readahead() evaluates to 0 on every CPU and readahead is effectively disabled.
Two patches in the mainline kernel fix this problem, so that readahead no longer depends on the memory of the current CPU's node:

commit 6d2be915e589
mm/readahead.c: fix readahead failure for memoryless NUMA nodes and limit readahead pages

+#define MAX_READAHEAD   ((512*4096)/PAGE_CACHE_SIZE)
 /*
  * Given a desired number of PAGE_CACHE_SIZE readahead pages, return a
  * sensible upper limit.
  */
 unsigned long max_sane_readahead(unsigned long nr)
 {
-       return min(nr, (node_page_state(numa_node_id(), NR_INACTIVE_FILE)
-               + node_page_state(numa_node_id(), NR_FREE_PAGES)) / 2);
+       return min(nr, MAX_READAHEAD);
 }

commit 600e19afc5f8
mm: use only per-device readahead limit
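The first patch replaces the per-node heuristic with a fixed cap: with a 4 KB PAGE_CACHE_SIZE, MAX_READAHEAD = (512*4096)/4096 = 512 pages, i.e. 2 MB, regardless of how much memory the local node has. The second patch, as its title says, then drops the global cap in favour of the per-device readahead limit (read_ahead_kb) alone.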
Note: the kernel code above is based on mainline Linux 3.0.
Source: https://blog.csdn.net/yiyeguzhou100/article/details/117826180