linux – Kernel hang with software RAID-1 after read errors on a single drive
I'm running Fedora 19 (kernel 3.11.3-201.fc19.x86_64) on two identical Seagate 1TB drives, set up as a software RAID-1 (mdadm):
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb3[1] sda3[0]
973827010 blocks super 1.2 [2/2] [UU]
unused devices: <none>
Recently, one of the two drives developed some errors: smartd detected "1 Currently unreadable (pending) sector" and "1 Offline uncorrectable sector". The RAID array "rescheduled" some sectors, and then about a day later the kernel produced a variety of I/O messages/exceptions:
Oct 18 06:39:20 x smartd[461]: Device: /dev/sdb [SAT], 1 Currently unreadable (pending) sectors
Oct 18 06:39:20 x smartd[461]: Device: /dev/sdb [SAT], 1 Offline uncorrectable sectors
...
Oct 18 07:09:20 x smartd[461]: Device: /dev/sdb [SAT], 1 Currently unreadable (pending) sectors
Oct 18 07:09:20 x smartd[461]: Device: /dev/sdb [SAT], 1 Offline uncorrectable sectors
...
Oct 18 07:30:28 x kernel: [467502.192792] md/raid1:md1: sdb3: rescheduling sector 1849689328
Oct 18 07:30:28 x kernel: [467502.192822] md/raid1:md1: sdb3: rescheduling sector 1849689336
Oct 18 07:30:28 x kernel: [467502.192846] md/raid1:md1: sdb3: rescheduling sector 1849689344
Oct 18 07:30:28 x kernel: [467502.192870] md/raid1:md1: sdb3: rescheduling sector 1849689352
Oct 18 07:30:28 x kernel: [467502.192895] md/raid1:md1: sdb3: rescheduling sector 1849689360
Oct 18 07:30:28 x kernel: [467502.192919] md/raid1:md1: sdb3: rescheduling sector 1849689368
Oct 18 07:30:28 x kernel: [467502.192943] md/raid1:md1: sdb3: rescheduling sector 1849689376
Oct 18 07:30:28 x kernel: [467502.192966] md/raid1:md1: sdb3: rescheduling sector 1849689384
Oct 18 07:30:28 x kernel: [467502.192991] md/raid1:md1: sdb3: rescheduling sector 1849689392
Oct 18 07:30:28 x kernel: [467502.193035] md/raid1:md1: sdb3: rescheduling sector 1849689400
...
Oct 19 06:26:08 x kernel: [550035.944191] ata3.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Oct 19 06:26:08 x kernel: [550035.944224] ata3.01: BMDMA stat 0x64
Oct 19 06:26:08 x kernel: [550035.944248] ata3.01: failed command: READ DMA EXT
Oct 19 06:26:08 x kernel: [550035.944274] ata3.01: cmd 25/00:08:15:fb:9c/00:00:6c:00:00/f0 tag 0 dma 4096 in
Oct 19 06:26:08 x kernel: [550035.944274] res 51/40:00:1c:fb:9c/40:00:6c:00:00/10 Emask 0x9 (media error)
Oct 19 06:26:08 x kernel: [550035.944322] ata3.01: status: { DRDY ERR }
Oct 19 06:26:08 x kernel: [550035.944340] ata3.01: error: { UNC }
Oct 19 06:26:08 x kernel: [550036.573438] ata3.00: configured for UDMA/133
Oct 19 06:26:08 x kernel: [550036.621444] ata3.01: configured for UDMA/133
Oct 19 06:26:08 x kernel: [550036.621507] sd 2:0:1:0: [sdb] Unhandled sense code
Oct 19 06:26:08 x kernel: [550036.621516] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550036.621523] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Oct 19 06:26:08 x kernel: [550036.621530] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550036.621537] Sense Key : Medium Error [current] [descriptor]
Oct 19 06:26:08 x kernel: [550036.621555] Descriptor sense data with sense descriptors (in hex):
Oct 19 06:26:08 x kernel: [550036.621562] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
Oct 19 06:26:08 x kernel: [550036.621606] 6c 9c fb 1c
Oct 19 06:26:08 x kernel: [550036.621626] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550036.621638] Add. Sense: Unrecovered read error - auto reallocate failed
Oct 19 06:26:08 x kernel: [550036.621646] sd 2:0:1:0: [sdb] CDB:
Oct 19 06:26:08 x kernel: [550036.621653] Read(10): 28 00 6c 9c fb 15 00 00 08 00
Oct 19 06:26:08 x kernel: [550036.621692] end_request: I/O error, dev sdb, sector 1822227228
Oct 19 06:26:08 x kernel: [550036.621719] raid1_end_read_request: 9 callbacks suppressed
Oct 19 06:26:08 x kernel: [550036.621727] md/raid1:md1: sdb3: rescheduling sector 1816361448
Oct 19 06:26:08 x kernel: [550036.621782] ata3: EH complete
Oct 19 06:26:08 x kernel: [550041.155637] ata3.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Oct 19 06:26:08 x kernel: [550041.155669] ata3.01: BMDMA stat 0x64
Oct 19 06:26:08 x kernel: [550041.155694] ata3.01: failed command: READ DMA EXT
Oct 19 06:26:08 x kernel: [550041.155719] ata3.01: cmd 25/00:08:15:fb:9c/00:00:6c:00:00/f0 tag 0 dma 4096 in
Oct 19 06:26:08 x kernel: [550041.155719] res 51/40:00:1c:fb:9c/40:00:6c:00:00/10 Emask 0x9 (media error)
Oct 19 06:26:08 x kernel: [550041.155767] ata3.01: status: { DRDY ERR }
Oct 19 06:26:08 x kernel: [550041.155785] ata3.01: error: { UNC }
Oct 19 06:26:08 x kernel: [550041.343437] ata3.00: configured for UDMA/133
Oct 19 06:26:08 x kernel: [550041.391438] ata3.01: configured for UDMA/133
Oct 19 06:26:08 x kernel: [550041.391501] sd 2:0:1:0: [sdb] Unhandled sense code
Oct 19 06:26:08 x kernel: [550041.391510] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550041.391517] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Oct 19 06:26:08 x kernel: [550041.391525] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550041.391532] Sense Key : Medium Error [current] [descriptor]
Oct 19 06:26:08 x kernel: [550041.391546] Descriptor sense data with sense descriptors (in hex):
Oct 19 06:26:08 x kernel: [550041.391553] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
Oct 19 06:26:08 x kernel: [550041.391596] 6c 9c fb 1c
Oct 19 06:26:08 x kernel: [550041.391616] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550041.391624] Add. Sense: Unrecovered read error - auto reallocate failed
Oct 19 06:26:08 x kernel: [550041.391636] sd 2:0:1:0: [sdb] CDB:
Oct 19 06:26:08 x kernel: [550041.391643] Read(10): 28 00 6c 9c fb 15 00 00 08 00
Oct 19 06:26:08 x kernel: [550041.391681] end_request: I/O error, dev sdb, sector 1822227228
Oct 19 06:26:08 x kernel: [550041.391737] ata3: EH complete
Oct 19 06:26:08 x kernel: [550041.409686] md/raid1:md1: read error corrected (8 sectors at 1816363496 on sdb3)
Oct 19 06:26:08 x kernel: [550041.409705] handle_read_error: 9 callbacks suppressed
Oct 19 06:26:08 x kernel: [550041.409709] md/raid1:md1: redirecting sector 1816361448 to other mirror: sda3
The machine kept logging entries to syslog for about another hour, and then became completely unresponsive. No kernel oops was recorded in syslog. The machine had to be rebooted, at which point the raid array went into a resync. After the resync everything looked fine and the drive appears to be working normally.
I also noticed that the rescheduled sectors were all exactly 8 sectors apart, which seemed odd to me.
Finally, about a day or two after the reboot, but notably only after the raid resync had completed, the drive reset its unreadable (pending) and offline uncorrectable sector counts, which I assume is normal since the drive took those sectors offline and remapped them:
Oct 20 01:05:42 x kernel: [ 2.186400] md: bind<sda3>
Oct 20 01:05:42 x kernel: [ 2.204826] md: bind<sdb3>
Oct 20 01:05:42 x kernel: [ 2.209618] md: raid1 personality registered for level 1
Oct 20 01:05:42 x kernel: [ 2.210079] md/raid1:md1: not clean -- starting background reconstruction
Oct 20 01:05:42 x kernel: [ 2.210087] md/raid1:md1: active with 2 out of 2 mirrors
Oct 20 01:05:42 x kernel: [ 2.210122] md1: detected capacity change from 0 to 997198858240
Oct 20 01:05:42 x kernel: [ 2.210903] md: resync of RAID array md1
Oct 20 01:05:42 x kernel: [ 2.210911] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Oct 20 01:05:42 x kernel: [ 2.210915] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Oct 20 01:05:42 x kernel: [ 2.210920] md: using 128k window, over a total of 973827010k.
Oct 20 01:05:42 x kernel: [ 2.241676] md1: unknown partition table
...
Oct 20 06:33:10 x kernel: [19672.235467] md: md1: resync done.
...
Oct 21 05:35:50 x smartd[455]: Device: /dev/sdb [SAT], No more Currently unreadable (pending) sectors, warning condition reset after 1 email
Oct 21 05:35:50 x smartd[455]: Device: /dev/sdb [SAT], No more Offline uncorrectable sectors, warning condition reset after 1 email
smartctl -a /dev/sdb now shows:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
...
5 Reallocated_Sector_Ct 0x0033 096 096 036 Pre-fail Always - 195
...
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
...
This raises a few questions:
1) Why would all the rescheduled sectors be exactly 8 apart?
2) Why would the kernel become unresponsive and require a reboot? Isn't this exactly the kind of situation RAID-1 is supposed to handle without the system blowing up?
3) Why would the unreadable and offline uncorrectable counts reset only 23 hours after the raid resync was complete?
Answer:
1) Why would all the rescheduled sectors be exactly 8 apart?
A gap like this between the sector numbers is to be expected; the only question is how large the gap will be (4k or bigger). 8 × 512 bytes is 4k, which is the block size most filesystems use. So the filesystem probably asked the RAID to read 4k, and the RAID asked /dev/sdb for that data. The first sector of that read failed (that's the sector number you see in the log), so the RAID switched to /dev/sda and served the 4k from there. Then the filesystem requested the next 4k, which went back to /dev/sdb and failed again at a sector number 8 higher, which is also what you see in the log…
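The arithmetic behind that 8-sector stride can be sketched directly. This assumes 512-byte logical sectors and the common 4k filesystem block size; on ext filesystems you can confirm the latter with `tune2fs -l /dev/md1 | grep 'Block size'`:

```shell
# Sketch: relate the filesystem block size to the sector stride seen in
# the logs. Assumes 512-byte logical sectors and a 4096-byte filesystem
# block size (the usual ext4 default) -- check your own with tune2fs.
FS_BLOCK_SIZE=4096     # bytes per filesystem block (assumed)
SECTOR_SIZE=512        # bytes per logical sector
STRIDE=$((FS_BLOCK_SIZE / SECTOR_SIZE))
echo "each ${FS_BLOCK_SIZE}-byte read spans ${STRIDE} sectors"
```

With these values the stride comes out to 8, so consecutive failed 4k reads land on sector numbers exactly 8 apart, matching the "rescheduling sector" lines in the log.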
2) Why would the kernel become unresponsive and require a reboot?
This shouldn't normally happen. The problem is that the reallocation case is about the most expensive one you can get: every failed read has to be redirected to the other disk, the sector has to be rewritten on the original disk, and so on. And if this is filling up your log files at the same time, the logging itself generates new write requests, which in turn may hit sectors that need reallocating… In a situation like this it is cheaper to kick the disk out of the array entirely.
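Whether md has already dropped a member is visible in /proc/mdstat: `[UU]` means both mirrors are active, while e.g. `[U_]` means the second member has failed. A minimal check, shown here against a sample status string rather than the live file:

```shell
# Sketch: classify an mdstat status fragment. "[UU]" means both RAID-1
# members are active; anything else (e.g. "[U_]") means the array is
# degraded. Sample input shown; on a real system read /proc/mdstat.
status="[2/1] [U_]"    # sample fragment: one of two members failed
case "$status" in
  *"[UU]"*) state=healthy ;;
  *)        state=degraded ;;
esac
echo "array state: $state"
```

To kick the flaky member out by hand, mdadm's standard cycle is `mdadm /dev/md1 --fail /dev/sdb3` followed by `--remove`, then `--add` again once the drive has been replaced.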
It is also a question of how the rest of the hardware (such as the SATA controller) handles a failing drive. If the controller itself hiccups, the performance impact is even bigger.
Without log entries it is hard to say what exactly happened; this is a weakness of the Linux kernel, and there is no direct solution for preserving the last messages when things really go down.
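One partial mitigation is to ship kernel messages off the machine as they are emitted, so the last lines survive a local hang. A sketch using the kernel's netconsole module; all addresses, ports, interface and MAC values below are placeholders for your own network, not taken from the question's system:

```shell
# Sketch: forward kernel messages over UDP to a remote log host.
# Parameter syntax:
#   netconsole=[src-port]@[src-ip]/[dev],[tgt-port]@[tgt-ip]/[tgt-mac]
# All values below are placeholders -- substitute your own.
modprobe netconsole \
  netconsole=6665@192.168.1.10/eth0,514@192.168.1.20/00:11:22:33:44:55
# on the remote host, capture with e.g.:  nc -u -l 514
```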
3) Why would the unreadable and offline uncorrectable counts reset only 23 hours after the raid resync was complete?
Some values are only updated when offline data collection runs (the UPDATED column shows Offline for those attributes), and that can take a while. Whether the disk runs this automatically, e.g. every four hours, depends on the disk. If you don't want to rely on the disk, you should set it up yourself with smartmontools.
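Concretely, smartmontools can both trigger offline data collection once by hand and keep the automatic variant enabled via smartd. A sketch; the smartd.conf line is an example configuration, not taken from the question's system:

```shell
# Run offline data collection once, by hand (needs root):
smartctl -t offline /dev/sdb

# Or have smartd keep it enabled; in /etc/smartd.conf:
#   /dev/sdb -a -o on -s (S/../.././02)
# -a        monitor all SMART attributes
# -o on     enable the drive's automatic offline data collection
# -s (...)  additionally schedule a short self-test daily at 02:00
```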
Tags: linux, fedora, software-raid, raid1 Source: https://codeday.me/bug/20190813/1649017.html