
Lately, I have been getting Buffer I/O errors at boot time. They appear randomly and I have not been able to reproduce them on demand, but on average they happen once every 5-8 boots. Here is the relevant part of dmesg:

[   24.630158] blk_update_request: I/O error, dev loop5, sector 0
[   24.630199] blk_update_request: I/O error, dev loop5, sector 0
[   24.630229] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630272] blk_update_request: I/O error, dev loop5, sector 0
[   24.630301] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630340] blk_update_request: I/O error, dev loop5, sector 0
[   24.630369] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630411] blk_update_request: I/O error, dev loop5, sector 0
[   24.630454] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630494] blk_update_request: I/O error, dev loop5, sector 0
[   24.630522] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630566] blk_update_request: I/O error, dev loop5, sector 0
[   24.630593] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630634] blk_update_request: I/O error, dev loop5, sector 0
[   24.630661] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630698] blk_update_request: I/O error, dev loop5, sector 0
[   24.630726] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630763] blk_update_request: I/O error, dev loop5, sector 0
[   24.630791] Buffer I/O error on dev loop5, logical block 0, async page read
[   24.630832] Buffer I/O error on dev loop5, logical block 0, async page read
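For reference, the failing sector can be re-read by hand after boot to see whether the error persists (a minimal sketch; losetup shows which file backs the device):

# Show the backing file for the loop device
$ sudo losetup -l /dev/loop5

# Re-read sector 0 (the sector from the errors above)
$ sudo dd if=/dev/loop5 of=/dev/null bs=512 count=1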

This loop device is generated by snap; here is the lsblk output:

$ lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0                        7:0    0  17.3M  1 loop  /snap/node/1767
loop1                        7:1    0    91M  1 loop  /snap/core/6350
loop2                        7:2    0  91.1M  1 loop  /snap/core/6259
loop3                        7:3    0    91M  1 loop  /snap/core/6405
loop4                        7:4    0  17.3M  1 loop  /snap/node/1729
loop5                        7:5    0  17.3M  1 loop  /snap/node/1793
sda                          8:0    0 238.5G  0 disk  
├─sda1                       8:1    0   524M  0 part  /boot/efi
├─sda2                       8:2    0   954M  0 part  /boot
└─sda3                       8:3    0   237G  0 part  
  └─sda3_crypt             254:0    0   237G  0 crypt 
    ├─xxx--vg-root--lv 254:1    0 229.1G  0 lvm   /
    └─xxx--vg-swap--lv 254:2    0   7.9G  0 lvm   [SWAP]

The OS is Debian 9 with kernel version 4.9.0-8-amd64 and snapd version 2.37.2.
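Since the error only shows up every few boots, the kernel log of earlier boots can be checked as well (a sketch; this assumes a persistent journal, which on Debian 9 may require Storage=persistent in /etc/systemd/journald.conf):

# Kernel messages from the previous boot, filtered for the loop errors
$ journalctl -k -b -1 | grep -E 'loop5|Buffer I/O'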

Edit: Per @frostschutz's comment, below is the df -h output. What I see is that the loop devices created by snap always report size equal to used (100% use), which looks expected for read-only images, and there is plenty of free space on /dev/mapper/xxx--vg-root--lv (where / is mounted), so I don't think a full filesystem is the problem.

$ df -h
Filesystem                        Size  Used Avail Use% Mounted on
udev                              3.9G     0  3.9G   0% /dev
tmpfs                             788M  9.5M  778M   2% /run
/dev/mapper/xxx--vg-root--lv  225G  123G   91G  58% /
tmpfs                             3.9G   23M  3.9G   1% /dev/shm
tmpfs                             5.0M  4.0K  5.0M   1% /run/lock
tmpfs                             3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/loop0                         18M   18M     0 100% /snap/node/1767
/dev/loop1                         91M   91M     0 100% /snap/core/6350
/dev/loop2                         92M   92M     0 100% /snap/core/6259
/dev/loop3                         91M   91M     0 100% /snap/core/6405
/dev/loop4                         18M   18M     0 100% /snap/node/1729
/dev/sda2                         939M   70M  823M   8% /boot
/dev/sda1                         523M  132K  523M   1% /boot/efi
tmpfs                             788M   20K  788M   1% /run/user/118
tmpfs                             788M   40K  788M   1% /run/user/1000
/dev/loop5                         18M   18M     0 100% /snap/node/1793
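The image behind loop5 can also be checked directly (a sketch; the path under /var/lib/snapd/snaps is an assumption based on the /snap/node/1793 mount point above):

# Read the snap file from disk end to end
$ sudo dd if=/var/lib/snapd/snaps/node_1793.snap of=/dev/null bs=1M

# Print the squashfs superblock to verify the image header
$ unsquashfs -s /var/lib/snapd/snaps/node_1793.snap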

Comments:
  • I don't use snap myself, but maybe the backing filesystem is full? Check df -h for available free space. – frostschutz, Mar 11, 2019 at 14:13
  • I updated the question to include the output, @frostschutz, but that does not seem to be the problem to me. – Mar 11, 2019 at 14:49
