31 events
when | what | by | license | comment
Oct 19 at 14:41 history edited Jeff Schaller CC BY-SA 4.0
added 1 character in body
Oct 11 at 12:29 vote accept Neros
Oct 11 at 12:29 answer added Neros timeline score: 1
Oct 8 at 13:53 history became hot network question
Oct 8 at 12:34 comment added darxmurf no, it's a RAID command to tell the RAID controller or software RAID to patrol the disks and see if everything is clean in the replication.
Oct 8 at 10:49 history edited Neros CC BY-SA 4.0
added 555 characters in body
Oct 8 at 9:19 comment added Neros @darxmurf I'm not sure what you mean by that; there is no function like that in the provided utility, it only lets you configure the RAID and see the current status (normal). If you are talking about something like chkdsk, I have already run that and it found no issues.
Oct 8 at 9:09 answer added Tom Yan timeline score: 5
Oct 8 at 9:06 comment added darxmurf @Neros can you run a "consistency check" from your Windows utility? If yes, run it.
Oct 8 at 8:52 comment added Neros @darxmurf Nothing interesting; the utility is for Windows, and everything looks fine and works perfectly on Windows.
Oct 8 at 8:29 comment added darxmurf @Neros good to know, what does the utility say about the disks?
Oct 8 at 8:26 comment added Neros @darxmurf it is 4x10TB in RAID 5; there is a RAID utility provided by the manufacturer, which is how I set up that configuration after installing the disks.
Oct 8 at 8:15 comment added darxmurf How many disks do you have in your device? If 2 and the storage is around 30TB, it's RAID 0, which is not the thing to do to keep your files safe; if 3x16TB, it's potentially RAID 5, which is better, except if one of the 3 disks is already dead. Do you have software to check the configuration of your device? And I assume you don't have any backup of your files?
Oct 8 at 8:04 comment added Neros It's just reporting a bad number; it looked fine in the disk manager in Win10.
Oct 8 at 7:49 comment added Tom Yan Curious btw: if it "was saying that crazy number ever since it was first setup", how did you manage to create the partition that just fits the real capacity?
Oct 8 at 7:35 comment added Neros The enclosure is ORICO-NS400RU3, and it sounds like I should contact the manufacturer about this...
Oct 8 at 7:31 comment added darxmurf To be honest it sounds pretty bad. What kind of RAID device is it? I remember seeing those fancy LaCie Big Disk units made of 2 disks in RAID 0, which is probably the dumbest design ever.
Oct 8 at 7:30 comment added Tom Yan Re "was saying that crazy number ever since it was first setup": in that case, you can try moving the backup partition table in gdisk (go to the expert menu with x, then k), then enter 58598420480 as the new starting location. (I do not guarantee this is entirely safe for your data, given that your enclosure is flawed, if not broken.)
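A sketch of that gdisk session, assuming /dev/sdb as in the question and a gdisk recent enough to have the k option; the prompt wording is approximate, and nothing touches the disk until w:

    sudo gdisk /dev/sdb
    Command (? for help): x            # enter the experts' menu
    Expert command (? for help): k     # move the backup partition table
    # at the prompt for the new starting location, enter: 58598420480
    Expert command (? for help): w     # write changes (only once you're sure)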
Oct 8 at 7:21 comment added Neros I have no idea how to fix that without very likely ruining the file system. Also, it was saying that crazy number ever since it was first set up, and it WAS working on Arch like this; the only thing that has changed is that the UUID has gone missing and it's not mounting, either from fstab or from manual mount calls.
Oct 8 at 7:17 comment added Tom Yan @Neros The warning about the backup GPT is probably a red herring. What you really need to fix is "Disk /dev/sdb: 115.45 PiB, 129986248068418560 bytes, 253879390758630 sectors". This number most likely doesn't come from any data or metadata (such as the partition table) stored on the drives (although I don't know for sure how this sort of USB RAID works behind the scenes), but from the USB bridge chip (RAID controller). (Which is why I suggested checking with sg_readcap to see whether this is really more of a hardware thing.)
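A minimal sketch of that check (sg_readcap is part of sg3_utils; the device node follows the question). sg_readcap -l issues READ CAPACITY(16) straight at the device, so if the 115 PiB figure shows up here too, the bogus size is coming from the bridge chip rather than from anything stored on the disks:

    sudo pacman -S --needed sg3_utils   # package name on Arch; an assumption for other distros
    sudo sg_readcap -l /dev/sdb         # READ CAPACITY (16): last LBA, block size, total bytes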
Oct 8 at 7:06 comment added Neros Sorry, typo: it's 4x10TB drives in a RAID 5 setup, so it's about 27TB of actual storage space. It shows up and works perfectly on Win10, and it WAS working fine on Arch until very recently. I am very worried about using gdisk to mess with the partitions, because I don't know if the commands there are going to kill the data; it seems like they shouldn't, but I don't know what I'm doing and there's no "repair" function. I think I should go into the "r" menu and use the main table to rebuild the backup, does that sound right?
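For reference, that lives in gdisk's recovery and transformation menu; a sketch, again assuming /dev/sdb (double-check the letters against the ? help on your gdisk version, and don't write while the reported disk size is still wrong):

    sudo gdisk /dev/sdb
    Command (? for help): r                            # recovery and transformation menu
    Recovery/transformation command (? for help): d    # use main GPT header (rebuilds backup header)
    Recovery/transformation command (? for help): e    # load main partition table (rebuilds backup table)
    Recovery/transformation command (? for help): w    # write to disk and exit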
Oct 8 at 6:59 comment added Max Power Oh, I should add: it may be better to shuck the drives and dd clone them individually rather than trust the enclosure.
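A sketch of that per-disk clone, assuming the shucked members appear as /dev/sdc through /dev/sdf (placeholder names) and /mnt/backup has room for four 10TB images:

    # image each member individually; reading alone does not modify the RAID metadata
    for d in sdc sdd sde sdf; do
        sudo dd if=/dev/"$d" of=/mnt/backup/"$d".img conv=fsync bs=4M status=progress
    done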
Oct 8 at 6:56 history edited Neros CC BY-SA 4.0
edited body
Oct 8 at 6:56 comment added frostschutz I also want a 100-petabyte drive... if it gets detected like that in Linux, you probably can't trust anything the drive gives you in that state. If it works in Windows, do your recovery there. If this is a RAID, recover the individual drives.
Oct 8 at 6:54 comment added Max Power Attempt the repair; if it goes wrong, do a full drive health check, make a new partition label, and copy in your backups. If you don't have a backup >:[ start by doing dd if=/dev/sdb of=/muhBackup conv=fsync bs=4M status=progress; this will block-for-block clone the whole 10T volume: partition label, filesystem superblocks, and all. conv=fsync just means it won't claim to be done until the data is written (not while still half buffered in memory). bs= is the read and write block size (the default is 512 B, which is a lot of read-request overhead; 4M is still small enough to work well with drive caches).
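The same command annotated (with the status= typo fixed); /muhBackup is the destination path from the comment and must live on a different filesystem with at least as much free space as the array:

    # block-for-block image: partition label, filesystem superblocks, and all
    # conv=fsync        flush to disk before dd reports completion
    # bs=4M             larger blocks cut per-request overhead (default is 512 B)
    # status=progress   live byte count while it runs
    sudo dd if=/dev/sdb of=/muhBackup conv=fsync bs=4M status=progress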
Oct 8 at 6:53 comment added Tom Yan Never quite seen anything like this before. It's almost like the enclosure has gone haywire. Re "~10TB RAID": which mode of RAID? Is 10TB the expected RAID capacity or the size of each drive? Does the size of 27.3TiB (i.e. 30TB) make any sense to you? Do you mind installing sg3_utils and adding the output of sg_readcap -l /dev/sdb?
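On that capacity question: a 4-drive RAID 5 gives the capacity of 3 drives (one drive's worth goes to parity), and drive vendors count decimal TB while fdisk reports binary TiB, so 30 TB coming out as 27.3 TiB is expected. A quick check:

    # RAID 5 usable space = (number of drives - 1) x drive size
    # 3 x 10 TB = 30e12 bytes; fdisk and friends report TiB (2^40 bytes)
    awk 'BEGIN { printf "%.1f TiB\n", 3 * 10e12 / 2^40 }'    # prints 27.3 TiB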
Oct 8 at 6:32 history edited Neros CC BY-SA 4.0
added 1743 characters in body
Oct 8 at 5:45 comment added Tom Yan You should check the kernel messages (dmesg / journalctl -k) after plugging in the drive. It sounds like somehow the kernel is ignoring the partition table. Can you add the full output of fdisk -l /dev/sdb? (Maybe also see if opening the drive with gdisk instead gives you some hint about what's wrong.)
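Collected as commands, assuming the array is /dev/sdb as in the question:

    sudo dmesg | tail -n 50        # kernel messages right after plugging the drive in
    journalctl -k -b               # same messages via the journal for the current boot
    sudo fdisk -l /dev/sdb         # full partition listing
    sudo gdisk -l /dev/sdb         # GPT-aware view; reports damaged or mismatched tables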
Oct 8 at 4:22 comment added Neros It is GPT. sdb doesn't show up at all in blkid, so the grep result is empty. I already ran that mount command twice, once with the type specified and once without; neither works now (when previously the typed one would work). I'm not sure what I'm looking at in those 2 directories; should there be entries for ntfs in both? I don't see ntfs in proc, and I see ntfs3 in the kernel one.
Oct 8 at 4:07 comment added Max Power Is this a GPT or MBR partition label? What lines do you get from sudo blkid | grep -i "microsoft"? Also tell it the fs type, as ntfs is an oddball: sudo mount -t ntfs /dev/sdb2 /mnt/test. See /proc/filesystems and /lib/modules/$(uname -r)/kernel/fs for a complete list of the filesystems.
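Those checks as commands, assuming /dev/sdb2 is the NTFS data partition and /mnt/test exists:

    sudo blkid | grep -i "microsoft"          # find partitions typed as Microsoft/NTFS
    sudo mount -t ntfs /dev/sdb2 /mnt/test    # name the fs type instead of autodetecting
    cat /proc/filesystems                     # filesystems the running kernel knows about now
    ls /lib/modules/"$(uname -r)"/kernel/fs   # filesystem drivers available as modules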
Oct 8 at 2:47 history asked Neros CC BY-SA 4.0