Imme22009

As discussed, ext/ext2/3/4 are very much the same, all derived from UFS/FFS (see also 1 and 2). They organize data in the same way in terms of inodes, blocks, and pointers (indirect blocks etc.), all of which are common to FFS, the first filesystem for Unix. Linux is a derivative of Unix, and ext is essentially FFS with some additional features. The Linux filesystem API is built into the kernel and is compatible with all versions of ext.

In terms of recreating the inode table: the core of the filesystem is a list of pointers to the blocks containing each file's data. Without this list, the blocks corresponding to each file can only be guessed. Ext4 in particular is good at allocating contiguous blocks (it incorporates a 'multiple block allocation', or mballoc, feature), so this guess may be fine. But for large files, where there may be fragmentation, the task becomes difficult, and as far as I know there is no tool that can reconstruct a file from fragmented blocks. I would reiterate that ext2 is also good at allocating contiguous blocks (the blocks of a given file are almost always close together); you may read about the low-level structure of ext2/3/4 here. In the case of fragmentation you would have to locate the blocks manually, which for a large file would be extremely difficult, and larger files are more likely to be fragmented.
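To illustrate why contiguous allocation makes inode-less recovery plausible, here is a minimal toy sketch (not ext4 code; the block size, disk, and function names are all illustrative): if a file's blocks are contiguous, a start block and a length are enough to recover it, which is exactly the guess described above.

```python
# Toy "disk" of fixed-size blocks; a non-fragmented file is recovered
# knowing only its start block and length -- no inode table needed.
BLOCK_SIZE = 4096

def make_disk(num_blocks):
    """A toy disk image: one bytearray, addressed in BLOCK_SIZE chunks."""
    return bytearray(num_blocks * BLOCK_SIZE)

def write_contiguous(disk, start_block, data):
    """Write `data` starting at `start_block`, like a non-fragmented file."""
    offset = start_block * BLOCK_SIZE
    disk[offset:offset + len(data)] = data

def recover_contiguous(disk, start_block, length):
    """Recover a file assuming its blocks are contiguous."""
    offset = start_block * BLOCK_SIZE
    return bytes(disk[offset:offset + length])

disk = make_disk(64)
original = b"hello ext4" * 1000          # ~10 KB file, spans 3 blocks
write_contiguous(disk, start_block=5, data=original)

# With contiguous allocation, start block + length fully determine the file.
recovered = recover_contiguous(disk, start_block=5, length=len(original))
assert recovered == original
```

A fragmented file breaks this: the blocks would be scattered, and without the inode's pointer list there is no way to know their order.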

The metadata (the inode corresponding to each file) is as important as the data itself. Without the metadata your data can no longer be considered present, even if it is still there unchanged on the disk: strictly speaking there is no way to reconstruct the blocks in order, so a large fragmented file is as good as lost without the inode table.

The only option is to use a data carving tool (such as PhotoRec or foremost) alongside your own heuristics about what the data consists of, which may help reconstruct your files.
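The core idea behind such carving tools can be sketched as a signature scan: look for a known header and footer in the raw bytes and cut out everything in between. This is a simplified illustration, not PhotoRec's actual algorithm; the JPEG markers are real (SOI `0xFFD8`, EOI `0xFFD9`), but the "disk image" here is synthetic.

```python
# Minimal signature-based carving sketch: find JPEG start/end markers
# in a raw byte stream and extract the candidate files between them.
JPEG_SOI = b"\xff\xd8\xff"   # start-of-image marker prefix
JPEG_EOI = b"\xff\xd9"       # end-of-image marker

def carve_jpegs(raw):
    """Return every byte range that looks like a complete JPEG."""
    found = []
    pos = 0
    while True:
        start = raw.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = raw.find(JPEG_EOI, start)
        if end == -1:
            break
        found.append(raw[start:end + len(JPEG_EOI)])
        pos = end + len(JPEG_EOI)
    return found

# A fake disk image: junk, one "JPEG", more junk.
fake_jpeg = JPEG_SOI + b"\x00" * 100 + JPEG_EOI
image = b"\x11" * 50 + fake_jpeg + b"\x22" * 50

assert carve_jpegs(image) == [fake_jpeg]
```

Note the limitation this makes obvious: carving recovers contiguous byte runs with recognizable signatures, so fragmented files or formats without clear markers still need manual heuristics.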

Given how important the metadata is and how easily it can be corrupted, you may consider regularly scheduling e2image, which backs up only the metadata and is therefore a fraction of the size of a full backup. Of course, if the filesystem changes after the backup, the saved metadata will become stale.
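One possible way to schedule this is a cron entry; this is a hedged sketch, and the device name, output path, and schedule are placeholders you would adapt to your system:

```shell
# Hypothetical crontab entry: nightly metadata-only image of /dev/sda1 at 03:00.
# e2image <device> <file> saves filesystem metadata; `e2image -I` restores it.
# Adjust device, destination, and schedule for your setup.
0 3 * * * /sbin/e2image /dev/sda1 /backup/sda1-metadata.e2i
```

Restoring the image with `e2image -I` onto a changed filesystem is dangerous for the reason given above: the saved block pointers no longer match the data on disk.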
