This problem has already been discussed here, but no consensus was reached on the topic.

I have some thoughts on how an insert operation could be implemented for some popular file systems. If the FS has an extent-based structure (e.g. ext4, NTFS, probably btrfs), we could use that structure to make modification of a file's middle parts independent of its other parts. I suppose this would require tracking the length of each such part independently of the others, but in some situations the advantage could be dramatic. From my own experience, I have sometimes run into the problem of slow processing of one big file, so the functionality may well be in demand. And I am not even mentioning databases here, which have always needed such functionality.

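Below is a minimal, purely illustrative sketch of the idea in C: a file is represented as an ordered list of extents, and inserting data in the middle only splits the extent covering the insert point and adds one new extent for the freshly allocated blocks, so no existing data is rewritten and only the small extent table changes. The structures and names (`extent`, `file_map`, `insert_run`) are mine for illustration, not how ext4, NTFS, or btrfs actually store their metadata.

```c
#include <stdio.h>
#include <string.h>

struct extent {
    unsigned long phys;   /* first physical block of this run */
    unsigned long len;    /* run length, in blocks            */
};

struct file_map {
    struct extent ext[16];  /* fixed size, enough for the demo */
    size_t count;
};

/* Insert a new run of `new_len` blocks, located at physical block
 * `new_phys`, so that it starts at logical block `pos` of the file. */
static void insert_run(struct file_map *m, unsigned long pos,
                       unsigned long new_phys, unsigned long new_len)
{
    size_t i = 0;
    unsigned long logical = 0;

    /* 1. find the extent that contains logical block `pos` */
    while (i < m->count && logical + m->ext[i].len <= pos) {
        logical += m->ext[i].len;
        i++;
    }

    unsigned long head = pos - logical;                       /* blocks kept before the insert */
    unsigned long tail = (i < m->count) ? m->ext[i].len - head : 0;

    if (head > 0 && tail > 0) {
        /* 2a. split ext[i] into head and tail parts and place the new
         *     run between them (the table grows by two entries)      */
        memmove(&m->ext[i + 3], &m->ext[i + 1],
                (m->count - i - 1) * sizeof(struct extent));
        m->ext[i].len = head;
        m->ext[i + 1] = (struct extent){ new_phys, new_len };
        m->ext[i + 2] = (struct extent){ m->ext[i].phys + head, tail };
        m->count += 2;
    } else {
        /* 2b. insert point falls exactly on an extent boundary (or at
         *     EOF): just slide the table and drop the new run in      */
        memmove(&m->ext[i + 1], &m->ext[i],
                (m->count - i) * sizeof(struct extent));
        m->ext[i] = (struct extent){ new_phys, new_len };
        m->count += 1;
    }
}

int main(void)
{
    /* a file of 100 blocks stored as one contiguous run at block 1000 */
    struct file_map m = { .ext = { { 1000, 100 } }, .count = 1 };

    /* insert 8 new blocks (allocated at block 5000) at logical block 40 */
    insert_run(&m, 40, 5000, 8);

    for (size_t i = 0; i < m.count; i++)
        printf("extent %zu: phys=%lu len=%lu\n", i, m.ext[i].phys, m.ext[i].len);
    return 0;
}
```
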
A good use case would be distributing read/write operations across multiple disks. This is quite relevant for modern multi-disk (often SSD-based), multi-core, multi-threaded SMP (or even NUMA) systems.

I have already taken a look at MPI-IO (part of MPI-2). It offers something similar (especially regarding parallel processing), but it does not provide the dynamic file-resizing capability that I propose to introduce.

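For comparison, here is a hedged sketch of the kind of parallel access MPI-IO already provides: every rank writes its own chunk of one shared file at an independent, non-overlapping offset, with no explicit locking in the program. The file name and chunk size are arbitrary placeholders.

```c
#include <mpi.h>
#include <string.h>

#define CHUNK 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[CHUNK];
    memset(buf, 'A' + (rank % 26), CHUNK);   /* each rank writes its own pattern */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* independent offsets: rank r owns bytes [r*CHUNK, (r+1)*CHUNK) */
    MPI_File_write_at(fh, (MPI_Offset)rank * CHUNK, buf, CHUNK, MPI_CHAR,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```
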
I would like your opinions on this topic: what drawbacks or shortcomings could arise when trying to implement such a feature? One such drawback could be odd, irregular lengths of contiguous file blocks, which could break memory-mapping mechanisms, for example (see the sketch after the list below). I just want to point out that someday such functionality will be implemented, because:

  1. Data volumes grow rapidly, and so do files.
  2. Parallel processing is already a modern reality, and there are still no other good ways to keep improving performance.
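
As a point of reference for the block-length problem mentioned above: recent Linux kernels already expose a restricted form of mid-file insertion on ext4 and XFS through fallocate(2) with FALLOC_FL_INSERT_RANGE, and they sidestep irregular fragment lengths by requiring the offset and length to be multiples of the filesystem block size. A minimal sketch, assuming a 4 KiB block size and a placeholder file name:

```c
#define _GNU_SOURCE         /* FALLOC_FL_INSERT_RANGE via <fcntl.h> on recent glibc;
                               older systems may need <linux/falloc.h> as well */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_RDWR);    /* placeholder file name */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    off_t blk = 4096;                        /* assumed filesystem block size */

    /* shift everything from offset 8*blk onward further out by 4*blk,
     * opening a block-aligned gap that can then be written normally */
    if (fallocate(fd, FALLOC_FL_INSERT_RANGE, 8 * blk, 4 * blk) < 0)
        perror("fallocate(FALLOC_FL_INSERT_RANGE)");

    close(fd);
    return EXIT_SUCCESS;
}
```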
