
I have a 1505 MB text file containing float data. The file has about 73000 rows and 1500 columns. I would like to read the contents of the file into a NumPy array and then perform some analysis on the array, but my machine has been getting slow using numpy.loadtxt to read the file. What is the fastest way to read this file into an array using Python?

  • You say "getting slow". How slow are we talking here? And how much memory are you working with? Commented Apr 5, 2016 at 0:13
  • Is it a sparse matrix? Commented Apr 5, 2016 at 0:13
  • @user2357112 I have four CPUs on my machine and all four hit 100% usage; basically I could not use my machine for anything else. Commented Apr 5, 2016 at 0:15
  • Check stackoverflow.com/questions/15096269/… (using pandas.read_csv with space as the separator) Commented Apr 5, 2016 at 0:17
  • @ChrisP The file contains probability distributions for around 73000 objects. I don't know how sparse it is. Commented Apr 5, 2016 at 0:17

2 Answers


You can also use the pandas reader, which is optimized:

In [3]: savetxt('data.txt',rand(10000,100))

In [4]: %time u=loadtxt('data.txt')
Wall time: 7.21 s

In [5]: %time u= read_large_txt('data.txt',' ')
Wall time: 3.45 s

In [6]: %time u=pd.read_csv('data.txt',' ',header=None).values
Wall time: 1.41 s
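
Spelled out as a standalone script, the pandas route might look like this (a sketch; the file name and whitespace delimiter are assumptions based on the question):

import numpy as np
import pandas as pd

# 'data.txt' stands in for the whitespace-delimited file from the question.
# delim_whitespace=True treats any run of spaces as one separator, and
# header=None keeps the first data row from being consumed as column names.
df = pd.read_csv('data.txt', delim_whitespace=True, header=None, dtype=np.float64)

# .values hands back the underlying NumPy array for further analysis.
arr = df.values
print(arr.shape, arr.dtype)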



The following function preallocates exactly the amount of memory needed to read a text file.

import numpy as np

def read_large_txt(path, delimiter=None, dtype=float):
    with open(path) as f:
        nrows = sum(1 for _ in f)              # first pass: count the rows
        f.seek(0)
        ncols = len(next(f).split(delimiter))  # peek at the first row for the column count
        out = np.empty((nrows, ncols), dtype=dtype)
        f.seek(0)
        for i, line in enumerate(f):
            # NumPy converts the list of strings to `dtype` on assignment
            out[i] = line.split(delimiter)
    return out

It allocates the memory up front by determining the number of rows, the number of columns, and the data type beforehand. You could easily add some of the extra arguments found in np.loadtxt or np.genfromtxt, such as skiprows, usecols, and so forth.
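
For instance, a hypothetical call on the file from the question (the file name and delimiter are assumptions, not part of the original answer):

data = read_large_txt('data.txt', delimiter=' ', dtype=np.float64)
print(data.shape)   # expected (73000, 1500) if every row has 1500 fields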

Important:

As @Evert observed, out[i] = line.split(delimiter) looks wrong, since line.split returns a list of strings. However, NumPy converts those strings to the array's dtype on assignment, so no additional handling of data types is needed here. There are some limits, though: a token that cannot be parsed as the target dtype raises a ValueError.
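
A quick illustration of that conversion behavior (a minimal sketch, not part of the original answer):

import numpy as np

out = np.empty((1, 3), dtype=np.float64)
out[0] = '1.5 2.0 3.25'.split()   # strings are coerced to float64 on assignment
print(out[0])                     # [1.5  2.   3.25]
# Limit: a non-numeric token, e.g. 'abc', raises ValueError instead.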

Comments

There's no cast to the datatype: line.split returns a list of strings, so you'll want to cast that to a 1D NumPy array of the right dtype first.
@Evert believe me, it works. NumPy is probably doing the conversion while assigning the values to the array.
That, in a sense, scares the hell out of me: can it break, and when (under what conditions)? Is this behaviour documented somewhere?
@Dalek: it shouldn't be too hard to modify that function to ignore those lines, right?
