I could only find topics about reading multiple txt files into one single DataFrame, but I want to store each file as a separate DataFrame (df1, df2, ...) and later concatenate them into one DataFrame. Is there a fast way to do this? Better: what is the fastest way? Speed is a big point for me. The file names cannot be hard-coded: they have the format year.month.day.hour.minute.second, with no .txt extension to match on. Thank you in advance.
Right now I am just reading everything into one DataFrame:
import glob
import numpy as np
import pandas as pd

all_data = pd.DataFrame()  # accumulator for the combined data
for f in glob.glob("path_in_dir"):
    df = pd.read_table(f, delim_whitespace=True,
                       names=('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'),
                       dtype={'A': np.float32, 'B': np.float32, 'C': np.float32,
                              'D': np.float32, 'E': np.float32, 'F': np.float32,
                              'G': np.float32, 'H': np.float32})
    all_data = all_data.append(df, ignore_index=True)  # copies all_data on every pass
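For reference, here is a minimal sketch of the read-then-concat-once approach. It keeps each file as its own DataFrame in a dict keyed by the timestamp-style file name (easier to work with than numbered variables df1, df2, ...) and builds the combined frame with a single pd.concat. The pattern "path_in_dir/*" is a placeholder standing in for your actual glob; the column names and dtypes are taken from your snippet.

import glob
import os
import numpy as np
import pandas as pd

cols = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H')
dtypes = {c: np.float32 for c in cols}

# One DataFrame per file, keyed by its timestamp-style name
frames = {os.path.basename(f): pd.read_table(f, delim_whitespace=True,
                                             names=cols, dtype=dtypes)
          for f in glob.glob("path_in_dir/*")}

# A single concat at the end is much faster than appending inside the
# loop, because each append copies the accumulated data all over again.
all_data = pd.concat(frames.values(), ignore_index=True)

With this layout you can still grab an individual file's DataFrame via frames["2017.01.05.12.30.00"] (hypothetical name) before or after the concat.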