I am trying to read files in a loop and append them all into one dataset. My code seems to read each file in fine, but the loop is not appending the data to the combined dataframe; instead the result only contains one of the imported datasets (the `final_access_hr` dataframe).
What is wrong with my loop? Why aren't my looped files being appended? My dataframe `access_HR_attestaion` has 77 records, when I am expecting 2639 records, as I am reading in 3 files.
```python
for file in files_path:
    mainframe_access_HR = pd.read_pickle(file)
    mainframe_access_HR = mainframe_access_HR.astype(str)
    if mainframe_access_HR.shape[0]:
        application = mainframe_access_HR['Owner'].unique()[0]
        filtered_attestation_data = attestation_data[attestation_data['cleaned_MAL_CODE'] == application]
        final_access_hr = pd.DataFrame()
        column_list = pd.DataFrame(['HRACF2'])
        for column in range(len(column_list)):
            mainframe_access_HR_new = mainframe_access_HR.copy()
            # Drop rows containing NaN in column c_ACF2_ID for the new merge
            mainframe_access_HR_new.dropna(subset=[column_list.iloc[column, 0]], inplace=True)
            # Create a new column for the merge
            mainframe_access_HR_new['ID'] = mainframe_access_HR_new[column_list.iloc[column, 0]]
            # Case folding
            mainframe_access_HR_new['ID'] = mainframe_access_HR_new['ID'].str.strip().str.upper()
            # Merge data
            merged_data = pd.merge(filtered_attestation_data, mainframe_access_HR_new,
                                   how='right', left_on=['a', 'b'], right_on=['a', 'b'])
            # Concatenating all data together
            final_access_hr = final_access_hr.append(merged_data)

# Remove duplicates
access_HR_attestaion = final_access_hr.drop_duplicates()

pd.concat(list_of_dataframes, axis=0)
```
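For reference, this is the accumulate-then-concatenate pattern I am aiming for, as a minimal sketch with dummy in-memory dataframes standing in for my pickle files (the `frames` list and the column values here are made up for illustration):

```python
import pandas as pd

# Accumulator list created ONCE, before the loop starts.
frames = []

# Stand-in for the per-file read loop: three dummy frames of 2 rows each.
for i in range(3):
    df = pd.DataFrame({'Owner': [f'app{i}'] * 2, 'HRACF2': ['x', 'y']})
    frames.append(df)

# Single concat at the end combines every per-file frame.
combined = pd.concat(frames, axis=0, ignore_index=True)
```

With this pattern, `combined` should hold all rows from all files (here 6 rows from 3 dummy frames of 2 rows each), which is the behaviour I expected from my loop above.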