I have a directory with a lot of files. Each file follows the same naming pattern <id>_data_<date>.csv. I want to delete all files but keep the newest one per <id>.
Example directory:
10020077_data_2017-07-18_001.csv
10020078_data_2017-07-18_001.csv
10020209_data_2019-04-23_001.csv
10020209_data_2019-04-24_001.csv
10020209_data_2019-04-25_001.csv
10020209_data_2019-04-26_001.csv
10020209_data_2019-04-27_001.csv
10020209_data_2019-04-28_001.csv
10020272_data_2019-04-23_001.csv
10020272_data_2019-04-24_001.csv
10020272_data_2019-04-25_001.csv
10020272_data_2019-04-26_001.csv
10020272_data_2019-04-27_001.csv
10020272_data_2019-04-28_001.csv
10020286_data_2019-04-23_001.csv
Expected result:
10020077_data_2017-07-18_001.csv
10020078_data_2017-07-18_001.csv
10020209_data_2019-04-23_001.csv <-- delete
10020209_data_2019-04-24_001.csv <-- delete
10020209_data_2019-04-25_001.csv <-- delete
10020209_data_2019-04-26_001.csv <-- delete 
10020209_data_2019-04-27_001.csv <-- delete
10020209_data_2019-04-28_001.csv
10020272_data_2019-04-23_001.csv <-- delete
10020272_data_2019-04-24_001.csv <-- delete
10020272_data_2019-04-25_001.csv <-- delete
10020272_data_2019-04-26_001.csv <-- delete
10020272_data_2019-04-27_001.csv <-- delete
10020272_data_2019-04-28_001.csv
10020286_data_2019-04-23_001.csv
In this case I cannot work with find -mtime, because some IDs get new files daily while others get one only once a month, or sometimes only once a year.
My idea was to group the filenames per ID and then keep the last item of each group. How can I solve this with bash?
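Something along these lines is what I had in mind, but I am not sure it is the right approach. This is only a rough sketch, assuming bash 4+ (for associative arrays), that the files are in the current directory, and that the ISO date in the name makes plain lexical comparison equivalent to date comparison:

    #!/usr/bin/env bash
    # Rough sketch: keep only the lexically greatest (= newest-dated) file per id.
    shopt -s nullglob
    declare -A newest                    # id -> newest filename seen so far

    for f in *_data_*.csv; do
        id=${f%%_*}                      # everything before the first "_"
        # remember the lexically larger name, i.e. the newer date
        [[ $f > ${newest[$id]:-} ]] && newest[$id]=$f
    done

    for f in *_data_*.csv; do
        id=${f%%_*}
        # delete every file that is not the newest one for its id
        [[ $f == "${newest[$id]}" ]] || echo rm -- "$f"   # drop "echo" once verified
    done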


"My idea was to group the filenames per id and then to keep the last item" - so how will you know whether 10020209_data_2019-04-23_001.csv or 10020209_data_2019-04-28_001.csv is the last file? Maybe you mean keeping, for each ID, the file with the newest date in its name?