
I tried to rm -rf a folder, and got "device or resource busy".

In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock this" method, and not complete articles like this one. Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)

  • Thanks, this was handy - I was coming from Linux to Windows and was looking for the equivalent of lsof - LockHunter. Commented Sep 4, 2013 at 2:28
  • What the hell? Unix does not prevent you from deleting open files like Windows does. This is why you can delete your whole system by running rm -rf /... it will happily delete every single file, including /bin/rm. Commented Oct 10, 2014 at 15:35
  • @psusi, that is incorrect. You either have a bad source of information or are just making stuff up. Linux, like Windows, has file and device locking. It's kind of broken, though. 0pointer.de/blog/projects/locking.html Commented Jan 10, 2015 at 1:05
  • @foobarbecue, normally those are only advisory locks, and the man page at least seems to indicate they apply only to read/write, not unlink. Commented Jan 10, 2015 at 23:34
  • The solutions on this page don't work for me - I'm still not able to delete the file. In my case I'm bothered by the file's size, so I use this little trick: vim unwanted_file, then simply delete the content inside the file in edit mode. This way I free the disk space, but the file is still there. Commented Feb 27, 2021 at 13:13

9 Answers


The tool you want is lsof, which stands for list open files.

It has a lot of options, so check the man page, but if you want to see all open files under a directory:

lsof +D /path

That will recurse through the filesystem under /path, so beware doing it on large directory trees.

Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.
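Put together, that workflow might look like the sketch below (the directory path is hypothetical; the kill commands are commented out so you can inspect the PID list first):

```shell
# Sketch of the lsof-then-kill workflow described above.
# lsof prints a header line, then one line per open file; the PID is
# column 2. NR>1 skips the header, sort -u de-duplicates the PIDs.
pids_holding() { awk 'NR>1 {print $2}' | sort -u; }

# Real usage (path hypothetical; uncomment once you trust the PID list):
#   lsof +D /path/to/busy-dir | pids_holding | xargs -r kill      # SIGTERM first
#   lsof +D /path/to/busy-dir | pids_holding | xargs -r kill -9   # last resort
```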

  • What if there were no results? Commented Feb 4, 2014 at 15:04
  • @marines: Check if another filesystem is mounted beneath /path. That is one cause of hidden "open files". Commented Feb 5, 2014 at 9:16
  • Running lsof directly on the path did not work for me. Basically you need to go into the path's location and then run lsof busy_file, then kill all the processes. Commented Jul 4, 2016 at 11:56
  • lsof seems to do nothing for me: lsof storage/logs/laravel.log returned nothing, and so did lsof +D storage/logs/. umount responded with not mounted. Commented May 25, 2018 at 1:01
  • Just to elaborate on @camh's answer: use mount | grep <path>. That shows any /dev/<abc> that might be mounted on the <path>. Use sudo umount -lf /dev/<abc> and then try to remove <path>. Works for me. Thanks @camh. Commented Jun 27, 2018 at 23:33

Sometimes it's the result of a mounting issue, so I'd unmount the filesystem or directory you're trying to remove:

umount /path
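A quick way to check for this case - a sketch with a hypothetical path - is to filter the output of mount for mount points at or below the directory:

```shell
# Each line of `mount` output looks like "/dev/sda1 on /mnt/data type ext4 (rw)",
# so field 3 is the mount point. This prints mount points at or below $1.
mounts_under() { awk -v d="$1" '$3 == d || index($3, d "/") == 1 {print $3}'; }

# Real usage (path hypothetical):
#   mount | mounts_under /path/to/busy-dir
#   umount /path/to/busy-dir && rm -rf /path/to/busy-dir
```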

  • In my case, Jenkins didn't unmount the chroot dir after a task abort. Commented Nov 3, 2016 at 3:46
  • In my case I cannot unmount because the device is busy. Commented Apr 13, 2018 at 18:32
  • You would think the mount command would first do a umount to ensure the path was clear... Commented Aug 7, 2020 at 15:04
  • Late to the party, but maybe useful for future readers: mount the directory rather than mounting the file, because that was what caused the issue for me. Commented Sep 28, 2021 at 3:04

I had this same issue and built a one-liner starting with @camh's recommendation:

lsof +D ./ | awk '{print $2}' | tail -n +2 | xargs -r kill -9
  • awk grabs the PIDs.
  • tail gets rid of the pesky first entry: "PID".
  • xargs executes kill -9 on the PIDs. The -r / --no-run-if-empty flag prevents the kill command from failing in case lsof did not return any PIDs.
  • @ChoyltonB.Higginbottom as you asked for a safer way to prevent kill <no PID> failure (if lsof returns nothing) - use xargs with -r / --no-run-if-empty. For non-GNU xargs, see this alternative: stackoverflow.com/a/19038748 Commented Jun 10, 2020 at 7:29
  • You can pipe the tail -n +2 output through sort -u before killing the job IDs. Commented Sep 16, 2021 at 10:58
  • kill -9 is a favorite but does have serious implications. This signal is "non-catchable, non-ignorable" to the process, so the process may terminate without saving critical state data. Perhaps try a simple kill first, and if that doesn't work, then the -9? Finally, bear in mind that if the process is blocked on I/O, kill -9 isn't going to work. That's not an oversight in this suggestion, just something to keep in mind. Commented Jun 22, 2022 at 21:26

I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem, since the files are typically named like .nfs000000123089abcxyz.

My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file will have been removed automatically, at which point I am free to delete the directory.

This typically happens in directories where I am installing or compiling software libraries.
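The rename trick can be sketched as below (the directory name is hypothetical). It works because renaming the parent doesn't unlink the busy .nfs* file; it just moves it out of the way:

```shell
# Move the directory aside; the rename succeeds even while a .nfs* file
# inside is still held open, since nothing is actually unlinked.
set_aside() { mv "$1" "$1.stale"; }

# Real usage (name hypothetical):
#   set_aside build-dir
#   ...a day or two later, once the server has reaped the .nfs* files:
#   rm -rf build-dir.stale
```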

  • I also have the same problem with .nfsxxx files dropped seemingly in random places. However, I am not sure how this suggestion can make sense - obviously renaming the parent directory should not work, because its contents are locked; you wouldn't get the error in the first instance otherwise. I tried it and simply nothing happens; the renaming refuses to happen. Do you want to elaborate, or have any other suggestion? Commented Aug 31, 2022 at 7:27
  • Renaming the parent directory has always worked for me, no clue why. This assumes your files are a couple of directory levels down and not at the volume root, of course. Sorry I don't have a better answer than "it just works for me". Commented Aug 31, 2022 at 15:59

I use fuser for this kind of thing. It will list which process is using a file or files within a mount.
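Typical invocations look like this - a sketch with hypothetical paths; check fuser(1) for the exact flags on your system:

```shell
# -v prints the PID, user and access type for each process using the file.
# fuser exits non-zero when nothing matches, hence the `|| true`.
show_users() { fuser -v "$1" 2>&1 || true; }

# Real usage (paths hypothetical):
#   show_users /path/to/busy-file
#   fuser -vm /path/to/mountpoint   # -m: every process on that filesystem
#   fuser -k  /path/to/busy-file    # -k: kill those processes (use with care)
```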

  • fuser helps only in the specific case when you want to unmount a filesystem. Here the problem is to find what's using a specific file. Commented Apr 13, 2011 at 19:09
  • @Gilles: It also works for files. Commented Apr 14, 2011 at 0:36
  • Sorry, wrong objection: fuser doesn't help here because the problem is to find all the open files in a directory tree. You can tell lsof to show all files and filter, or make it recurse; fuser has no such mode and needs to be invoked on every file. Commented Apr 14, 2011 at 7:57
  • @Gilles: fuser works with lists. Try fuser /var/log/*; if any logs are open, it will tell which ones and who has them open. If a simple wildcard won't work, find with or without xargs will do the job. Commented Apr 14, 2011 at 17:23
  • lsof was not in my path while fuser was, allowing me to find the offending process ID to kill, so +1 and thanks. Commented Oct 13, 2015 at 19:48

Here is the solution:

  1. Go into the directory and type ls -a.
  2. You will find a .xyz file.
  3. vi .xyz and look at the content of the file.
  4. Run ps -ef | grep username.
  5. You will see the .xyz content in the 8th column (last row).
  6. kill -9 job_id, where job_id is the value in the 2nd column of the row whose 8th column contains the offending content.
  7. Now try to delete the folder or file.
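A sketch of those steps as commands (the .xyz name and the paths are hypothetical):

```shell
# In `ps -ef` output the PID is column 2 and the command is column 8.
# This prints the PID of the first process whose command mentions $1.
pid_of() { ps -ef | awk -v pat="$1" 'index($8, pat) {print $2; exit}'; }

# Real usage (names hypothetical):
#   cd /path/to/busy-dir && ls -a    # spot the stray dotfile, e.g. .xyz
#   cat .xyz                         # inspect its contents
#   kill -9 "$(pid_of .xyz)"         # then retry the rm
```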
  • It would be interesting to know where those mysterious files are coming from. Commented Aug 12, 2014 at 20:59
  • This doesn't work in my situation; there simply is no .xyz file. Commented Oct 15, 2020 at 10:57
  • For me lsof does not work, but I am able to use this. Commented Sep 19, 2021 at 14:08

Riffing off of Prabhat's answer: I had this issue on macOS High Sierra when I stranded an encfs process. Rebooting solved it, but this

ps -ef | grep name-of-busy-dir

showed me the process and the PID (column two).

sudo kill -15 pid-here

fixed it, where -15 is defined as SIGTERM under 'Signal numbering for standard signals' in man 7 signal.
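If you forget the numbering, kill -l maps numbers to names. This assumes a shell whose kill builtin supports the form, which bash and most POSIX shells do:

```shell
# Ask the shell which signal number 15 is; prints TERM (or SIGTERM,
# depending on the shell).
kill -l 15
```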

  • This worked for me too. What's the -15? Commented Aug 27, 2019 at 23:55
  • @O.rka 15 is the id of the SIGTERM signal; see here: unix.stackexchange.com/questions/317492/list-of-kill-signals Commented Sep 14, 2020 at 16:14
  • kill -TERM is easier to remember, where supported. Commented Jun 30, 2024 at 22:02

If you have access to the server, try:

Deleting that dir from the server directly.

Or, unmount and mount again; try umount -l (lazy umount) if you face any issue with a normal umount.

I too had this problem, where:

lsof +D path: gives no output

ps -ef: gives no relevant information


I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and fuser, were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused for ages because I couldn't get rid of it -- I kept getting "Device or resource busy"!

By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the mount command, i.e. sudo umount path

Because it was created by automated testing, it got mounted many times, which is why I couldn't get rid of it by simply unmounting it once after the tests. So, after I manually unmounted it lots of times, it finally became a regular folder again and I could delete it.
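That repeated unmounting can be sketched as a loop (the path is hypothetical); mountpoint(1) exits 0 while the directory is still a mount point:

```shell
# Keep unmounting until the directory is no longer a mount point,
# which handles filesystems that were stacked there by repeated mounts.
unmount_all() {
  while mountpoint -q "$1"; do
    umount "$1" || break   # stop if an unmount fails (e.g. still busy)
  done
}

# Real usage (path hypothetical; run as root):
#   unmount_all /path/to/ramdisk && rm -rf /path/to/ramdisk
```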
