# How can I remove duplicates in my .bash_history, preserving order?

I really enjoy using Ctrl+R to reverse-search my command history. I've found a few good options I like to use with it:

# ignore duplicate commands, ignore commands starting with a space
export HISTCONTROL=erasedups:ignorespace

# keep the last 5000 entries
export HISTSIZE=5000

# append to the history instead of overwriting (good for multiple connections)
shopt -s histappend
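
One related setting worth noting (my addition, based on bash's documented behavior rather than anything above): HISTSIZE only caps the in-memory history list, while the history file itself is truncated to HISTFILESIZE when a shell exits, so it may be worth setting both:

# also keep the on-disk history file at 5000 entries
export HISTFILESIZE=5000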

The only problem for me is that erasedups only erases sequential duplicates, so with this sequence of commands:

ls
cd ~
ls

The ls command will actually be recorded twice. I've thought about periodically running this from cron:

sort .bash_history | uniq > temp.txt
mv temp.txt .bash_history

This would remove the duplicates, but unfortunately it would not preserve the order. If I don't sort the file first, I don't believe uniq can work properly, since it only collapses adjacent duplicate lines.
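
An approach that might avoid sorting altogether (untested on my part, just a sketch): awk can remember every line it has seen in an associative array and print a line only the first time it appears, which removes duplicates while preserving order. Piping through tac (GNU coreutils) first and again afterwards would keep the most recent occurrence of each command instead of the oldest:

# keep the first occurrence of each command, preserving order
awk '!seen[$0]++' ~/.bash_history > temp.txt && mv temp.txt ~/.bash_history

# variant: keep the last (most recent) occurrence instead
tac ~/.bash_history | awk '!seen[$0]++' | tac > temp.txt && mv temp.txt ~/.bash_history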

How can I remove duplicates in my .bash_history, preserving order?

### Extra Credit

Are there any problems with overwriting the .bash_history file via a script? For example, if you remove an Apache log file, I think you have to send the daemon a HUP signal with kill to make it release its handle on the file. If the same applies to the .bash_history file, perhaps I could use ps to check that no bash sessions are connected before the filtering script runs?
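
For concreteness, here's the kind of guard I'm imagining (a sketch only; the pgrep check and the awk dedupe are my assumptions, not a tested solution). Since bash writes its in-memory history back to the file when a session exits, the script bails out if any of my bash processes are still running:

#!/bin/bash
# dedupe-history.sh - sketch of the cron job described above

# Bail out if I still have live bash sessions; each one will append to
# (or rewrite) ~/.bash_history on exit and could undo or clobber our edit.
if pgrep -u "$(id -un)" -x bash > /dev/null; then
    exit 0
fi

# Order-preserving dedupe into a temp file, then replace the original.
tmp=$(mktemp) || exit 1
awk '!seen[$0]++' ~/.bash_history > "$tmp" && mv "$tmp" ~/.bash_history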