I learned Bash a million years ago. I just wrote this simple script that gets the first lot of HTML comments from a file and spits it out, in order to create a README.md file.
It is just. So. Ugly. I've read bits and pieces over the years, and I am sure it can be improved so much...
Here we go:
#!/bin/bash
IFS=''
active='0';
cat hot-form-validator.html | while read "line";do
echo $line | grep '\-\->' > /dev/null
if [ $active = '1' -a $? = '0' ];then
exit 0;
fi;
suppress=0;
echo $line | grep '^ *@' > /dev/null
if [ $? = '0' ];then suppress='1'; fi;
if [ $active = '1' -a $suppress = '0' ];then echo $line;fi;
echo $line | grep "<!--" > /dev/null
if [ $? = '0' ];then active='1'; fi;
done
Questions:
1. Is there a better way to do grep and then check $?? Back in the day it was the way to go, but... (see my rough attempt after the questions)
2. Should active be a proper number rather than a string holding a number? I know, it could be anything... but having a string that can be 0 or 1 just feels wrong.
3. Is there a better way to preserve spaces, rather than zapping IFS?
4. Any more pearls of wisdom, other than quitting my (short-lived) career as a Bash scripter?
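For reference, this is roughly the kind of rewrite I have in mind (an untested sketch, reading the same hot-form-validator.html and keeping the same logic as above): active becomes a plain integer, the grep-then-check-$? dance is replaced by testing a [[ ... ]] match directly in the if, and cat | while read becomes a redirect into the loop, with IFS= read -r clearing IFS only for the read itself:

#!/bin/bash
# Untested sketch: print the body of the first HTML comment block,
# skipping lines whose first non-blank character is "@", then stop.
active=0
at_re='^ *@'    # same "leading spaces then @" pattern the grep version used
while IFS= read -r line; do
    # Closing marker of the first comment block: we are done.
    if (( active )) && [[ $line == *'-->'* ]]; then
        exit 0
    fi
    # Inside the block: print everything except the "@..." lines.
    if (( active )) && ! [[ $line =~ $at_re ]]; then
        printf '%s\n' "$line"
    fi
    # Opening marker: start printing from the next line onwards.
    if [[ $line == *'<!--'* ]]; then
        active=1
    fi
done < hot-form-validator.html

If an external grep is still preferred, its exit status can be tested directly in the if, e.g. if grep -q -- '-->' <<< "$line"; then ... fi, without ever looking at $? by hand.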