No need for anything except a straightforward sort with a uniqueness constraint.
sort -u old.txt new.txt >new.txt.tmp &&
mv -f new.txt.tmp new.txt
rm -f new.txt.tmp    # Clean up the temporary file if sort failed
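For example, with some made-up sample data (file contents assumed purely for illustration), the result is a sorted, duplicate-free new.txt:

```shell
# Hypothetical sample data for the two files
printf 'a\nb\nc\n' > old.txt
printf 'b\nc\nd\n' > new.txt

sort -u old.txt new.txt >new.txt.tmp &&
mv -f new.txt.tmp new.txt
rm -f new.txt.tmp    # Clean up the temporary file if sort failed

cat new.txt    # a, b, c, d -- one per line, duplicates collapsed
```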
I see that POSIX does specify that sort may write its output directly to one of its input files (via -o), so you could also do this. However, I haven't tested how robust it is if sort fails partway through; the previous version guarantees that you either keep the original file or replace it with the complete new one, with no loss:
sort -o new.txt -u old.txt new.txt
Alternatively you could use awk. This version keeps the order of the lines in the files intact, starting with the first file and adding only new lines from subsequent files:
awk '
FNR<NR && h[$0] { next } # Skip seen lines in secondary files
{ h[$0]=1; print } # Record the line and output it
' new.txt old.txt
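A quick sketch with made-up sample data shows the order-preserving behaviour (file contents here are assumed for illustration only):

```shell
# Hypothetical sample data: "b" and "c" appear in both files
printf 'b\nc\nd\n' > new.txt
printf 'a\nb\nc\n' > old.txt

awk '
FNR<NR && h[$0] { next } # Skip seen lines in secondary files
{ h[$0]=1; print }       # Record the line and output it
' new.txt old.txt
# Prints b, c, d (the first file, in order), then a
# (the only line of old.txt not already seen)
```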
I've split this over several lines so I can add comments. Remove those and you could collapse it into a single line, but it's generally better to write readable code. Note that it intentionally does not remove duplicate lines already present in the first file, new.txt. If you want that as well:
awk '! h[$0]++' new.txt old.txt
This increments an associative array entry for each line seen, but prints the line only when the pre-increment value was zero (i.e. the line was unseen).
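To see the difference, here's a sketch with made-up data where the first file itself contains a duplicate (file contents assumed for illustration):

```shell
# Hypothetical sample data: "x" is duplicated within new.txt itself
printf 'x\nx\ny\n' > new.txt
printf 'y\nz\n' > old.txt

awk '! h[$0]++' new.txt old.txt
# Prints x, y, z -- the repeated "x" in new.txt is dropped too,
# unlike with the previous version
```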