
I have a Kodi video library where each movie is in its own folder because I had to place .nfo files with links to TMDB in each folder to ensure correct identification. The movies are in their original Blu-ray resolution, stored on a drive shared over Samba on a gigabit LAN.

I need to transcode all these files with FFmpeg to max. 1334×750 px.

Setup: Intel Core i7-3930K, 2 x NVIDIA GTX 980 6 GB GPUs, KDE on Debian Testing, custom FFmpeg compiled with h264_nvenc enabled. Although the GPUs are connected with an SLI bridge, they're not in SLI mode due to a limitation of NVIDIA's Linux driver (v550.163.01). GPU 0 is used by the system; GPU 1 is idle.

How can I do this efficiently?

2 Answers

I've tried combinations of hwaccel and h264_nvenc.

  • hwaccel is for decoding only
  • all streams other than video (i.e. audio and subtitles) are copied without transcoding
  • scale=1334:-2 tells the filter to scale the input to 1334 pixels wide and to calculate a height divisible by 2 while maintaining the original aspect ratio
  • as the question is about GPU acceleration, I've omitted the classic CPU-only ffmpeg invocation
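To see what the -2 works out to for a couple of common source sizes, here's a small pure-bash sketch (scaled_height is a hypothetical helper of my own; ffmpeg's exact rounding to the even constraint can differ by a pixel):

```shell
#!/bin/bash
# Hypothetical helper mimicking scale=W:-2: keep the aspect ratio at the
# requested width and round the resulting height to a multiple of 2.
scaled_height() {
  local src_w=$1 src_h=$2 dst_w=$3
  # integer math: scale, then round to the nearest even number
  echo $(( ( (src_h * dst_w / src_w) + 1 ) / 2 * 2 ))
}

scaled_height 1920 1080 1334   # 1080p Blu-ray source -> 750
scaled_height 3840 1600 1334   # 2.40:1 UHD source    -> 556
```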

Results

  1. Slow: CPU load 90%, transcode speed 4x. (Decoding runs on the GPU, but with no -codec:v given, encoding falls back to ffmpeg's default CPU encoder.)

    ffmpeg -hwaccel cuda -i "in.mkv" -map 0 -codec:a copy -codec:s copy \
    -filter:v scale=1334:-2 "out.mkv"
    
  2. Faster: CPU load 30%, transcode speed 11x

    ffmpeg -hwaccel cuda -i "in.mkv" -map 0 -codec:a copy -codec:s copy \
    -codec:v h264_nvenc -gpu 1 -filter:v scale=1334:-2 "out.mkv"
    
  3. Fastest: CPU load 80%, transcode speed 12x

    ffmpeg -i "in.mkv" -map 0 -codec:a copy -codec:s copy \
    -codec:v h264_nvenc -gpu 1 -filter:v scale=1334:-2 "out.mkv"
    

Script

#!/bin/bash

# Rescale files in subdirectories (Kodi library) using 1334 as driving width 
# from which height is calculated. Output format is mkv.

for directory in * ; do
  [[ -d "$directory" ]] || continue
  for file in "$directory"/*.{mkv,mp4,mpg,webm} ; do
    [[ -e "$file" ]] || continue
    ffmpeg -i "$file" -map 0 -codec:a copy -codec:s copy -codec:v h264_nvenc -gpu 1 -filter:v scale="1334:-2" "${file%.*}-w1334.mkv"
  done
done
  • Interesting find. How is the GPU selected in the final script? Commented Aug 4 at 18:19
  • With Blu-Ray data for input, I would assume the common drive and/or the network throughput might introduce an I/O bottle-neck. What are your observations regarding that aspect? Commented Aug 4 at 18:20
  • RE: the I/O bottleneck mentioned by @Hermann: if you're reading from the samba drive and writing to it at the same time, that's going to seriously cripple the I/O performance, partly from sharing the network bandwidth but mostly from causing the samba server to thrash the disks, moving the disk heads back and forth between the sectors it's reading and the sectors it's writing. You can improve this a lot either by copying the source file to local disk and writing the output to the samba share, or by reading from samba, writing to local disk, and mv-ing the output file to samba after it has been transcoded. Commented Aug 5 at 2:56
  • If you have enough RAM for a big enough tmpfs ramdisk, you can even write the transcode output to a local ramdisk before moving it on completion, which will be even faster than writing to a local SATA or NVMe SSD. BTW, this might seem pointless because ultimately you're still writing the file back to the samba server - but the final mv write is done in one large sequential operation rather than lots of smaller random writes, and without competing for either network bandwidth or file-server disk bandwidth. Commented Aug 5 at 2:58
  • @Hermann some test results: SSD to the same SSD: 1 - 2x, 2 - 10x, 3 - 13x. SSD to HDD: 1 - 4x, 2 - 11x, 3 - 13x. SSD to RAM disk: 3 - 13x. RAM disk to SSD: 3 - 12x. Numbers refer to the slow - faster - fastest commands in the answer. Commented Aug 6 at 1:31
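The ramdisk suggestion above can be sketched like this; the /dev/shm path (a tmpfs mount on most Linux systems) and the /tmp fallback are my own assumptions:

```shell
#!/bin/bash
# Prefer the tmpfs ramdisk at /dev/shm for transcode output when it is
# present and writable; otherwise fall back to /tmp. Either way the output
# lands on a local filesystem instead of the Samba share.
if [[ -d /dev/shm && -w /dev/shm ]]; then
  scratch=/dev/shm
else
  scratch=/tmp
fi

echo "writing transcodes to $scratch"
# ffmpeg ... "$scratch/${filename%.*}-w1334.mkv" && mv ... "$directory"
```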

Example modified script writing to local /tmp, then moving to the server if the transcode was successful:

#!/bin/bash

# Rescale files in subdirectories (Kodi library) using 1334
# as driving width from which height is calculated.
# Output format is mkv.

for directory in * ; do
  [[ -d "$directory" ]] || continue
  for file in "$directory"/*.{mkv,mp4,mpg,webm} ; do
    [[ -e "$file" ]] || continue
      filename=$(basename "$file")
      output_file="/tmp/${filename%.*}-w1334.mkv"

      ffmpeg -i "$file" -map 0 -codec:a copy \
        -codec:s copy -codec:v h264_nvenc -gpu 1 \
        -filter:v scale="1334:-2" "$output_file" &&
            mv "$output_file" "$directory"
  done
done
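For reference, the basename plus ${filename%.*} combination used above behaves like this (the path is a made-up example):

```shell
#!/bin/bash
file="Movies/Alien (1979)/Alien.1979.BluRay.mkv"
filename=$(basename "$file")     # strips the directory part
echo "$filename"                 # -> Alien.1979.BluRay.mkv
echo "${filename%.*}-w1334.mkv"  # %.* removes the last .suffix
# -> Alien.1979.BluRay-w1334.mkv
```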

I also modified the script to use backslash line continuations, to avoid horizontal scrolling on SE due to long lines.

I find this also makes long, complicated command lines like those often used with ffmpeg much more readable, and easier to understand, debug, and modify.

Another bash readability tip with commands like ffmpeg (and rsync, and many others where long command lines are common) is to put some or all of the args in an array and use "${arrayname[@]}" in the ffmpeg command line. Define the array before the main loop - it only needs to be done once, not repeatedly. e.g.

#!/bin/bash

# Rescale files in subdirectories (Kodi library) using 1334
# as driving width from which height is calculated.
# Output format is mkv.

ffargs=(-map 0
        -codec:a copy
        -codec:s copy
        -codec:v h264_nvenc
        -gpu 1
        -filter:v
        scale="1334:-2"
       )

for directory in * ; do
  [[ -d "$directory" ]] || continue
  for file in "$directory"/*.{mkv,mp4,mpg,webm} ; do
    [[ -e "$file" ]] || continue
      filename=$(basename "$file")
      output_file="/tmp/${filename%.*}-w1334.mkv"

      ffmpeg -i "$file" "${ffargs[@]}" "$output_file" &&
          mv "$output_file" "$directory"
  done
done

BTW, I'm not sure why you're bothering to check whether the input file exists - the script is looping over filenames found in the directory, so it's pretty much guaranteed to exist. If you need a test at all, it might be more useful to check that it's a regular file (i.e. not a symlink or pipe or whatever) that is non-empty and readable by your uid, rather than just testing for existence, e.g. instead of [[ -e "$file" ]]:

[[ -f "$file" && -s "$file" && -r "$file" ]] || continue

And maybe also check if the directory is both readable & writable by you:

[[ -d "$directory" && -r "$directory" && -w "$directory" ]] || continue
  • glad it was useful for you. Two more things: 1. instead of for directory in * ; do and then checking if it's a directory, you can do for directory in */ ; do to match only directories. It's probably still worth checking if the dir is RW by you. 2. if you don't want to waste time transcoding files that have already been transcoded, add something like [[ -s "$directory/$output_file" ]] && continue after the output_file=... line. Commented Aug 5 at 12:04
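Putting both of those suggestions together with the earlier checks, the whole loop might look like this sketch (making the ffmpeg binary overridable via an FFMPEG variable is my own addition, so the loop can be dry-run with a stub):

```shell
#!/bin/bash
# Sketch combining the */ glob and the skip-already-transcoded check with
# the file and directory tests from the answer above.
transcode_all() {
  local directory file filename output_file
  for directory in */ ; do                      # the */ glob matches only directories
    [[ -r "$directory" && -w "$directory" ]] || continue
    for file in "$directory"*.{mkv,mp4,mpg,webm} ; do
      [[ -f "$file" && -s "$file" && -r "$file" ]] || continue
      filename=$(basename "$file")
      output_file="/tmp/${filename%.*}-w1334.mkv"
      # skip inputs whose transcoded copy already exists in the library
      [[ -s "$directory${filename%.*}-w1334.mkv" ]] && continue
      "${FFMPEG:-ffmpeg}" -i "$file" -map 0 -codec:a copy -codec:s copy \
        -codec:v h264_nvenc -gpu 1 -filter:v scale="1334:-2" \
        "$output_file" && mv "$output_file" "$directory"
    done
  done
}
```

Call transcode_all from the library root; with FFMPEG unset it runs the real ffmpeg.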
