
I'm trying to quantize a tfjs model to float16 from the standard float32. This is how I loaded my .keras model and converted it to tfjs. This part works.

import tensorflow as tf
from tensorflowjs.converters import save_keras_model

keras_input_file_path = '/content/cnn_model.keras'
tfjs_model_dir = '/content/tfjs_cnn_model'  # This will hold your float32 TF.js model

# Load the Keras model without compiling (only the architecture and weights are needed)
model = tf.keras.models.load_model(keras_input_file_path, compile=False)

# Write out a float32 TF.js Layers model
save_keras_model(model, tfjs_model_dir)

Then I run the following command:

!tensorflowjs_converter \
  --input_format=tfjs_layers_model \
  --output_format=tfjs_layers_model \
  --quantize_float16 \
  /content/tfjs_cnn_model \
  /content/tfjs_cnn_f16

but it gives me this error:

ValueError: Missing output_path argument.

I've looked through the documentation but didn't find a solution. Maybe it has to do with different keras, tf, and tfjs versions not working well together? tf: 2.18.0, tfjs: 4.22.0, keras: 3.8.0. Any ideas are welcome, thanks.

1 Answer


The ValueError occurs because tensorflowjs_converter misinterprets the input and output paths when --quantize_float16 is used without explicitly specifying --input_path and --output_path. Always use these explicit flags to avoid argument parsing issues, especially with quantization options.

Try the command below:

!tensorflowjs_converter \
  --input_format=tfjs_layers_model \
  --output_format=tfjs_layers_model \
  --quantize_float16 \
  --input_path=/content/tfjs_cnn_model \
  --output_path=/content/tfjs_cnn_f16

1 Comment

--input_path or --output_path cannot be passed this way; however, you are correct that --quantize_float16 and the other quantization options are probably "eating" the next parameter incorrectly. I opened an issue (github.com/tensorflow/tfjs/issues/8589). For now, the workaround I found was to call the CLI command with --quantize_uint8 "*"
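For reference, a minimal sketch of that workaround applied to the float16 case from the question. The assumption is that the quantization flags accept an optional weight-name pattern (which is why they swallow the following positional path), so passing "*" explicitly quantizes all weights and leaves the two positional input/output paths intact:

!tensorflowjs_converter \
  --input_format=tfjs_layers_model \
  --output_format=tfjs_layers_model \
  --quantize_float16 "*" \
  /content/tfjs_cnn_model \
  /content/tfjs_cnn_f16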
