I'm trying to quantize a TF.js model from the standard float32 weights down to float16. This is how I load my .keras model and convert it to a TF.js Layers model; this part works:
import tensorflow as tf
from tensorflowjs.converters import save_keras_model

keras_input_file_path = '/content/cnn_model.keras'
tfjs_model_dir = '/content/tfjs_cnn_model'  # This will hold the float32 TF.js model

model = tf.keras.models.load_model(keras_input_file_path, compile=False)
save_keras_model(model, tfjs_model_dir)
Then I run the following converter command:
!tensorflowjs_converter \
--input_format=tfjs_layers_model\
--output_format=tfjs_layers_model\
--quantize_float16\
/content/tfjs_cnn_model\
/content/tfjs_cnn_f16
but it gives me this error:
ValueError: Missing output_path argument.
I've looked through the documentation but didn't find a solution. Maybe it has to do with the Keras, TF, and tfjs versions not working well together? I'm on tf 2.18.0, tfjs 4.22.0, keras 3.8.0. Any ideas are welcome, thanks.
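
For what it's worth, I'd also be fine doing the float16 quantization directly from the Python API instead of the CLI, if that's the more reliable route. This is roughly what I had in mind, but I'm not sure quantization_dtype_map (or the '*' wildcard) is the right way to request float16 for all weights, so treat it as a sketch:

import tensorflow as tf
from tensorflowjs.converters import save_keras_model

model = tf.keras.models.load_model('/content/cnn_model.keras', compile=False)

# Sketch: quantize weights to float16 while saving the TF.js Layers model.
# Assumes save_keras_model accepts quantization_dtype_map and that '*' matches every weight.
save_keras_model(
    model,
    '/content/tfjs_cnn_f16',
    quantization_dtype_map={'float16': '*'},
)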