I am trying to extract intermediate outputs from a quantized TFLite model using the TFLite interpreter. The goal is to verify that the model's intermediate outputs match the mathematically expected values (TensorFlow 2.6.2, Python 3.6.8).

Steps I followed:

  1. Loaded a quantized TFLite model using tf.lite.Interpreter.
  2. Set the model input using: interpreter.set_tensor(input_index, quantized_input)
  3. Invoked the interpreter: interpreter.invoke()
  4. Retrieved op details: op_details = interpreter._get_ops_details()
  5. For each layer, fetched the output tensor index: out_tensor_index = op_details[i]["outputs"][0]
  6. Got the tensor values: output = interpreter.get_tensor(out_tensor_index)

I saved the output of each layer to .npy files for further analysis.
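The steps above can be sketched as follows. This is a self-contained illustration, not my exact code: it builds a tiny fully-quantized model in place of my real .tflite file, and it passes `experimental_preserve_all_tensors=True` (available since TF 2.5), which may matter here, since without it the interpreter is free to reuse intermediate tensor buffers and `get_tensor()` on non-output tensors can return stale data.

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny model so the sketch is self-contained; in practice,
# load your own quantized model with tf.lite.Interpreter(model_path=...).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# preserve_all_tensors keeps intermediate tensors valid after invoke();
# without it, get_tensor() on intermediates is not guaranteed meaningful.
interpreter = tf.lite.Interpreter(
    model_content=tflite_model,
    experimental_preserve_all_tensors=True,
)
interpreter.allocate_tensors()

# Quantize a float input using the input tensor's (scale, zero_point).
inp = interpreter.get_input_details()[0]
scale, zero_point = inp["quantization"]
quantized_input = np.clip(
    np.round(np.random.rand(1, 8).astype(np.float32) / scale + zero_point),
    -128, 127,
).astype(np.int8)
interpreter.set_tensor(inp["index"], quantized_input)
interpreter.invoke()

# Walk the ops (note: _get_ops_details() is a private API) and save
# each op's first output tensor to a .npy file.
saved = {}
for i, op in enumerate(interpreter._get_ops_details()):
    out = interpreter.get_tensor(op["outputs"][0])
    saved[i] = out
    np.save(f"layer_{i:02d}_{op['op_name']}.npy", out)
```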

Issue:

The output values obtained from interpreter.get_tensor() do not match the mathematically calculated (expected) values, even after accounting for the quantization parameters (scale and zero point) of each tensor.
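For reference, this is how I understand the comparison should work, using the standard TFLite affine quantization formula real = scale * (quantized - zero_point). The numbers below are made up for illustration; the scale and zero point come from each tensor's "quantization" field in interpreter.get_tensor_details():

```python
import numpy as np

# Hypothetical quantization parameters for one intermediate tensor.
scale, zero_point = 0.05, -3

# Hypothetical raw int8 values as read back with interpreter.get_tensor().
raw = np.array([-10, 0, 25], dtype=np.int8)

# Dequantize: real_value = scale * (quantized_value - zero_point)
dequantized = scale * (raw.astype(np.float32) - zero_point)

# Compare against the float reference with a tolerance of roughly one
# quantization step; exact equality is not expected after quantization.
expected = np.array([-0.35, 0.15, 1.40], dtype=np.float32)
matches = np.allclose(dequantized, expected, atol=scale)
```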

Questions:

  1. Is this the correct approach to extract intermediate layer outputs from a quantized TFLite model?
  2. Are there any limitations with using interpreter._get_ops_details() and get_tensor() on quantized models?

Any help is welcome; let me know if more information is needed.

  • Welcome to Stack Overflow! Please try to provide a minimal, reproducible example so that others can reproduce your problem. Also try to be more specific about the problem you observe. In particular, what do you mean by "The output values obtained from interpreter.get_tensor() do not match the mathematically calculated (expected) values"? Do you mean that the values do not match exactly (in the sense that A == B would be False, but numpy.allclose(A, B) would be True)? Or do you mean that they are completely different? Commented May 23 at 7:59
