
I'm a little new to TensorFlow and would like to understand why the following code does not accept my input, and how to resolve it. Previously I was saving the model with model.save, but I have now converted it to TFLite and would like to use it to predict the category of the input text.

I first load the TFLite model and allocate tensors.

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="/model.tflite")
interpreter.allocate_tensors()

Then I get the input and output details.

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

Then I tokenise my input data. I take my text and run it through texts_to_sequences (using a tokeniser restored with tokenizer_from_json in TensorFlow), then pad it exactly as I did during model training. So my input is something like this...

[[144 122 557 136   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0]]

This is the shape I would expect, as I used the same one for my model training.
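For reference, the tokenise-and-pad step described above can be sketched as follows. The vocabulary, the fitted text, and the maxlen of 60 are assumptions made for illustration; in practice the tokeniser would be restored with tokenizer_from_json.

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical tokeniser; in practice restored with tokenizer_from_json.
tokeniser = Tokenizer()
tokeniser.fit_on_texts(["we know what we are but know not what we may be"])

# texts_to_sequences expects a *list* of texts, hence the wrapping list.
seq = tokeniser.texts_to_sequences(["we know what we are"])

# Pad to the sequence length used during training (assumed 60 here).
padded = pad_sequences(seq, maxlen=60, padding="post")
print(padded.shape)  # (1, 60)
```

The result is a (1, 60) integer array, matching the padded input shown above.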

Now I just want to use TensorFlow Lite to predict, but running the code below gives the following error:

input_shape = input_details[0]['shape']
text = 'We know what we are, but know not what we may be.'
seq = self.tokeniser.texts_to_sequences(text)
padded = pad_sequences(seq, maxlen=60)  # pad to the training length of 60
input_tensor = np.array(padded, dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_tensor)

Cannot set tensor: Dimension mismatch. Got 60 but expected 1 for dimension 1 of input 0.

Why?

This is what I have tried based on suggestions:

output = interpreter.get_output_details()[0]  # Model has single output.
input = interpreter.get_input_details()[0]  # Model has single input.
input_data = tf.constant(padded_text, shape=[1, 1])
interpreter.set_tensor(input['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output['index']).shape)

which gives the following error:

Eager execution of tf.constant with unsupported shape. Tensor [[144 122 557 136 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
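This second error is consistent with how tf.constant works: a scalar fill value like 1. can be broadcast to any requested shape, but 60 concrete token ids cannot be squeezed into a [1, 1] tensor. A small illustration (values are arbitrary):

```python
import tensorflow as tf

# A scalar fill value broadcasts to the requested shape.
ok = tf.constant(1., shape=[1, 1])
print(ok.shape)  # (1, 1)

# But 60 concrete values cannot fit into a [1, 1] tensor;
# eager execution rejects the unsupported shape.
try:
    tf.constant(list(range(60)), shape=[1, 1])
except (TypeError, ValueError) as e:
    print(type(e).__name__)
```

So copying the docs' shape=[1, 1] example verbatim cannot work for a padded 60-token sequence.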

3 Comments

  • What does print(interpreter.get_input_details()) return? Commented Jun 9, 2022 at 6:38
  • Try seq = self.tokeniser.texts_to_sequences([text]). I think the input should be a list of texts. Commented Jun 12, 2022 at 16:04
  • @AbhinavMathur input shape [{'name': 'serving_default_embedding_input:0', 'index': 0, 'shape': array([1, 1], dtype=int32), 'shape_signature': array([-1, -1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}] padded shape (1, 60) Commented Jun 15, 2022 at 15:02
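The input details in the comment above point at the likely cause: the converted model's stored input shape is [1, 1], but its shape_signature is [-1, -1], meaning both dimensions are dynamic. In that case the interpreter's input can be resized to (1, 60) with resize_tensor_input before allocating tensors. The sketch below builds a tiny stand-in model in memory (the real model comes from model.tflite; the layer sizes and 3-class output are assumptions):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model with a dynamic sequence length (hypothetical;
# the question's real model is loaded from model.tflite).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,), dtype="float32"),
    tf.keras.layers.Embedding(input_dim=1000, output_dim=8),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_details = interpreter.get_input_details()

# The stored shape may be e.g. [1, 1], but a shape_signature of [-1, -1]
# means the dimensions are dynamic, so resize before allocating.
interpreter.resize_tensor_input(input_details[0]['index'], [1, 60])
interpreter.allocate_tensors()

# Padded token ids from the question, as a (1, 60) float32 array.
padded = np.zeros((1, 60), dtype=np.float32)
padded[0, :4] = [144, 122, 557, 136]
interpreter.set_tensor(input_details[0]['index'], padded)
interpreter.invoke()

output_details = interpreter.get_output_details()
print(interpreter.get_tensor(output_details[0]['index']).shape)
```

With the resize in place, set_tensor accepts the (1, 60) input instead of raising the dimension-mismatch error.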

1 Answer

Your input tensor is the wrong size. The docs show that your input data should be of shape [1, 1]:

input_data = tf.constant(1., shape=[1, 1])
interpreter.set_tensor(input['index'], input_data)

I suspect changing the line:

interpreter.set_tensor(input_details[0]['index'], input_tensor)

to

interpreter.set_tensor(input_details['index'], input_tensor)

should fix your issue.


1 Comment

Thank you for the response. I have updated my question with what happens with your solution, and I still get an error.
