I am new to ML and want to perform the simplest possible classification with Keras: if y > 0.5 then label = 1 (regardless of x), and if y < 0.5 then label = 0 (regardless of x).
As far as I understand, a single neuron with sigmoid activation can perform this linear classification.
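(To check my own understanding: here is a small sketch with hand-picked weights — the values of w and b are my assumptions, not learned parameters — showing that one sigmoid unit should be able to separate these points:)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-picked weights: ignore x (weight 0) and put a steep slope on y,
# centred at y = 0.5, so sigmoid(w.x + b) > 0.5 exactly when y > 0.5.
w = np.array([0.0, 20.0])  # [weight for x, weight for y]
b = -10.0                  # places the decision boundary at y = 0.5

points = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
probs = sigmoid(points @ w + b)
labels = (probs > 0.5).astype(int)
print(labels)  # [0 0 1 1]
```

So in principle the problem is solvable by this architecture; the question is why training does not find such weights.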
import tensorflow.keras as keras
import numpy as np
train_data = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
train_labels = np.array([0, 0, 1, 1], dtype=float)
model = keras.models.Sequential()
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(1, input_dim=2, activation='sigmoid'))
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_data, train_labels, epochs=5)
Training:
Epoch 1/5
4/4 [==============================] - 1s 150ms/step - loss: 0.4885 - acc: 0.7500
Epoch 2/5
4/4 [==============================] - 0s 922us/step - loss: 0.4880 - acc: 0.7500
Epoch 3/5
4/4 [==============================] - 0s 435us/step - loss: 0.4875 - acc: 0.7500
Epoch 4/5
4/4 [==============================] - 0s 396us/step - loss: 0.4869 - acc: 0.7500
Epoch 5/5
4/4 [==============================] - 0s 465us/step - loss: 0.4863 - acc: 0.7500
And the predictions are not good:
predict_data = np.array([[0, 0], [1, 0], [1, 1], [1, 1]], dtype=float)
predict_labels = model.predict(predict_data)
print(predict_labels)
[[0.49750862]
[0.51616406]
[0.774486 ]
[0.774486 ]]
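(For clarity, this is how I turn the raw sigmoid outputs into hard 0/1 labels — a thresholding sketch applied to the values printed above:)

```python
import numpy as np

# Raw sigmoid outputs copied from the printout above
probs = np.array([[0.49750862], [0.51616406], [0.774486], [0.774486]])

# Threshold at 0.5 to get hard class labels
hard_labels = (probs > 0.5).astype(int).ravel()
print(hard_labels)  # [0 1 1 1] -- but (1, 0) should be 0, so it is misclassified
```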
How can I solve this problem?
After all, I tried to train the model on 2000 points (to my mind, more than enough for this simple problem), but with no success...
train_data = np.empty((0, 2), float)
train_labels = np.empty((0, 1), float)
for i in range(0, 1000):
    train_data = np.append(train_data, [[i, 0]], axis=0)
    train_labels = np.append(train_labels, 0)
    train_data = np.append(train_data, [[i, 1]], axis=0)
    train_labels = np.append(train_labels, 1)
model = keras.models.Sequential()
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(1, input_dim=2, activation='sigmoid'))
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_data, train_labels, epochs=5)
Epoch 1/5
2000/2000 [==============================] - 1s 505us/step - loss: 7.9669 - acc: 0.5005
Epoch 2/5
2000/2000 [==============================] - 0s 44us/step - loss: 7.9598 - acc: 0.5010
Epoch 3/5
2000/2000 [==============================] - 0s 45us/step - loss: 7.9511 - acc: 0.5010
Epoch 4/5
2000/2000 [==============================] - 0s 50us/step - loss: 7.9408 - acc: 0.5010
Epoch 5/5
2000/2000 [==============================] - 0s 53us/step - loss: 7.9279 - acc: 0.5015
<tensorflow.python.keras.callbacks.History at 0x7f4bdbdbda90>
Prediction:
predict_data = np.array([[0, 0], [1, 0], [1, 1], [1, 1]], dtype=float)
predict_labels = model.predict(predict_data)
print(predict_labels)
[[0.6280617 ]
[0.48020774]
[0.8395983 ]
[0.8395983 ]]
A prediction of 0.6280617 for (0, 0) is very bad.