(1) (e, x), y = d
What you are doing here is essentially unpacking the values from the variable d into the tuple (e, x) and the variable y. This only works if the structure of d matches the structure of the left-hand side.
For instance, suppose d = ((3, 3), 4). Then:
d = ((3, 3), 4)
(e, x), y = d
print(e) # prints 3
print(x) # prints 3
print(y) # prints 4
Simpler examples would be:
a, b = 2, 3
print(a) # prints 2
print(b) # prints 3
(x, y, z) = (10, 20, 30)
print(x) # prints 10
print(y) # prints 20
print(z) # prints 30
The only difference is that in your example the right-hand side is a nested tuple, so the left-hand side has to mirror that nesting exactly; otherwise Python raises a ValueError, as shown in the sketch below.
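For illustration, here is a minimal sketch (reusing the same d as above) of what happens when the pattern on the left does not mirror the nesting:
d = ((3, 3), 4)
e, x, y = d # ValueError: not enough values to unpack (expected 3, got 2)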
(2) yhat = atom_count @ w + b
The @ operator performs matrix multiplication; it was added in Python 3.5 (PEP 465).
Here you are essentially declaring a variable yhat that is equal to the matrix product of atom_count and w, plus b. Note that @ binds more tightly than +, so the expression is evaluated as (atom_count @ w) + b. For instance:
import numpy as np
A = np.array([
    [1, 2],
    [3, 4]
])
B = np.array([
    [4, 5],
    [6, 7]
])
print(A @ B) # prints [[16 19]
             #         [36 43]]
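In the question's snippet, yhat = atom_count @ w + b is the usual linear-model prediction. A minimal sketch, with hypothetical shapes assumed purely for illustration (atom_count as a samples-by-features array, w as a weight vector, b as a scalar bias):
import numpy as np

atom_count = np.array([[1.0, 2.0],
                       [3.0, 0.0]])  # 2 samples, 2 features (hypothetical values)
w = np.array([0.5, -1.0])            # one weight per feature
b = 0.1                              # scalar bias

yhat = atom_count @ w + b            # (atom_count @ w) is computed first, then b is added to each entry
print(yhat) # prints [-1.4  1.6]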
(3) baseline_val_loss = [0.0 for _ in range(epochs)]
This is called a list comprehension. It is a concise way to build a list instead of writing a verbose for loop. Here you are initializing a list called baseline_val_loss containing epochs zeros.
If epochs = 10, you get a Python list with 10 zeros. In Python 3, range(epochs) yields the integers from 0 to epochs - 1. The name _ is used for the loop variable because you simply do not care about the current loop value; you won't do anything with it.
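The comprehension expands to the same thing as this plain for loop (epochs = 10 is just an example value):
epochs = 10
baseline_val_loss = []
for _ in range(epochs):           # runs epochs times; the loop value itself is ignored
    baseline_val_loss.append(0.0)
print(baseline_val_loss) # prints [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]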
As an aside, the same list can be built more concisely with baseline_val_loss = [0.0] * epochs (safe here because floats are immutable).