In the last layer of our NN we were told to use softmax, but we apply it to a single float (one output value).

According to the softmax definition in TensorFlow, it makes sense that it will return 1 every time if we feed it a single float:

softmax = exp(logits) / reduce_sum(exp(logits), dim)

But if the output is always 1, the gradient is zero and the network can never learn.
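A minimal sketch confirming this (using NumPy rather than TensorFlow, just to illustrate the formula above):

```python
import numpy as np

def softmax(logits):
    # softmax = exp(logits) / sum(exp(logits))
    e = np.exp(logits - np.max(logits))  # shift for numerical stability
    return e / e.sum()

# A single logit always normalizes to 1, whatever its value:
print(softmax(np.array([0.7])))   # [1.]
print(softmax(np.array([-3.2])))  # [1.]

# Only with two or more logits does the output vary:
print(softmax(np.array([0.7, -3.2])))
```

So with one output unit, softmax collapses to a constant, no matter what the logit is.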

Can someone please help me solve this problem?