You can cast a float32 tensor to float64 and vice versa.

E.g. using:

'tensor = tf.cast(tensor, tf.float32)'
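If the tensor has already been pulled out as a NumPy array (as it typically has been by the time it reaches np.random.multinomial), the analogous cast is astype. A minimal sketch, not TF-specific:

```python
import numpy as np

# NumPy analogue of tf.cast: astype changes the dtype while keeping
# the values, here widening float32 to float64.
x32 = np.array([0.1, 0.2, 0.7], dtype=np.float32)
x64 = x32.astype(np.float64)

print(x64.dtype)  # float64
```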

I ran into the same problem. Can you expand a bit more on how to correct the precision to be X-bit?

The precision of what?

And where exactly do I correct it?

All my variables and placeholders are initialized with type float32.

I didn't find a way to force the softmax to return a 32-bit result.

Also, I tried to normalize the vector by dividing by its sum, but for some reason the sum is still a bit bigger than 1…

So if the line that causes the error uses Y-bit precision, make sure that what you feed to it is normalized in that precision.

All my placeholders and variables are float32.

And by normalizing, do you mean dividing by the sum (to guarantee a sum of 1), or subtracting the mean and then dividing by the std?

I've tried the former and it doesn't work: the sum is in fact 1, but I still get this error.

thanks in advance!

This is what I did in my implementation when I had this issue, and it worked for me.
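For reference, a minimal sketch of that kind of fix: cast the softmax output up to float64 and divide by its sum again before sampling. (This is illustrative code, not necessarily what the poster above used.)

```python
import numpy as np

# A probability vector normalized in float32 can still fail
# np.random.multinomial's sum check, which happens at float64
# precision. Casting up and renormalizing shrinks the residual
# error to float64 machine epsilon.
probs = np.random.rand(6).astype(np.float32)
probs /= probs.sum()                 # normalized, but only in float32

probs64 = probs.astype(np.float64)
probs64 /= probs64.sum()             # renormalized in float64

sample = np.random.multinomial(1, probs64)
```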

When using tf.nn.softmax in my agent and np.random.multinomial on the result, once in maybe 10,000 episodes I get this error:

in mtrand.RandomState.multinomial (numpy/random/mtrand/mtrand.c:37769)

ValueError: sum(pvals[:-1]) > 1.0

I guess it's because the result of tf.nn.softmax doesn't sum exactly to 1 in that case for some reason.

Is there any way to fix this issue?

Would a try/catch be the best option? Or should I normalize again just in case (or after checking whether it sums to 1)?
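Both options can be combined in a small helper. A sketch (safe_multinomial is a hypothetical name, not a NumPy function) that renormalizes in float64 first and keeps a try/except as a fallback:

```python
import numpy as np

def safe_multinomial(n, pvals):
    """Hypothetical wrapper: sample from a multinomial whose pvals
    may not sum to exactly 1 due to float32 rounding."""
    p = np.asarray(pvals, dtype=np.float64)
    p = p / p.sum()                      # renormalize in float64
    try:
        return np.random.multinomial(n, p)
    except ValueError:
        # Residual rounding error: fold the excess into the last
        # entry, which multinomial treats as the remainder anyway.
        p[-1] = max(0.0, 1.0 - p[:-1].sum())
        return np.random.multinomial(n, p)

# e.g. on a float32 softmax output:
counts = safe_multinomial(1, np.float32([0.1, 0.2, 0.7]))
```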

thanks!
