memo: TF | Print Tensors

TF 1.x

enable_eager_execution()

Invoke tf.enable_eager_execution() at program startup; after that you can read tensor values with the .numpy() method. ¹

import tensorflow as tf
tf.compat.v1.enable_eager_execution()
# preds is assumed to be a logits tensor defined elsewhere
pred_id = tf.multinomial(tf.exp(preds), num_samples=1)[0][0].numpy()

(Eager execution is enabled by default in TensorFlow 2.0.)
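In TF 2.x the same .numpy() call therefore works without any setup (a minimal sketch):

import tensorflow as tf  # TF 2.x
t = tf.constant([[1.0, 2.0]])
print(t.numpy())  # [[1. 2.]]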

tf.get_static_value(x)

Answer from ¹. Link to docs

tf.get_static_value(tensor, partial=False)
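A quick sketch of its behavior: it returns the tensor's value as a NumPy array when that value can be computed statically (e.g. constants, or eager tensors), and None otherwise:

import tensorflow as tf
a = tf.constant([10, 20])
print(tf.get_static_value(a))   # [10 20], as a NumPy array
# For a value only known at run time (e.g. a v1 placeholder), it returns None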

Create sess

Without eager execution enabled, a tensor can be evaluated inside a session. The session can be created as a runtime context via with tf.Session() as sess:, or installed as the default session with InteractiveSession().

The value of a Tensor can only be obtained after calling .eval(), because a TF 1.x Tensor is lazily evaluated. ²

import tensorflow as tf

a = tf.constant([1, 1.5, 2.5], dtype=tf.float32)
b = tf.constant([1, -2, 3], dtype=tf.float32)
c = a * b

with tf.Session() as sess:
    result = c.eval()   # element-wise product: [ 1.  -3.   7.5]

print(result)
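Under the hood, c.eval() is shorthand for calling run(c) on the default session, so print(sess.run(c)) inside the with block would print the same array.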

Use InteractiveSession()

Initialize an InteractiveSession as the default session. The following snippet is from ².

import tensorflow as tf
a = tf.constant([1, 1])
b = tf.constant([2, 2])
c = tf.add(a, b)        # a dense Tensor holding the full result

sess = tf.InteractiveSession()  # v1.15; becomes the default session
print("a[0]=%s, a[1]=%s" % (a[0].eval(), a[1].eval()))
print("c = %s" % c.eval())      # c = [3 3]
sess.close()

Session.run()

Get the value of a variable via Session.run() ⁶ ⁴

x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
bias = tf.Variable(1.0)
y_pred = x**2 + bias
loss = (y - y_pred)**2
print('Loss(x,y) = %.3f' % sess.run(loss, {x: 3.0, y: 9.0})) # sess was created earlier in the program

Error:

FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Attempting to use uninitialized value Variable_1
 [[{{node Variable_1/read}}]]
 [[pow_3/_33]]
(1) Failed precondition: Attempting to use uninitialized value Variable_1
 [[{{node Variable_1/read}}]]

Cause: bias is a tf.Variable, which needs to be initialized ⁵, e.g.:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss, {x: 3.0, y: 9.0}))

tf.keras.backend.get_value(x)

Answer from DesiKeki⁴. Link to docs

product = tf.matmul(tf.constant([[3.0]]), tf.constant([[4.0]]))  # assumed setup; the original snippet does not define product
print(product)
>>tf.Tensor([[12.]], shape=(1, 1), dtype=float32)
print(tf.keras.backend.get_value(product))
>>[[12.]]

tf.print()

The Python built-in print() does not execute inside a tf.Graph, but tf.print() does. tf_guide

@tf.function
def double(a):
    print("Tracing with", a)        # only be executed when (re)tracing
    tf.print("Executing with", a)   # executed every time the graph is called
    return a + a
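Calling the traced function twice makes the difference visible (a minimal sketch):

x = tf.constant(1)
double(x)   # prints "Tracing with ..." and "Executing with ..."
double(x)   # prints only "Executing with ..." -- the existing trace is reused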

(Deprecated) tf.Print()

The Pythonic print(feat_tensor) prints only the tensor's description, and only once, when the input-function graph is constructed.

tf.Print() is a graph node that splices the "print call" into the graph: it takes the tensor you want to print as input and outputs an identical tensor, which is then passed to later nodes in the graph.

def input_fn(dataset):
  def _fn():
    feat_tensor = tf.constant(dataset.data)
    # feat_tensor is printed and passed through; tensor2 and tensor3 are printed alongside it
    feat_tensor = tf.Print(feat_tensor,
                           data=[feat_tensor, tensor2, tensor3],
                           message="Inputs are: ")
    feat = {feat_name: feat_tensor}

Using tf.Print() in TensorFlow - Yufeng Guo


(2023-04-16)

LambdaCallback doesn’t work

sumNotes

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.callbacks import LambdaCallback
tf.enable_eager_execution()

# Define the model architecture
model = tf.keras.Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10)
])

# Define the loss function and optimizer
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Define a function to save the intermediate tensor
def save_intermediate_tensor(epoch, logs=None):  # LambdaCallback calls this as fn(epoch, logs)
    intermediate_tensor = model.layers[0].output # Replace this with the layer whose output you want to save
    print(intermediate_tensor)  # only show the description once
    # tf.print(intermediate_tensor) # AttributeError: 'Tensor' object has no attribute '_datatype_enum'
    # np.savez(f'vec{epoch}', intermediate_tensor.numpy())  # AttributeError: 'Tensor' object has no attribute 'numpy' (even with eager execution enabled)
    # tf.io.write_file(f'vec{epoch}', tf.strings.as_string(intermediate_tensor))  # rank has to be 0

save_callback = LambdaCallback(on_epoch_end=save_intermediate_tensor)

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalize the pixel values
x_train = x_train[:100] / 255.0

# Compile the model
model.compile(optimizer=optimizer, loss=loss_fn, metrics=['accuracy'])

# Train the model
for epoch in range(10):
    for step, (x_batch_train, y_batch_train) in enumerate(zip(x_train, y_train)):
        x_batch_train = x_batch_train.reshape((-1, 28, 28))
        y_batch_train = y_batch_train.reshape((-1,))
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

    save_callback.on_epoch_end(epoch)
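The failures above come from reading the symbolic model.layers[0].output, which never holds concrete values. A sketch of an alternative that should work (hypothetical, not from the original note): wrap the layer in a feature-extractor model and call it on real data, so the result is an eager tensor:

# Hypothetical alternative: evaluate the layer on concrete data instead of reading the symbolic .output
extractor = tf.keras.Model(inputs=model.inputs, outputs=model.layers[0].output)
vec = extractor(x_train.reshape((-1, 28, 28)))  # concrete eager tensor
np.savez('vec', vec.numpy())                    # now .numpy() is available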

DDG search: ‘Tensor’ object has no attribute ’numpy’

TF 2.0 ‘Tensor’ object has no attribute ’numpy’ while using .numpy() although eager execution enabled by default #27519


TF 2.x

@tf.function

Decorating a Python function with @tf.function compiles it into a dataflow graph; the returned outputs are ordinary eager tensors whose values can be printed. Docs

@tf.function
def foo():
    a = tf.random.uniform([1], maxval=5, seed=1)
    b = tf.random.uniform([1], maxval=5, seed=1)
    return a+b
print(foo())    # a == b (both ops use seed=1), so this prints 2*a
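foo() returns an eager tensor, so print() shows its value directly; foo().numpy() would yield the plain NumPy array (the concrete number depends on the seed).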

Ref

  1. How to print the value of a Tensor object in TensorFlow (在TensorFlow中怎么打印Tensor对象的值)