memo: TF | Save Tensors

Table of contents

Callbacks

I asked ChatGPT: “How to save an intermediate tensor in a TensorFlow model?”.

The tf.keras.callbacks.LambdaCallback class calls a custom function after each epoch, after each batch, or at the beginning/end of training. Docs.

Callbacks are useful to get a view on internal states and statistics of the model during training. Docs: Writing your own callbacks

For an example of using callbacks in model.fit(), refer to [7].

However, the NeRF model is built from low-level operations and does not use model.fit(). The callback functions therefore need to be invoked manually at the appropriate points in the training loop:


Asking ChatGPT: “Please do not use model.fit(). Build the model and callback with low-level operations. Please give an example of using callbacks with TensorFlow 1.15.”

import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.callbacks import LambdaCallback
tf.enable_eager_execution()

# Define the model architecture
model = tf.keras.Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10)
])

# Define the loss function and optimizer
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Define the learning rate decay schedule
def lr_decay(epoch, lr):
    if epoch % 10 == 0 and epoch > 0:
        return lr * 0.1
    else:
        return lr

# Define the callback to update the learning rate
lr_callback = LambdaCallback(on_epoch_end=lambda epoch, logs: optimizer.lr.assign(lr_decay(epoch, optimizer.lr)))

# Define a function to log the intermediate tensor
# (note: model.layers[2].output is a symbolic tensor, and tf.summary
# also needs a default summary writer, which is missing here)
def save_intermediate_tensor(batch, logs):
    intermediate_tensor = model.layers[2].output
    tf.summary.scalar('intermediate_tensor', intermediate_tensor, step=batch)
save_callback = LambdaCallback(on_batch_end=save_intermediate_tensor)

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalize the pixel values (using a 100-sample subset)
x_train = x_train[:100] / 255.0
y_train = y_train[:100]
x_test = x_test[:100] / 255.0
y_test = y_test[:100]

# Compile the model for model.evaluate()
model.compile(optimizer=optimizer, loss=loss_fn, metrics=['accuracy'])

# Train the model
for epoch in range(10):
    for step, (x_batch_train, y_batch_train) in enumerate(zip(x_train, y_train)):
        x_batch_train = x_batch_train.reshape((-1, 28, 28))
        y_batch_train = y_batch_train.reshape((-1,))
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Call the save callback at the end of each batch
        save_callback.on_batch_end(step, {})

    # Call the learning rate callback at the end of each epoch
    lr_callback.on_epoch_end(epoch, {'lr': optimizer.lr.numpy()})

    # Evaluate the model on the test set
    test_loss, test_acc = model.evaluate(x_test, y_test)
    print('Epoch {}, Test Loss: {:.4f}, Test Accuracy: {:.4f}'.format(epoch+1, test_loss, test_acc))

Error: couldn’t find summary writer

NotFoundError: Resource localhost/logdir:./logdir/exp1/N10tensorflow22SummaryWriterInterfaceE does not exist. [Op:FlushSummaryWriter]
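This NotFoundError typically means no default summary writer was registered before calling tf.summary.scalar. A minimal sketch of the fix (TF 2.x-style API; in TF 1.15 the same calls live under tf.compat.v2.summary), assuming a hypothetical ./logdir/exp1 log directory:

```python
import tensorflow as tf

# Create a file writer for the (hypothetical) log directory
writer = tf.summary.create_file_writer('./logdir/exp1')

# tf.summary.scalar only accepts scalar values, so reduce the
# intermediate tensor first (e.g. to its mean)
intermediate = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# All tf.summary calls inside this context go to `writer`
with writer.as_default():
    tf.summary.scalar('intermediate_tensor_mean',
                      tf.reduce_mean(intermediate), step=0)
writer.flush()
```

With the writer registered as default, the scalar lands in an events file under ./logdir/exp1 and can be viewed in TensorBoard.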

tf.io.write_file()

How to save the value of a tensor in TensorFlow, found via [1]:

one_string = tf.strings.as_string(tensor)
tf.io.write_file(filename, one_string)
  • Problem: tf.io.write_file() requires its contents argument to be a scalar (rank-0) string, while tf.strings.as_string() converts elementwise and keeps the original rank.
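Instead of converting elementwise, tf.io.serialize_tensor packs a tensor of any rank into a single scalar string, which is exactly what tf.io.write_file() expects. A minimal sketch (the file name tensor.bin is arbitrary):

```python
import tensorflow as tf

# A rank-2 tensor; serialize_tensor handles any rank
tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Pack the whole tensor into one scalar string and write it
serialized = tf.io.serialize_tensor(tensor)
tf.io.write_file('tensor.bin', serialized)

# Read it back and parse with the original dtype
restored = tf.io.parse_tensor(tf.io.read_file('tensor.bin'),
                              out_type=tf.float32)
```

Note that the dtype is not stored in the file, so it must be supplied to tf.io.parse_tensor on read.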

tf.io.TFRecordWriter

How do you save a TensorFlow dataset to a file? Found via [1].

  • This approach is intended for saving an (x, y) dataset.
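A minimal sketch of writing (x, y) pairs to a TFRecord file with tf.io.TFRecordWriter, assuming eager execution; the pairs and the file name data.tfrecord are hypothetical:

```python
import tensorflow as tf

# Hypothetical (x, y) pairs to store
xs = [tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0])]
ys = [0, 1]

# Each pair becomes one tf.train.Example record
with tf.io.TFRecordWriter('data.tfrecord') as writer:
    for x, y in zip(xs, ys):
        example = tf.train.Example(features=tf.train.Features(feature={
            'x': tf.train.Feature(bytes_list=tf.train.BytesList(
                value=[tf.io.serialize_tensor(x).numpy()])),
            'y': tf.train.Feature(int64_list=tf.train.Int64List(value=[y])),
        }))
        writer.write(example.SerializeToString())
```

The records can be read back with tf.data.TFRecordDataset and decoded with tf.io.parse_tensor.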

io_ops._save()

Is there a way to save an intermediate output in TensorFlow to a file? Found via [2].

  • io_ops._save() is a private API and may have been deprecated.

Create a new model

Obtaining the output of an intermediate layer in TensorFlow/Keras, found via [2].

Get values of KerasTensor

import tensorflow as tf

def init_nerf_model(input_ch=63, input_ch_views=27, use_viewdirs=True):
    ...
    print(outputs.numpy()) # AttributeError: 'KerasTensor' object has no attribute 'numpy'

    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    return model

model = init_nerf_model(input_ch=63, input_ch_views=27, use_viewdirs=True)
# model.layers  # Check the memory address of each layer

temp_model = tf.keras.Model(model.input, model.layers[11].output)
temp_model.summary() # Print a description of each layer

Original NeRF model summary:

Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input_1 (InputLayer)           [(None, 90)]         0           []                               
 tf.split (TFOpLambda)          [(None, 63),         0           ['input_1[0][0]']                
                                 (None, 27)]                                                      
 dense (Dense)                  (None, 256)          16384       ['tf.split[0][0]']               
 dense_1 (Dense)                (None, 256)          65792       ['dense[0][0]']                  
 dense_2 (Dense)                (None, 256)          65792       ['dense_1[0][0]']                
 dense_3 (Dense)                (None, 256)          65792       ['dense_2[0][0]']                
 dense_4 (Dense)                (None, 256)          65792       ['dense_3[0][0]']                
 tf.concat (TFOpLambda)         (None, 319)          0           ['tf.split[0][0]',               
                                                                  'dense_4[0][0]']                
 dense_5 (Dense)                (None, 256)          81920       ['tf.concat[0][0]']              
 dense_6 (Dense)                (None, 256)          65792       ['dense_5[0][0]']                
 dense_7 (Dense)                (None, 256)          65792       ['dense_6[0][0]']                
 dense_9 (Dense)                (None, 256)          65792       ['dense_7[0][0]']                
 tf.concat_1 (TFOpLambda)       (None, 283)          0           ['dense_9[0][0]',                
                                                                  'tf.split[0][1]']               
 dense_10 (Dense)               (None, 128)          36352       ['tf.concat_1[0][0]']            
 dense_11 (Dense)               (None, 3)            387         ['dense_10[0][0]']               
 dense_8 (Dense)                (None, 1)            257         ['dense_7[0][0]']                
 tf.concat_2 (TFOpLambda)       (None, 4)            0           ['dense_11[0][0]',               
                                                                  'dense_8[0][0]']                
==================================================================================================
Total params: 595,844
Trainable params: 595,844
Non-trainable params: 0

Only keep the MLP part:

Model: "model_1"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input_1 (InputLayer)           [(None, 90)]         0           []                               
 tf.split (TFOpLambda)          [(None, 63),         0           ['input_1[0][0]']                
                                 (None, 27)]                                                      
 dense (Dense)                  (None, 256)          16384       ['tf.split[0][0]']               
 dense_1 (Dense)                (None, 256)          65792       ['dense[0][0]']                  
 dense_2 (Dense)                (None, 256)          65792       ['dense_1[0][0]']                
 dense_3 (Dense)                (None, 256)          65792       ['dense_2[0][0]']                
 dense_4 (Dense)                (None, 256)          65792       ['dense_3[0][0]']                
 tf.concat (TFOpLambda)         (None, 319)          0           ['tf.split[0][0]',               
                                                                  'dense_4[0][0]']                
 dense_5 (Dense)                (None, 256)          81920       ['tf.concat[0][0]']              
 dense_6 (Dense)                (None, 256)          65792       ['dense_5[0][0]']                
 dense_7 (Dense)                (None, 256)          65792       ['dense_6[0][0]']                
 dense_9 (Dense)                (None, 256)          65792       ['dense_7[0][0]']                
                                                                                                  
==================================================================================================
Total params: 558,848
Trainable params: 558,848
Non-trainable params: 0
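Putting the pieces together: build a sub-model up to the layer of interest, call it on concrete inputs to obtain real values (not symbolic KerasTensors), then serialize to disk. The model below is a small hypothetical stand-in for the NeRF MLP, not the original architecture:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model (the real NeRF MLP has 90 input dims)
inputs = tf.keras.Input(shape=(90,))
h = tf.keras.layers.Dense(256, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(4)(h)
model = tf.keras.Model(inputs, outputs)

# Sub-model that stops at the intermediate layer of interest
temp_model = tf.keras.Model(model.input, model.layers[1].output)

# Calling the sub-model on concrete data yields concrete values
x = np.random.rand(8, 90).astype(np.float32)
intermediate = temp_model(x)  # an eager tensor of shape (8, 256)

# Save to disk via serialize_tensor (works for any rank)
tf.io.write_file('mlp_features.bin', tf.io.serialize_tensor(intermediate))
```

For the actual NeRF model, the same pattern applies with model.layers[11].output (dense_9) as the sub-model output.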

Ref
