When generating random numbers, TensorFlow calls `_pywrap_tensorflow.TFE_Py_FastPathExecute()`.

DDG search: “_pywrap_tensorflow.TFE_Py_FastPathExecute()”
- Similar function: "Tile" (Question about how TensorFlow API link with C++ code - reddit). The TF API name `Tile` is mapped to a C++ class or function name through a registration table; a hedged sketch of that mapping follows below.
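A minimal sketch of what that name-to-class "table" amounts to, using the standard custom-op registration macros (`REGISTER_OP`, `REGISTER_KERNEL_BUILDER`); the op name `MyTile` and the class `MyTileOp` are made up for illustration and are not TensorFlow's actual Tile implementation:

```cpp
// Hedged sketch: how an op-name string gets bound to a C++ kernel class.
// "MyTile" / MyTileOp are hypothetical; the real Tile op lives elsewhere.
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

// Adds the name "MyTile" (with its signature) to the global op registry.
REGISTER_OP("MyTile")
    .Input("input: T")
    .Input("multiples: int32")
    .Output("output: T")
    .Attr("T: type");

// The C++ class that runs when an op with that name is executed.
class MyTileOp : public OpKernel {
 public:
  explicit MyTileOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
  void Compute(OpKernelContext* ctx) override {
    // Real tiling logic omitted; just pass the input through.
    ctx->set_output(0, ctx->input(0));
  }
};

// The "table" entry: op name string -> kernel class, per device type.
REGISTER_KERNEL_BUILDER(Name("MyTile").Device(DEVICE_CPU), MyTileOp);
```

When the Python layer passes an op name string like "Tile" to the runtime, it is these registry entries that decide which C++ class gets instantiated and executed.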
- Similar function "MatMul" (Where can I find exactly how Tensorflow does matrix multiplication? - reddit):

```python
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
    _ctx._context_handle, _ctx._eager_context.device_name, "MatMul", name,
    _ctx._post_execution_callbacks, a, b, "transpose_a", transpose_a,
    "transpose_b", transpose_b)
```

- Run `nm --demangle` on `_pywrap_tensorflow_internal.so`
- grep for MatMul, and get: `tensorflow::SparseMatMulOp`
- file: "tensorflow/tensorflow/core/kernels/sparse_matmul_op.cc" (code)
- "Looking at how Python calls into C++" ("tensorflow 二次开发" [TensorFlow secondary development] - 沉思语录, 2019-02-27). Take matmul as an example:

```bash
me@Server:~$ cd /mnt/Server/anaconda3/envs/nerf/lib/python3.7/site-packages/tensorflow_core/python
me@Server:~/anaconda3/envs/nerf/lib/python3.7/site-packages/tensorflow_core/python$ grep -rni "tf_export.*matmul"  # the Python function must be exported with tf_export
ops/math_ops.py:2565:@tf_export("linalg.matmul", "matmul")
ops/math_ops.py:2859:tf_export(v1=["sparse_matmul"])(sparse_matmul)
ops/gen_nn_ops.py:10155:tf_export("raw_ops.QuantizedMatMulWithBias")(QuantizedMatMulWithBias)
ops/gen_nn_ops.py:10306:tf_export("raw_ops.QuantizedMatMulWithBiasAndRelu")(QuantizedMatMulWithBiasAndRelu)
ops/gen_nn_ops.py:10471:tf_export("raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize")(QuantizedMatMulWithBiasAndReluAndRequantize)
ops/gen_sparse_ops.py:3078:tf_export("raw_ops.SparseTensorDenseMatMul")(SparseTensorDenseMatMul)
ops/gen_linalg_ops.py:2531:tf_export("raw_ops.TridiagonalMatMul")(TridiagonalMatMul)
ops/linalg/linalg_impl.py:552:@tf_export('linalg.tridiagonal_matmul')
ops/sparse_ops.py:2188:@tf_export("sparse.sparse_dense_matmul",
ops/gen_math_ops.py:1618:tf_export("raw_ops.BatchMatMul")(BatchMatMul)
ops/gen_math_ops.py:1726:tf_export("raw_ops.BatchMatMulV2")(BatchMatMulV2)
ops/gen_math_ops.py:6150:tf_export("raw_ops.MatMul")(MatMul)
ops/gen_math_ops.py:7610:tf_export("raw_ops.QuantizedMatMul")(QuantizedMatMul)
ops/gen_math_ops.py:10010:tf_export("raw_ops.SparseMatMul")(SparseMatMul)
```
- Read the usage description at math_ops.py:2565. It calls `gen_math_ops.batch_mat_mul` or `gen_math_ops.mat_mul`.
- Go to `tensorflow/python/ops/gen_math_ops.py` (this file may be generated at compile time).
- The function `batch_mat_mul` calls:

```python
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
    _ctx._context_handle, _ctx._thread_local_data.device_name, "BatchMatMul",
    name, _ctx.post_execution_callbacks, x, y, "adj_x", adj_x, "adj_y", adj_y)
```
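That Python snippet hands the op name string "BatchMatMul" to the eager runtime. For comparison, here is a hedged sketch of the public eager C API that drives the same runtime; this is not the exact internal call chain of `TFE_Py_FastPathExecute`, just the documented C entry points, with error handling omitted for brevity:

```cpp
// Hedged sketch: executing "BatchMatMul" through the eager C API.
#include "tensorflow/c/eager/c_api.h"

void RunBatchMatMul(TFE_Context* ctx, TFE_TensorHandle* x,
                    TFE_TensorHandle* y, TF_Status* status) {
  TFE_Op* op = TFE_NewOp(ctx, "BatchMatMul", status);  // look up the op by name
  TFE_OpAddInput(op, x, status);
  TFE_OpAddInput(op, y, status);
  TFE_OpSetAttrBool(op, "adj_x", 0);
  TFE_OpSetAttrBool(op, "adj_y", 0);
  TFE_TensorHandle* result = nullptr;
  int num_retvals = 1;
  TFE_Execute(op, &result, &num_retvals, status);       // dispatch to the kernel
  TFE_DeleteOp(op);
  // result now holds the product; the caller owns it.
}
```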
- So the Op registered on the C++ side should be "BatchMatMul"; a hedged sketch of how that string is resolved against the op registry follows below.
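One way to see where that string lands is to query the global op registry, which is the table that `REGISTER_OP` populates. A minimal sketch is below; the wrapper function is invented for illustration, while `OpRegistry::Global()->LookUpOpDef` is the actual TensorFlow API:

```cpp
// Hedged sketch: resolve the op name "BatchMatMul" to its registered OpDef.
#include <iostream>
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_def.pb.h"

void ShowBatchMatMulOpDef() {
  const tensorflow::OpDef* op_def = nullptr;
  tensorflow::Status s =
      tensorflow::OpRegistry::Global()->LookUpOpDef("BatchMatMul", &op_def);
  if (s.ok()) {
    // Prints the registered signature (inputs, outputs, attrs).
    std::cout << op_def->DebugString() << std::endl;
  }
}
```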
- Search all the places registering this Op by grepping for the op definition in the source repo:

```bash
# Cannot find anything in the python packages installed by conda:
# yi@PC:/mnt/Server/anaconda3/pkgs/tensorflow-base-1.15.0-gpu_py37h9dcbed7_0$ grep -rni "REGISTER_OP(\"MatMul\")"
# yi@PC:/mnt/Server/anaconda3/envs/nerf/lib/python3.7/site-packages/tensorflow_core$ grep -rni "REGISTER_OP(\"MatMul\")"
yi@PC:~/Downloads/tensorflow_1.15$ grep -rni "REGISTER_OP(\"MatMul\")"
tensorflow/core/ops/math_ops.cc:946:REGISTER_OP("MatMul")
tensorflow/compiler/mlir/tfr/resources/decomposition_lib.mlir:83:// REGISTER_OP("MatMul")
tensorflow/c/experimental/ops/README.md:15:since `REGISTER_OP("MatMul")` appears in ***core/math_ops.cc***, the "MatMul"
```
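For orientation, the hit at tensorflow/core/ops/math_ops.cc:946 declares the op's interface only. The sketch below is a rough paraphrase from memory, not a verbatim copy; the exact attribute list and type constraints should be checked against the actual file:

```cpp
// Rough paraphrase of the MatMul op registration: it declares inputs,
// outputs, attrs, and shape inference, but not the computation itself.
#include "tensorflow/core/framework/common_shape_fns.h"
#include "tensorflow/core/framework/op.h"

namespace tensorflow {

REGISTER_OP("MatMul")
    .Input("a: T")
    .Input("b: T")
    .Output("product: T")
    .Attr("transpose_a: bool = false")
    .Attr("transpose_b: bool = false")
    .Attr("T: {bfloat16, half, float, double, int32, complex64, complex128}")  // exact list may differ
    .SetShapeFn(shape_inference::MatMulShape);

}  // namespace tensorflow
```

The computation itself lives in the kernel files found in the next step.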
- Search the kernel implementation of this Op:

```bash
yi@PC:~/Downloads/tensorflow_1.15$ grep -rni "Name(\"MatMul\")"
tensorflow/core/transforms/remapper/tests/contraction.mlir:38: %MatMul, %ctl_1 = MatMul(%Placeholder, %Const) device("/device:CPU:0") name("MatMul") {T = f32, transpose_a = false, transpose_b = false} : (tensor<*xf32>, tensor<*xf32>) -> (tensor<*xf32>)
tensorflow/core/transforms/remapper/tests/onednn_contraction.mlir:76: %MatMul, %ctl_1 = MatMul(%Placeholder, %Const) device("/device:CPU:0") name("MatMul") {T = f32, transpose_a = false, transpose_b = false} : (tensor<*xf32>, tensor<*xf32>) -> (tensor<*xf32>)
tensorflow/core/grappler/utils/pattern_utils_test.cc:42: auto matmul = ops::MatMul(s.WithOpName("matmul"), input, weight);
tensorflow/core/grappler/optimizers/remapper_test.cc:1225: auto matmul = ops::MatMul(s.WithOpName("matmul"), lhs, rhs);
tensorflow/core/grappler/optimizers/remapper_test.cc:1433: auto matmul = ops::MatMul(s.WithOpName("matmul"), lhs, rhs);
tensorflow/core/grappler/optimizers/remapper_test.cc:1610: auto matmul = ops::MatMul(s.WithOpName("matmul"), lhs, rhs);
tensorflow/core/grappler/optimizers/mkl_remapper_test.cc:466: auto matmul = ops::MatMul(s.WithOpName("matmul"), input, filter);
tensorflow/core/grappler/optimizers/mkl_remapper_test.cc:667: auto matmul = ops::MatMul(s.WithOpName("matmul"), lhs, rhs);
tensorflow/core/grappler/optimizers/constant_folding_test.cc:2909: Output matmul = ops::MatMul(scope.WithOpName("matmul"), a, b);
tensorflow/core/grappler/optimizers/arithmetic_optimizer_test.cc:1155: auto matmul_op = s.WithOpName("matmul");
tensorflow/core/grappler/optimizers/arithmetic_optimizer_test.cc:1227: Output matmul = ops::BatchMatMul(s.WithOpName("matmul"), trans_a, trans_b);
tensorflow/core/grappler/costs/analytical_cost_estimator_test.cc:79: auto matmul = ops::MatMul(s.WithOpName("matmul"), flat, w2);
tensorflow/core/kernels/matmul_op_test.cc:107: root.WithOpName("matmul"),
tensorflow/core/kernels/matmul_op_test.cc:126: root.WithOpName("matmul"),
tensorflow/core/kernels/mkl/mkl_fused_ops_test.cc:931: Output next_op = ops::MatMul(root.WithOpName("matmul"), input_op,
tensorflow/core/kernels/matmul_op_impl.h:881: Name("MatMul").Device(DEVICE_CPU).TypeConstraint<TYPE>("T"), \
tensorflow/core/kernels/matmul_op_impl.h:892: Name("MatMul").Device(DEVICE_GPU).TypeConstraint<TYPE>("T"), \
tensorflow/core/framework/op_kernel_test.cc:1062:REGISTER_KERNEL_BUILDER(Name("MatMul").Device(DEVICE_CPU), DummyKernel);
tensorflow/compiler/tf2tensorrt/convert/convert_nodes_test.cc:427: auto matmul = ops::MatMul(s.WithOpName("matmul"), feed, const_1);
tensorflow/compiler/tf2xla/kernels/matmul_op.cc:102:REGISTER_XLA_OP(Name("MatMul").TypeConstraint("T", kMatmulTypes), MatMulOp);
```
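The relevant hits are the two lines in tensorflow/core/kernels/matmul_op_impl.h, which sit inside a registration macro expanded once per (device, dtype) pair. The sketch below illustrates that pattern; the macro name and the placeholder kernel class `SketchMatMulOp` are made up here, whereas `REGISTER_KERNEL_BUILDER`, `Name`, `Device`, and `TypeConstraint` are the real registration API, and the real header registers TensorFlow's actual MatMul kernel class (templated on device and dtype):

```cpp
// Hedged sketch of the per-dtype kernel registration behind
// matmul_op_impl.h:881/892. SketchMatMulOp stands in for the real kernel.
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

template <typename T>
class SketchMatMulOp : public OpKernel {
 public:
  explicit SketchMatMulOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
  void Compute(OpKernelContext* ctx) override {
    // Real GEMM omitted; placeholder standing in for the kernel body.
    ctx->set_output(0, ctx->input(0));
  }
};

// TYPE becomes the TypeConstraint on attr "T", so the runtime can pick a
// different compiled kernel for each dtype (and, in the real header, device).
#define REGISTER_SKETCH_MATMUL_CPU(TYPE)                            \
  REGISTER_KERNEL_BUILDER(                                          \
      Name("MatMul").Device(DEVICE_CPU).TypeConstraint<TYPE>("T"),  \
      SketchMatMulOp<TYPE>)

REGISTER_SKETCH_MATMUL_CPU(float);
REGISTER_SKETCH_MATMUL_CPU(double);
```

So for a float MatMul on CPU, the class whose `Compute()` runs is whichever kernel class the real header binds to `Name("MatMul").Device(DEVICE_CPU).TypeConstraint<float>("T")`.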
- DDG search: “How to read tensorflow c++ source code”