Using TensorFlow with GPU within RMarkdown
TensorFlow settings for using a local GPU
I have set up the GPU on my workstation, and TensorFlow is able to access the GPU. You may refer to my GPU installation guide.
However, when I tried to use TensorFlow under RMarkdown, it reported the following error:
```
2022-02-21 08:44:09.486498: E tensorflow/stream_executor/cuda/cuda_blas.cc:226] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
2022-02-21 08:44:09.486805: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at matmul_op_impl.h:442 : INTERNAL: Attempting to perform BLAS operation using StreamExecutor without BLAS support
```
The main error appears to be "failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED". The second error, "Attempting to perform BLAS operation using StreamExecutor without BLAS support", is likely a consequence of the first.
I googled but couldn't find any working solution except for this link. It seems the problem is caused by how GPU memory is allocated. You can configure TensorFlow to allocate GPU memory dynamically using the following Python code.
- tf.config.list_physical_devices(): returns a list of physical devices
- tf.config.experimental.set_memory_growth(): enables dynamic memory allocation for a device
```python
import tensorflow as tf

# set_memory_growth() takes a single device, so loop over the GPUs
gpus = tf.config.list_physical_devices(device_type = 'GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```
Now the question is how to incorporate the above Python code into the RMarkdown file. Thanks to the reticulate package and its source_python() function, I can use the following R code chunk within the RMarkdown file, where gpusettings.py contains the Python code above.
```r
library(reticulate)
use_python("C:\\ProgramData\\Anaconda3\\")
source_python("gpusettings.py")
```
There are two benefits of doing so:
- the GPU settings live in a separate file that can be reused elsewhere
- everything stays within one R code chunk, i.e., there is no need for a separate Python code chunk
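As an aside, TensorFlow also honours the `TF_FORCE_GPU_ALLOW_GROWTH` environment variable, which should enable the same dynamic allocation without editing any Python code (a sketch worth verifying against your TensorFlow version; the variable must be set before TensorFlow is imported):

```python
import os

# Must be set BEFORE tensorflow is imported: "true" tells TensorFlow to
# allocate GPU memory on demand rather than grabbing it all up front.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# import tensorflow as tf  # import TensorFlow only after the variable is set
```

Within RMarkdown, `Sys.setenv(TF_FORCE_GPU_ALLOW_GROWTH = "true")` in an R chunk before reticulate loads TensorFlow should have the same effect.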
I hope you will find this useful.