I can't get TensorFlow to see my GPU. I am using an Optimus setup; TensorFlow does not use the GPU, but CUDA does.
nvidia-smi shows:
[user@system bal]$ optirun nvidia-smi
Mon Mar 6 13:24:05 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 378.13 Driver Version: 378.13 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro K1100M Off | 0000:01:00.0 Off | N/A |
| N/A 40C P0 N/A/N/A | 7MiB/1999MiB | 2% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1847 G /usr/lib/xorg-server/Xorg 7MiB |
+-----------------------------------------------------------------------------+
CUDA sees the GPU and shows my card. Here is the deviceQuery output:
[user@system release]$ optirun ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "Quadro K1100M"
CUDA Driver Version/Runtime Version 8.0/8.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 1999 MBytes (2096300032 bytes)
(2) Multiprocessors, (192) CUDA Cores/MP: 384 CUDA Cores
GPU Max Clock rate: 706 MHz (0.71 GHz)
Memory Clock rate: 1400 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 262144 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID/Bus ID/location ID: 0/1/0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = Quadro K1100M
Result = PASS
But when I run the following,
import tensorflow as tf
# Creates a graph.
#with tf.device('/gpu:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
the output suggests that only the CPU is used:
[user@system bal]$ optirun python ex.py
Device mapping:
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0
[[ 22. 28.]
[ 49. 64.]]
This seems to indicate that TensorFlow is not using the GPU. Does TensorFlow even see my GPU? I am on Arch Linux and I assume I have the latest version of everything. Are there things I can check?
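So far the only direct check I know of is asking TensorFlow itself which devices it sees (a minimal sketch; device_lib.list_local_devices() is what I understand to be the standard way to enumerate devices in TF 1.x):

from tensorflow.python.client import device_lib

# Print every device TensorFlow can see; a working GPU setup should list
# a /gpu:0 (or /device:GPU:0) entry in addition to the CPU.
print(device_lib.list_local_devices())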
It's always the same. [user@system bal]$ TF_CPP_MIN_LOG_LEVEL=0 python ex.py I tensorflow/core/common_runtime/gpu/gpu_device.cc:948] Ignoring visible gpu device (device: 0, name: Quadro K1100M, pci bus id: 0000:01:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5. Great. – Carsten
Build it yourself if you want to use 3.0. – etarion
Trying that at the moment, but a TensorFlow build takes around 45 minutes on my system. I hope I gave the correct build parameters. – Carsten
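The source build being suggested looks roughly like this (a sketch of the usual TensorFlow 1.x bazel workflow; the TF_CUDA_COMPUTE_CAPABILITIES value and the flags shown are assumed reasonable parameters, not the exact ones used here):

git clone https://github.com/tensorflow/tensorflow
cd tensorflow
# Answer yes to CUDA support during ./configure; the compute capability
# can be pre-seeded through an environment variable.
TF_CUDA_COMPUTE_CAPABILITIES=3.0 ./configure
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl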