
I am planning to do real-time data augmentation in Caffe. When I use the MemoryDataLayer, I run into "data_ MemoryDataLayer needs to be initialized by calling Reset". These are the steps I have taken so far:

1. Replace the Data layers in the network with MemoryData layers:

name: "test_network" 
layer { 
    name: "cifar" 
    type: "MemoryData" 
    top: "data" 
    top: "label" 
    include { 
    phase: TRAIN 
    } 
    memory_data_param { 
    batch_size: 32 
    channels: 3 
    height: 32 
    width: 32 
    } 

} 
layer { 
    name: "cifar" 
    type: "MemoryData" 
    top: "data" 
    top: "label" 
    include { 
    phase: TEST 
    } 
    memory_data_param { 
    batch_size: 32 
    channels: 3 
    height: 32 
    width: 32 
    } 
} 

2. This is the training code:

caffe.set_mode_gpu()
maxIter = 100
batch_size = 32
j = 0
for i in range(maxIter):
    # fetch a batch; imgaug expects NHWC, so transpose from Caffe's NCHW
    batch = seq.augment_images(np.transpose(data_train[j: j+batch_size], (0, 2, 3, 1)))
    print('batch-{0}-{1}'.format(j, j+batch_size))
    # transpose back to NCHW (a bare reshape would scramble the channels)
    batch = np.ascontiguousarray(batch.transpose(0, 3, 1, 2)).astype(np.float32)
    # set input and solve
    net.set_input_arrays(batch, label_train[j: j+batch_size].astype(np.float32))
    j += batch_size  # advance by exactly one batch
    solver.step(1)
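
For reference, here is a minimal sketch of the setup the snippet above assumes; the solver path, the imgaug pipeline, and the data shapes are placeholders rather than my exact code:

import caffe
import numpy as np
from imgaug import augmenters as iaa  # 'seq' above is an imgaug pipeline

caffe.set_mode_gpu()
solver = caffe.SGDSolver('solver.prototxt')  # placeholder solver definition
net = solver.net                             # the TRAIN-phase net

seq = iaa.Sequential([iaa.Fliplr(0.5)])      # example augmentation only

# CIFAR-like placeholders: NCHW images, one label per image.
data_train = np.random.rand(50000, 3, 32, 32).astype(np.float32)
label_train = np.random.randint(0, 10, size=50000).astype(np.float32)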

But when the code reaches net.set_input_arrays(), it crashes with this error:

W0405 20:53:19.679730 4640 memory_data_layer.cpp:90] MemoryData does not transform array data on Reset() 
I0405 20:53:19.713727 4640 solver.cpp:337] Iteration 0, Testing net (#0) 
I0405 20:53:19.719229 4640 net.cpp:685] Ignoring source layer accuracy_training 
F0405 20:53:19.719229 4640 memory_data_layer.cpp:110] Check failed: data_ MemoryDataLayer needs to be initalized by calling Reset 
*** Check failure stack trace: *** 

I cannot find a reset() method anywhere. What should I do?


That spelling mistake was fixed a long time ago: https://github.com/BVLC/caffe/commit/09546dbe9130789f0571a76a36b0fc265cd81fe3

Answers


Caffe's MemoryDataLayer should not be used through the pycaffe interface. As for a solution, see this Link:

Yeah it's discouraged to use the MemoryDataLayer in Python. Using it also transfers memory ownership from Python to C++ through the Boost bindings and therefore causes memory leaks: memory is only released after the network object is destructed in Python. So if you're training a network for a long time, you'll run out of memory. It's encouraged to use an InputLayer instead, where you can just assign data from a numpy array into the memory blobs.

Given that, these answers would be good alternatives.
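
To make the suggested alternative concrete, here is a minimal sketch, not a definitive fix; the layer names, blob shapes, and solver handle are assumptions that must match your own prototxt. First, declare the inputs with Input layers instead of MemoryData:

name: "test_network"
layer {
    name: "data"
    type: "Input"
    top: "data"
    input_param { shape: { dim: 32 dim: 3 dim: 32 dim: 32 } }
}
layer {
    name: "label"
    type: "Input"
    top: "label"
    input_param { shape: { dim: 32 } }
}

Then copy each augmented batch straight into the blobs before stepping; no Reset call is involved, and the numpy arrays stay owned by the Python side:

# Hypothetical training step: assign numpy data directly into the input blobs.
solver.net.blobs['data'].data[...] = batch                          # (32, 3, 32, 32) float32
solver.net.blobs['label'].data[...] = label_train[j: j+batch_size]  # (32,) float32
solver.step(1)

As for the crash itself: the trace shows the fatal check firing while "Testing net (#0)" runs, so if you do stay with MemoryData, the TEST-phase layer presumably needs its own data before solver.step(1) triggers a test pass, e.g. solver.test_nets[0].set_input_arrays(test_batch, test_labels).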