SVM (Shared Virtual Memory) exists to solve the problem of sending data that contains pointers to the graphics card. Simply copying such data with cl::Buffer is not enough, because the pointers inside it become dangling once the data is copied. SVM solves this: it guarantees that pointers embedded in the data are still usable after the data reaches the GPU. Below is an example that sums the numbers stored in a singly linked list. The code was built and run with VS2017 and OpenCL 3.0 on the integrated graphics of an Intel Core i5. The CPP file is as follows:
// The original listing omits its headers; the includes below are what this
// code needs, assuming the Khronos OpenCL C++ bindings and OpenCV (whose
// getTickCount/getTickFrequency and int64 are used for timing).
#define CL_HPP_ENABLE_EXCEPTIONS            // program.build() errors are caught via exceptions below
#define CL_HPP_TARGET_OPENCL_VERSION 200    // SVM requires OpenCL 2.0 or later
#include <CL/opencl.hpp>                    // <CL/cl2.hpp> on older SDKs
#include <opencv2/core.hpp>
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
using namespace std;
using namespace cv;

// Device code: the same two structs as on the host, plus a kernel that walks
// the list serially and accumulates the sum of every node's data field.
string kernelStr = R"(
    struct Element { struct Element* next; float data; };
    struct MLinkedList { struct Element* first; struct Element* last; };

    global volatile atomic_float sum = ATOMIC_VAR_INIT(0);

    kernel void add(global const struct MLinkedList* input, global float* output)
    {
        struct Element* iter = input->first;
        while (iter) {
            atomic_fetch_add(&sum, iter->data);
            iter = iter->next;
        }
        *output = atomic_load(&sum);
    })";

class MLinkedList {
private:
    struct Element;
public:
    using Pointer = cl::pointer<MLinkedList::Element, cl::detail::Deleter<
        cl::SVMAllocator<MLinkedList::Element, cl::SVMTraitCoarse<>>>>;
    MLinkedList();
    void append(float value, vector<Pointer>& box);
private:
    Element* first;
    Element* last;
};

struct MLinkedList::Element {
    Element* next;
    float data;
};

MLinkedList::MLinkedList() {
    first = 0;
    last = 0;
}

void MLinkedList::append(float value, vector<Pointer>& box) {
    // Every node is allocated in coarse-grained SVM so the GPU can follow the next pointers.
    Pointer elem = cl::allocate_svm<MLinkedList::Element, cl::SVMTraitCoarse<>>();
    elem->data = value;
    elem->next = 0;
    if (!last) {
        first = elem.get();
        last = first;
    } else {
        last->next = elem.get();
        last = last->next;
    }
    box.push_back(std::move(elem));   // box keeps the smart pointers (and the SVM memory) alive
}

int main() {
    cl::Program program(kernelStr);
    try {
        program.build("-cl-std=CL2.0");
    } catch (...) {
        // Print the build log of every device if compilation fails.
        cl_int buildErr = CL_SUCCESS;
        auto buildInfo = program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(&buildErr);
        for (auto &pair : buildInfo) {
            std::cerr << pair.second << std::endl << std::endl;
        }
        return 1;
    }

    // The list head itself also lives in SVM; it is passed as a kernel argument.
    vector<MLinkedList::Pointer> svmBox;
    auto svmList = cl::allocate_svm<MLinkedList, cl::SVMTraitCoarse<>>();
    svmList->append(100.5f, svmBox);
    svmList->append(200.5f, svmBox);
    svmList->append(300.5f, svmBox);
    svmList->append(400.5f, svmBox);

    vector<float> b(1, 0);
    cl::Buffer outputb(b.begin(), b.end(), false);
    auto kernel = cl::KernelFunctor<decltype(svmList)&, cl::Buffer&>(program, "add");

    //std::for_each(svmBox.begin(), svmBox.end(),   /* comment 1, wrong */
    //    [&kernel](MLinkedList::Pointer& p) { kernel.setSVMPointers(p); });

    //vector<void*> svmps;                          /* comment 2, correct */
    //std::for_each(svmBox.begin(), svmBox.end(),
    //    [&svmps](MLinkedList::Pointer& p) { svmps.push_back(p.get()); });
    //kernel.setSVMPointers(svmps);

    // Register all four list nodes with the kernel in a single call.
    kernel.setSVMPointers(svmBox[0], svmBox[1], svmBox[2], svmBox[3]);

    int64 t1, t2;
    t1 = getTickCount();
    kernel(cl::EnqueueArgs(cl::NDRange(1)), svmList, outputb);
    cl::copy(outputb, b.begin(), b.end());
    cout << b[0] << endl;
    t2 = getTickCount();
    cout << "CL1(ms):" << (t2 - t1) / getTickFrequency() * 1000 << endl;

    int c;
    cin >> c;
    return 0;
}
This example is only meant to demonstrate how SVM is used; it offers no performance advantage over the CPU, because here the GPU also sums the numbers serially. In the code, MLinkedList is a singly linked list of only a dozen or so lines that anyone with some C++ background should be able to follow. Note that setSVMPointers(...) may only be called once: each call replaces the previously registered set, so that single call must register every SVM pointer the kernel references. That is why comment 1 in the code above is wrong, setting the pointers in four separate calls, while comment 2 is correct, setting them all at once through a different overload of setSVMPointers(...). Below is a screenshot of the program's output:
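For completeness, the approach from comment 2 can be factored into a small helper. This is only a sketch based on the listing above: the helper name registerSvmPointers is made up, and it assumes the MLinkedList::Pointer alias and the KernelFunctor object from that listing. It collects the raw address of every SVM node and hands the complete set to the kernel in one setSVMPointers call, which is exactly what the one-call rule requires.

// Hypothetical helper (not part of the original program): register every SVM
// allocation held in 'box' with the kernel in a single setSVMPointers call,
// using the vector<void*> overload shown in comment 2.
template <typename KernelFunctorT>
void registerSvmPointers(KernelFunctorT& kernel,
                         const std::vector<MLinkedList::Pointer>& box)
{
    std::vector<void*> raw;
    raw.reserve(box.size());
    for (const auto& p : box)
        raw.push_back(p.get());   // raw SVM address of each list node
    kernel.setSVMPointers(raw);   // one call, the complete pointer set
}

// Usage with the objects from the listing: registerSvmPointers(kernel, svmBox);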