How to use the GPU for math
I am looking at utilising the GPU for crunching some equations, but I cannot figure out how to access it from C#. I know that the XNA and DirectX frameworks allow you to use shaders to access the GPU, but how would I go about accessing it without these frameworks?
I haven't done it from C#, but basically you use the CUDA SDK and toolkit (assuming you're using an NVIDIA card here, of course) to pull it off.
NVIDIA has ported (or written?) a BLAS implementation for use on CUDA-capable devices. They've provided plenty of examples of how to do number crunching, although you'll have to figure out how you're going to pull it off from C#. My bet is, you're going to have to write some stuff in unmanaged C or C++ and link against it.
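To sketch what that might look like: below is a minimal CUDA kernel plus an `extern "C"` wrapper that could be compiled into a shared library and then called from C# via P/Invoke. All names here (`gpumath.cu`, `gpu_square`) are hypothetical, invented for illustration; the CUDA runtime calls themselves (`cudaMalloc`, `cudaMemcpy`, `cudaFree`) are real API.

```cuda
// gpumath.cu -- hypothetical file name; compile into a shared library, e.g.:
//   nvcc -shared -Xcompiler -fPIC -o libgpumath.so gpumath.cu
#include <cuda_runtime.h>

// Kernel: each GPU thread squares one element of the input array.
__global__ void square_kernel(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * in[i];
}

// C-callable entry point with no C++ name mangling, so C# can reach it
// through DllImport. Returns 0 on success, -1 on any CUDA error.
extern "C" int gpu_square(const float *host_in, float *host_out, int n)
{
    float *d_in = nullptr, *d_out = nullptr;
    size_t bytes = (size_t)n * sizeof(float);

    if (cudaMalloc(&d_in, bytes) != cudaSuccess)
        return -1;
    if (cudaMalloc(&d_out, bytes) != cudaSuccess) {
        cudaFree(d_in);
        return -1;
    }

    cudaMemcpy(d_in, host_in, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    square_kernel<<<blocks, threads>>>(d_in, d_out, n);

    cudaMemcpy(host_out, d_out, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_in);
    cudaFree(d_out);
    return cudaGetLastError() == cudaSuccess ? 0 : -1;
}
```

On the C# side you would then declare something like `[DllImport("gpumath")] static extern int gpu_square(float[] input, float[] output, int n);` and call it with ordinary managed arrays; the marshaller pins them for the duration of the call. This is only one way to bridge the gap, under the assumption that writing the GPU code in unmanaged CUDA C++ is acceptable.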
If you're not hung up on using C#, take a look at Theano. It might be a bit overkill for your needs, since they're building a framework for doing machine learning on GPUs from Python, but... it works, and works very well.