3.10. Examples

Original: http://numba.pydata.org/numba-doc/latest/cuda/examples.html

3.10.1. Matrix Multiplication

Here is a simple implementation of matrix multiplication using a CUDA kernel:

    from numba import cuda

    @cuda.jit
    def matmul(A, B, C):
        """Perform square matrix multiplication of C = A * B."""
        i, j = cuda.grid(2)
        if i < C.shape[0] and j < C.shape[1]:
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            C[i, j] = tmp
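
To run the kernel, size a 2D grid so that it covers every element of C, with one thread per output element. The host code below is a minimal launch sketch, not part of the original page; the 128x128 matrices and the 16x16 block shape are illustrative assumptions.

    import math
    import numpy as np
    from numba import cuda

    A = np.random.rand(128, 128).astype(np.float32)
    B = np.random.rand(128, 128).astype(np.float32)
    C = np.zeros((128, 128), dtype=np.float32)

    # One thread per element of C; round the grid up so it covers all of C.
    threadsperblock = (16, 16)
    blockspergrid_x = math.ceil(C.shape[0] / threadsperblock[0])
    blockspergrid_y = math.ceil(C.shape[1] / threadsperblock[1])
    blockspergrid = (blockspergrid_x, blockspergrid_y)

    matmul[blockspergrid, threadsperblock](A, B, C)

    # Sanity check against NumPy on the host.
    np.testing.assert_allclose(C, A @ B, rtol=1e-5)

Numba transfers the NumPy arrays to the device automatically here; explicit transfers (shown with the faster kernel below) avoid redundant copies when a kernel is launched repeatedly.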

This implementation is straightforward and intuitive but performs poorly, because the same matrix elements will be loaded multiple times from device memory, which is slow (some devices may have transparent data caches, but they may not be large enough to hold the entire inputs at once).

It will be faster if we use a blocked (tiled) algorithm to reduce accesses to device memory. CUDA provides a fast shared memory for the threads in a block to cooperatively compute on a task. The following implements a faster version of square matrix multiplication using shared memory:

    from numba import cuda, float32

    # Controls threads per block and shared memory usage.
    # The computation will be done on blocks of TPBxTPB elements.
    TPB = 16

    @cuda.jit
    def fast_matmul(A, B, C):
        """Perform square matrix multiplication of C = A * B using shared memory.

        This version assumes the matrix dimensions are exact multiples of TPB:
        otherwise the shared-memory preloads read out of bounds, and threads
        that return early would diverge around syncthreads().
        """
        # Define arrays in the shared memory.
        # The size and type of the arrays must be known at compile time.
        sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
        sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)

        x, y = cuda.grid(2)

        tx = cuda.threadIdx.x
        ty = cuda.threadIdx.y
        bpg = cuda.gridDim.x    # blocks per grid

        if x >= C.shape[0] or y >= C.shape[1]:
            # Quit if (x, y) is outside of valid C boundary
            return

        # Each thread computes one element in the result matrix.
        # The dot product is chunked into dot products of TPB-long vectors.
        tmp = 0.
        for i in range(bpg):
            # Preload one TPB x TPB tile of A and one of B into shared memory
            sA[tx, ty] = A[x, ty + i * TPB]
            sB[tx, ty] = B[tx + i * TPB, y]

            # Wait until all threads finish preloading
            cuda.syncthreads()

            # Compute the partial product on the shared-memory tiles
            for j in range(TPB):
                tmp += sA[tx, j] * sB[j, ty]

            # Wait until all threads finish computing
            cuda.syncthreads()

        C[x, y] = tmp

Because shared memory is a limited resource, the code preloads a small block at a time from the input arrays. Then, it calls syncthreads() to wait until all threads have finished preloading before doing the computation on the shared memory. It synchronizes again after the computation to ensure all threads have finished with the data in shared memory before overwriting it in the next loop iteration.
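
As a usage sketch (an assumption, not shown on the original page), the kernel can be launched with explicit device transfers so the inputs are copied to the GPU only once; the sizes below are chosen as exact multiples of TPB, which this version of the kernel requires.

    import numpy as np
    from numba import cuda

    TPB = 16                      # must match the TPB used by fast_matmul
    N = 256                       # dimensions are exact multiples of TPB

    A = np.random.rand(N, N).astype(np.float32)
    B = np.random.rand(N, N).astype(np.float32)

    d_A = cuda.to_device(A)       # copy the inputs to the device once
    d_B = cuda.to_device(B)
    d_C = cuda.device_array((N, N), dtype=np.float32)

    threadsperblock = (TPB, TPB)
    blockspergrid = (N // TPB, N // TPB)

    fast_matmul[blockspergrid, threadsperblock](d_A, d_B, d_C)
    C = d_C.copy_to_host()

    np.testing.assert_allclose(C, A @ B, rtol=1e-5)

Keeping the operands on the device with cuda.to_device and copy_to_host avoids re-transferring them on every launch, which matters once the kernel is called repeatedly.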