The Correspondence Between Threads and Data in CUDA
Introduction
NVIDIA GPUs execute in SIMT (Single Instruction, Multiple Threads) fashion, so what is the relationship between the number of threads and the amount of data in CUDA programming? Let's start with a classic example:
#include <stdio.h>
#include <cmath>   /* for std::ceil used in the launch configuration */

#define N (2048*2048)
#define THREADS_PER_BLOCK 512

__global__ void add(int *a, int *b, int *c)
{
    /* one thread per element: global index from block and thread coordinates */
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    c[index] = a[index] + b[index];
}

int main()
{
    int *a, *b, *c;
    int *d_a, *d_b, *d_c;
    int size = N * sizeof( int );

    /* allocate space for device copies of a, b, c */
    cudaMalloc( (void **) &d_a, size );
    cudaMalloc( (void **) &d_b, size );
    cudaMalloc( (void **) &d_c, size );

    /* allocate space for host copies of a, b, c and setup input values */
    a = (int *)malloc( size );
    b = (int *)malloc( size );
    c = (int *)malloc( size );
    for( int i = 0; i < N; i++ )
    {
        a[i] = b[i] = i;
        c[i] = 0;
    }

    cudaMemcpy( d_a, a, size, cudaMemcpyHostToDevice );
    cudaMemcpy( d_b, b, size, cudaMemcpyHostToDevice );

    /* one thread per element: N / THREADS_PER_BLOCK blocks, rounded up */
    add<<< std::ceil(N / (double)THREADS_PER_BLOCK), THREADS_PER_BLOCK >>>( d_a, d_b, d_c );
    cudaDeviceSynchronize();

    cudaMemcpy( c, d_c, size, cudaMemcpyDeviceToHost );

    bool success = true;
    for( int i = 0; i < N; i++ )
    {
        if( c[i] != a[i] + b[i] )
        {
            printf( "c[%d] = %d\n", i, c[i] );
            success = false;
            break;
        }
    }
    printf( "%s\n", success ? "success" : "fail" );

    free(a);
    free(b);
    free(c);
    cudaFree( d_a );
    cudaFree( d_b );
    cudaFree( d_c );

    return 0;
}
In this example we create two N-element input arrays and launch N threads in total, with each thread computing one element of the result. That raises the question: must the array size and the total thread count always be equal?
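One caveat before moving on: the launch above needs no out-of-bounds guard only because N (2048*2048) is an exact multiple of THREADS_PER_BLOCK. If it were not, std::ceil would round the block count up and the surplus threads of the last block would write past the end of the arrays. The usual fix is a bounds check inside the kernel; a minimal sketch (the name add_guarded and the extra parameter n are illustrative, not from the original):

__global__ void add_guarded(int *a, int *b, int *c, int n)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if (index < n)  /* surplus threads in the last block simply do nothing */
        c[index] = a[index] + b[index];
}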
The Thread-to-Data Mapping
In the case above, the array size and the total thread count are equal. In CUDA programming, can the total thread count be smaller than the array size? Let's make a small modification to the program:
#include <stdio.h>
#include <cmath>   /* for std::ceil used in the launch configuration */

#define N (2048*2048)
#define THREADS_PER_BLOCK 512

__global__ void add(int *a, int *b, int *c)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    c[index] = a[index] + b[index];
}

__global__ void add_stride(int *a, int *b, int *c)
{
    /* half as many threads as elements: each thread handles two elements,
       N/2 apart, so consecutive threads still touch consecutive addresses */
    int stride = N / 2;
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    c[index] = a[index] + b[index];
    c[index + stride] = a[index + stride] + b[index + stride];
}

int main()
{
    int *a, *b, *c;
    int *d_a, *d_b, *d_c;
    int size = N * sizeof( int );

    /* allocate space for device copies of a, b, c */
    cudaMalloc( (void **) &d_a, size );
    cudaMalloc( (void **) &d_b, size );
    cudaMalloc( (void **) &d_c, size );

    /* allocate space for host copies of a, b, c and setup input values */
    a = (int *)malloc( size );
    b = (int *)malloc( size );
    c = (int *)malloc( size );
    for( int i = 0; i < N; i++ )
    {
        a[i] = b[i] = i;
        c[i] = 0;
    }

    cudaMemcpy( d_a, a, size, cudaMemcpyHostToDevice );
    cudaMemcpy( d_b, b, size, cudaMemcpyHostToDevice );

    /* half the blocks of the previous version: N/2 threads for N elements */
    add_stride<<< std::ceil(std::ceil(N / (double)THREADS_PER_BLOCK) / 2), THREADS_PER_BLOCK >>>( d_a, d_b, d_c );
    cudaDeviceSynchronize();

    cudaMemcpy( c, d_c, size, cudaMemcpyDeviceToHost );

    bool success = true;
    for( int i = 0; i < N; i++ )
    {
        if( c[i] != a[i] + b[i] )
        {
            printf( "c[%d] = %d\n", i, c[i] );
            success = false;
            break;
        }
    }
    printf( "%s\n", success ? "success" : "fail" );

    free(a);
    free(b);
    free(c);
    cudaFree( d_a );
    cudaFree( d_b );
    cudaFree( d_c );

    return 0;
}
Notice that the number of blocks has been halved while the number of threads per block is unchanged, so the total thread count is N/2 while the array still has N elements. Each thread in this kernel does twice the work a thread did in the previous example:
__global__ void add_stride(int *a, int *b, int *c)
{
    int stride = N / 2;
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    c[index] = a[index] + b[index];
    c[index + stride] = a[index + stride] + b[index + stride];
}
Going one step further, we reduce the number of blocks to 1, keeping the threads per block unchanged, for a total of 512 (THREADS_PER_BLOCK) threads. The kernel becomes:
__global__ void add_stride_one_block(int *a, int *b, int *c)
{
    int index  = threadIdx.x;
    int stride = blockDim.x;
    /* each thread sweeps the arrays in steps of blockDim.x */
    for (int i = index; i < N; i += stride) {
        c[i] = a[i] + b[i];
    }
}
Here we use stride = blockDim.x because index = threadIdx.x ranges over [0, blockDim.x). Compared with the first case, each thread now does N/stride times as much work.
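The listing above omits the matching launch; with a single block it would look like the following (a minimal sketch, assuming the same d_a, d_b, d_c device buffers as in the earlier programs):

/* one block of THREADS_PER_BLOCK threads covers all N elements via the stride loop */
add_stride_one_block<<< 1, THREADS_PER_BLOCK >>>( d_a, d_b, d_c );
cudaDeviceSynchronize();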
To sum up: the number of threads can be far smaller than the number of data elements, and there is no inherent one-to-one correspondence between threads and data. CUDA provides the thread machinery; deciding how to hand work to the threads, i.e. writing the kernel, is the programmer's job.
Of course, in all of the cases above the GPU was actually capable of spawning as many threads as there were data elements; we limited the thread count artificially. What do we do when the data is so large that the total number of threads a single grid can spawn is far smaller than the data size?
When GPU Thread Capacity Is Smaller Than the Data Size
If the total number of threads a single grid spawns at once is smaller than the data size, we can solve the problem with a grid-level stride:
add<<< std::ceil(N / (double)THREADS_PER_BLOCK), THREADS_PER_BLOCK >>>( d_a, d_b, d_c );

__global__ void add(int *a, int *b, int *c)
{
    /* grid-stride loop: step by the total number of threads in the grid */
    for (int index = threadIdx.x + blockIdx.x * blockDim.x; index < N; index += blockDim.x * gridDim.x) {
        c[index] = a[index] + b[index];
    }
}
Since the block and grid here are one-dimensional, blockDim.x * gridDim.x is the grid's total thread capacity. If the data volume is modest and every thread finishes its work without ever executing index += blockDim.x * gridDim.x, this degenerates into the first example. If the data volume is truly large, the threads simply take another pass. In essence this is the same idea as the previous example: there, one block's worth of threads was not enough; here, one grid's worth of threads is not enough. This pattern is commonly known as a grid-stride loop.
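To see the grid-level stride actually take effect, one can deliberately cap the grid far below the data size; a sketch (the block count of 32 is an arbitrary illustrative choice, not from the original):

/* 32 * 512 = 16384 threads cooperatively sweep all N = 4194304 elements,
   each thread looping roughly N / 16384 = 256 times */
add<<< 32, THREADS_PER_BLOCK >>>( d_a, d_b, d_c );
cudaDeviceSynchronize();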
Source: https://blog.csdn.net/gigglesun/article/details/115286820