Wednesday, August 25, 2010

AMD's Bulldozer vs Intel's Hyper-Threading?


Is AMD's so-called Strong Thread approach in the Bulldozer module really compelling?

Extra cores are added when a processor can't operate at a faster clock speed. That's a good, easy way to expand a product line with effectively faster products, even though they may NOT be any faster in practice, depending on whether the applications take advantage of the multiple cores. But fully duplicating an x86 core is expensive to scale up.

Intel's hyper-threading is a good idea in certain cases: with only a little more hardware, it allows multiple threads to share the functional units of a core with low context-switch overhead, helping to tolerate the relatively high latency of memory accesses. That works well with
  • Complementary threads - Threads that do not compete for the same types of functional units (integer units, floating-point units, etc.), thus maximizing hardware utilization, or threads that do not have conflicting memory accesses, especially long-latency memory accesses.
  • Threads that play nicely with the cache - One thread does not evict another thread's data from the cache. Unfortunately, this is difficult to ensure in practice, as dynamic OS thread scheduling, memory access patterns, etc. all contribute to cache usage.
On the other hand, AMD's Strong Thread puts two sets of integer units and two L1 data caches in a Bulldozer module, which is heavier than the hyper-threading approach but lighter than fully duplicating an x86 core. That effectively allows a thread to enjoy a fully private L1 data cache during its execution quantum, while hyper-threading works in a shared-L1-cache-like environment. Whether the module supports CPU affinity, i.e. binding a thread to a particular core of the chip, is something we should look for when more details are available.
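
For reference, thread-to-core binding is already expressible on today's operating systems; below is a minimal Linux-specific sketch (the helper name pin_to_cpu and the choice of CPU number are mine, not AMD's). Whether such pinning will map cleanly onto one core of a Bulldozer module is exactly the open question above.

/* pin-thread.c -- minimal Linux-specific sketch, not Bulldozer-specific */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to the given logical CPU. Returns 0 on success. */
int pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);          /* start with an empty CPU set */
    CPU_SET(cpu, &set);      /* allow only the requested CPU */
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}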

Hyper-threading vs Bulldozer may provoke the argument of shared cache vs private cache: a thread can potentially use the entire shared cache, while a thread enjoys full bandwidth when accessing a private cache. The downside is that a thread is limited to the smaller private cache size even if the other private cache in the module is underutilized. To take the argument further: a larger shared cache generally has higher latency due to larger storage management overhead, while a smaller private cache generally has lower latency. Whether a shared or a private cache performs better is very specific to the memory access patterns of the threads involved.

As the L1 cache is usually very small, the performance impact of a smaller private L1 data cache on a single-threaded application could be compensated by the larger shared L2 cache. When an application has a large working set, doubling the L1 data cache is probably insufficient to hold the working set anyway.

We should also note that the floating-point units connect to the shared L2 cache, bypassing the L1 data cache. There is probably a good reason for that; I recall that the Itanium 2 does not use the L1 data cache for floating-point data either.

Overall, the AMD Bulldozer is an interesting architecture. It has great potential to deliver higher performance at lower cost. Its benchmark data is something we should keep an eye on.


Tuesday, August 17, 2010

Parallelizing Matrix Multiplication using MPI

MPI is a popular mechanism in high-performance computing. It works in both cluster and shared-memory environments. So why don't we simply use MPI when it works in both? Why do we care about OpenMP, Cilk++, etc.? Perhaps that depends on the complexity of the application you are dealing with.

Parallel Matrix Multiplication using MPI

/* matrix-mpi.cpp */
#include <mpi.h>

const int size = 1000;

float a[size][size];
float b[size][size];
float c[size][size];

void multiply(int istart, int iend)
{
    for (int i = istart; i <= iend; ++i) {
        for (int j = 0; j < size; ++j) {
            for (int k = 0; k < size; ++k) {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
}

int main(int argc, char* argv[])
{
    int rank, nproc;
    int istart, iend;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        // Initialize buffers.
        for (int i = 0; i < size; ++i) {
            for (int j = 0; j < size; ++j) {
                a[i][j] = (float)i + j;
                b[i][j] = (float)i - j;
                c[i][j] = 0.0f;
            }
        }
    }

    // Broadcast matrices to all workers.
    MPI_Bcast(a, size*size, MPI_FLOAT, 0, MPI_COMM_WORLD);
    MPI_Bcast(b, size*size, MPI_FLOAT, 0, MPI_COMM_WORLD);
    MPI_Bcast(c, size*size, MPI_FLOAT, 0, MPI_COMM_WORLD);

    // Partition work by i-for-loop.
    istart = (size / nproc) * rank;
    iend = (size / nproc) * (rank + 1) - 1;

    // Compute matrix multiplication in [istart,iend]
    // of i-for-loop.
    // C <- C + A x B
    multiply(istart, iend);

    // Gather computed results.
    MPI_Gather(c + (size/nproc*rank),
               size*size/nproc,
               MPI_FLOAT,
               c + (size/nproc*rank),
               size*size/nproc,
               MPI_FLOAT,
               0,
               MPI_COMM_WORLD);

    if (rank == 0) {
        // Compute remaining multiplications
        // when size % nproc > 0.
        if (size % nproc > 0) {
            multiply((size/nproc)*nproc, size-1);
        }
    }

    MPI_Finalize();
    return 0;
}

$ g++ -O2 matrix.cpp -o matrix
$ mpicxx -O2 matrix-mpi.cpp -o matrix-mpi
$ time ./matrix
real 0m13.226s
user 0m12.529s
sys 0m0.065s
$ time mpirun -np 2 ./matrix-mpi
real 0m8.490s
user 0m6.346s
sys 0m0.178s
Phew ... what a hassle. You can see the need to:
  • perform data transfer to workers manually
  • perform work partitioning manually
  • perform many index calculations
  • handle the remaining work when the total work is not divisible by the number of workers (see the MPI_Gatherv sketch after this list).
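
Much of that bookkeeping can be pushed into MPI itself. Here is a hedged sketch, reusing the a, b, c, size, rank, nproc and multiply() definitions from the program above, of how MPI_Gatherv with per-rank counts and displacements could absorb both the partitioning and the remainder (illustrative only, not the version that was timed):

// Sketch (not the code timed above): spread the rows, including the
// remainder, across the ranks and let MPI_Gatherv do the bookkeeping.
// Assumes the a, b, c, size, rank, nproc and multiply() definitions
// from the program above; error handling and cleanup omitted.
int* counts = new int[nproc];   // floats contributed by each rank
int* displs = new int[nproc];   // offset of each rank's block in c
for (int r = 0; r < nproc; ++r) {
    int rows = size / nproc + (r < size % nproc ? 1 : 0);  // spread remainder
    counts[r] = rows * size;
    displs[r] = (r == 0) ? 0 : displs[r - 1] + counts[r - 1];
}

int myrows  = counts[rank] / size;          // rows this rank computes
int myfirst = displs[rank] / size;          // first of those rows
multiply(myfirst, myfirst + myrows - 1);    // reuse multiply() above

// Rank 0's rows are already in place, so it passes MPI_IN_PLACE.
void* sendbuf = (rank == 0) ? MPI_IN_PLACE : (void*)&c[myfirst][0];
MPI_Gatherv(sendbuf, counts[rank], MPI_FLOAT,
            c, counts, displs, MPI_FLOAT,
            0, MPI_COMM_WORLD);
delete[] counts;
delete[] displs;
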
Furthermore, this MPI version uses more memory than its shared-memory counterparts. The MPI program is launched as multiple processes acting as multiple workers, so the memory consumption multiplies as well: with size = 1000, each process holds three 1000x1000 float matrices, roughly 12 MB, and that footprint is duplicated per process. More work would be required to minimize the total memory consumption. When you must work in a cluster environment, though, perhaps you don't have many choices with the current state-of-the-art programming tools.

Sunday, August 15, 2010

Parallelizing Matrix Multiplication using TBB

Parallelizing matrix multiplication using TBB isn't too difficult. It's just a little more work than OpenMP or Cilk++.

Parallel Matrix Multiplication using TBB

/* matrix-tbb.cpp */
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

using namespace tbb;

const int size = 1000;

float a[size][size];
float b[size][size];
float c[size][size];

class Multiply
{
public:
    void operator()(const blocked_range<int>& r) const {
        for (int i = r.begin(); i != r.end(); ++i) {
            for (int j = 0; j < size; ++j) {
                for (int k = 0; k < size; ++k) {
                    c[i][j] += a[i][k] * b[k][j];
                }
            }
        }
    }
};

int main()
{
    // Initialize buffers.
    for (int i = 0; i < size; ++i) {
        for (int j = 0; j < size; ++j) {
            a[i][j] = (float)i + j;
            b[i][j] = (float)i - j;
            c[i][j] = 0.0f;
        }
    }

    // Compute matrix multiplication.
    // C <- C + A x B
    parallel_for(blocked_range<int>(0, size), Multiply());

    return 0;
}
We've moved the computation of the matrix multiplication into the class Multiply, which takes the range of i-iterations to work on. Internally, parallel_for splits the range [0,size) into multiple blocks, and multiple workers can then process different non-overlapping blocks in parallel.
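
On a side note, newer TBB releases also provide lambda-friendly overloads, so the functor can be written inline if your compiler supports C++0x/C++11 lambdas; a sketch using the same global matrices as above:

// Sketch: the same parallel_for written with a lambda instead of the
// Multiply functor (assumes a TBB version with the lambda overloads
// and the a, b, c, size globals defined as above).
parallel_for(blocked_range<int>(0, size),
             [](const blocked_range<int>& r) {
                 for (int i = r.begin(); i != r.end(); ++i)
                     for (int j = 0; j < size; ++j)
                         for (int k = 0; k < size; ++k)
                             c[i][j] += a[i][k] * b[k][j];
             });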

$ g++ -O2 matrix.cpp -o matrix
$ g++ -O2 matrix-tbb.cpp -ltbb -o matrix-tbb
$ time ./matrix
real 0m12.971s
user 0m12.489s
sys 0m0.052s
$ time ./matrix-tbb
real 0m7.857s
user 0m12.734s
sys 0m0.282s
Once the computation is organized into functions that can work on arbitrary parts of the problem, the rest is relatively easy.

Parallelizing Matrix Multiplication using Cilk++ in Two Lines

Following the parallelization of matrix multiplication using OpenMP in Parallelizing Matrix Multiplication using OpenMP in One Line, can we do the same using Cilk++?

Parallel Matrix Multiplication using Cilk++

/* matrix.cilk */
const int size = 1000;

float a[size][size];
float b[size][size];
float c[size][size];

int cilk_main()
{
    // Initialize buffers.
    for (int i = 0; i < size; ++i) {
        for (int j = 0; j < size; ++j) {
            a[i][j] = (float)i + j;
            b[i][j] = (float)i - j;
            c[i][j] = 0.0f;
        }
    }

    // Compute matrix multiplication.
    // C <- C + A x B
    cilk_for (int i = 0; i < size; ++i) {
        for (int j = 0; j < size; ++j) {
            for (int k = 0; k < size; ++k) {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }

    return 0;
}
cilk_main() is the entry function, analogous to main() in a typical C++ program, but cilk_main() uses Cilk++ linkage so that you can spawn and sync inside it. That change is trivial. Effectively, there is only one non-trivial line of change to parallelize the serial matrix multiplication: the cilk_for.
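
To make the spawn-and-sync remark concrete, here is a hedged sketch of how the same row range could be split recursively with cilk_spawn and cilk_sync instead of cilk_for (the helper multiply_rows and the cutoff of 64 rows are illustrative choices, not part of the program above):

// Sketch only: recursive split of the half-open row range [istart, iend)
// using cilk_spawn/cilk_sync; assumes the a, b, c and size globals above.
// Called as multiply_rows(0, size). cilk_for remains the simpler choice.
void multiply_rows(int istart, int iend)
{
    if (iend - istart <= 64) {              // small range: compute serially
        for (int i = istart; i < iend; ++i)
            for (int j = 0; j < size; ++j)
                for (int k = 0; k < size; ++k)
                    c[i][j] += a[i][k] * b[k][j];
        return;
    }
    int mid = istart + (iend - istart) / 2;
    cilk_spawn multiply_rows(istart, mid);  // left half may run on another worker
    multiply_rows(mid, iend);               // right half runs here
    cilk_sync;                              // wait for the spawned half

}
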
$ g++ -O2 matrix.cpp -o matrix
$ cilk++ -O2 matrix.cilk -o matrix-cilk
$ time ./matrix
real 0m12.613s
user 0m12.392s
sys 0m0.030s
$ time ./matrix-cilk
real 0m6.775s
user 0m12.591s
sys 0m0.134s
The performance improvement is encouraging. You can try it out on a quad-core or hex-core machine if you have one.

Saturday, August 14, 2010

Parallelizing Matrix Multiplication using OpenMP in One Line

Matrix multiplication is often used in academic study. It's well suited to parallelization due to its compute-intensive O(N^3) work (about 10^9 multiply-add operations for N = 1000 here) and its independent iterations. Parallel programming is hard. Would it surprise you if we parallelized matrix multiplication with merely one line of OpenMP directive?

Serial Matrix Multiplication

/* matrix.cpp */
const int size = 1000;

float a[size][size];
float b[size][size];
float c[size][size];

int main()
{
    // Initialize buffers.
    for (int i = 0; i < size; ++i) {
        for (int j = 0; j < size; ++j) {
            a[i][j] = (float)i + j;
            b[i][j] = (float)i - j;
            c[i][j] = 0.0f;
        }
    }

    // Compute matrix multiplication.
    // C <- C + A x B
    for (int i = 0; i < size; ++i) {
        for (int j = 0; j < size; ++j) {
            for (int k = 0; k < size; ++k) {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }

    return 0;
}
Analyzing the data dependency of the operation c[i][j] += a[i][k] * b[k][j], we can see that each i-iteration is independent of the others. The same applies to the j-iterations. For simplicity, we consider parallelizing only a single for-loop.

Parallel Matrix Multiplication using OpenMP

/* matrix-omp.cpp */
const int size = 1000;

float a[size][size];
float b[size][size];
float c[size][size];

int main()
{
    // Initialize buffers.
    for (int i = 0; i < size; ++i) {
        for (int j = 0; j < size; ++j) {
            a[i][j] = (float)i + j;
            b[i][j] = (float)i - j;
            c[i][j] = 0.0f;
        }
    }

    // Compute matrix multiplication.
    // C <- C + A x B
    #pragma omp parallel for default(none) shared(a,b,c)
    for (int i = 0; i < size; ++i) {
        for (int j = 0; j < size; ++j) {
            for (int k = 0; k < size; ++k) {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }

    return 0;
}
With the OpenMP directive (#pragma), the i-for-loop is divided into multiple chunks, and each chunk is assigned to a thread. Multiple threads can then compute their assigned chunks in parallel. That's it.
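
For the curious, here is a hedged sketch of roughly what the directive arranges for us, written out by hand with the OpenMP runtime API (static partitioning with a ceiling-divided chunk size; it would replace the directive-plus-loop above and assumes the same globals):

// Sketch: manual static partitioning of the i-loop, roughly what
// "#pragma omp parallel for" does for us. Assumes the a, b, c, size
// globals above and #include <omp.h>.
#pragma omp parallel
{
    int nthreads = omp_get_num_threads();
    int id       = omp_get_thread_num();
    int chunk    = (size + nthreads - 1) / nthreads;   // ceiling division
    int istart   = id * chunk;
    int iend     = (istart + chunk < size) ? istart + chunk : size;

    for (int i = istart; i < iend; ++i)
        for (int j = 0; j < size; ++j)
            for (int k = 0; k < size; ++k)
                c[i][j] += a[i][k] * b[k][j];
}
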
$ g++ -O2 matrix.cpp -o matrix
$ g++ -O2 -fopenmp matrix-omp.cpp -o matrix-omp
$ time ./matrix
real 0m12.976s
user 0m12.460s
sys 0m0.059s
$ time ./matrix-omp
real 0m6.952s
user 0m12.556s
sys 0m0.050s
Bravo! The parallel version is about 1.86x faster than the serial version on a dual-core system. However, the code above is for illustration purposes only. There are still many optimization opportunities before it reaches commercial performance levels.
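
As one example of such an opportunity, simply reordering the loops to i-k-j lets the innermost loop stream through contiguous rows of b and c, which is typically much friendlier to the cache; a hedged sketch (same per-element accumulation order, so the result is unchanged):

// Sketch: i-k-j loop order. The innermost loop now walks b[k][...] and
// c[i][...] contiguously; for each c[i][j] the products are still added
// in increasing k order, so the numerical result is identical.
#pragma omp parallel for
for (int i = 0; i < size; ++i) {
    for (int k = 0; k < size; ++k) {
        float aik = a[i][k];                 // reused across the whole j loop
        for (int j = 0; j < size; ++j)
            c[i][j] += aik * b[k][j];
    }
}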

Wednesday, August 11, 2010

Parallel Programming - Hello World

Many computer science and engineering students write a Hello World program in their first programming lecture. What was your first parallel program? How about Hello World in OpenMP, MPI, Cilk++, TBB, Ruby threads, and Pthreads?

Hello World in C

/* hello.c */
#include <stdio.h>

int main()
{
    printf("hello world\n");
    return 0;
}
$ gcc hello.c -o hello
$ ./hello
hello world

Hello World in OpenMP

/* hello-omp.c */
#include <stdio.h>
#include <omp.h>

int main()
{
    #pragma omp parallel
    {
        int n = omp_get_num_threads();
        int id = omp_get_thread_num();

        printf("hello world %d/%d\n", id, n);
    }
    return 0;
}
$ gcc -fopenmp hello-omp.c -o hello-omp
$ ./hello-omp
hello world 1/2
hello world 0/2
$ ./hello-omp
hello world 0/2
hello world 1/2

Hello World in MPI

/* hello-mpi.c */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[])
{
    int n, id;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &n);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    printf("hello world %d/%d\n", id, n);

    MPI_Finalize();
    return 0;
}
$ mpicc hello-mpi.c -o hello-mpi
$ mpirun -np 2 ./hello-mpi
hello world 0/2
hello world 1/2
$ mpirun -np 2 ./hello-mpi
hello world 1/2
hello world 0/2

Hello World in Cilk++

/* hello.cilk */
#include <stdio.h>

void hello()
{
    printf("hello world\n");
}

int cilk_main()
{
    cilk_spawn hello();
    cilk_spawn hello();
    cilk_sync;
    return 0;
}
$ cilk++ hello.cilk -o hello-cilk
$ ./hello-cilk
hello world
hello world

Hello World in TBB

/* hello-tbb.cpp */
#include <iostream>
#include <tbb/parallel_for.h>

using namespace tbb;

class Hello
{
public:
    void operator()(int x) const {
        std::cout << "Hello world\n";
    }
};

int main()
{
    // parallelizing:
    // for(int i = 0; i < 2; ++i) { ... }
    parallel_for(0, 2, 1, Hello());

    return 0;
}
$ g++ hello-tbb.cpp -ltbb -o hello-tbb
$ ./hello-tbb
Hello world
Hello world

Hello World in Ruby Thread

#!/usr/bin/ruby
# hello.rb

def hello
  id = Thread.current.object_id
  puts "hello world #{id}"
end

t1 = Thread.new do
  hello
end
t2 = Thread.new do
  hello
end

t1.join
t2.join

$ ./hello.rb
hello world 69858970515620
hello world 69858970515540

Hello World in PThread

/* hello-pthread.c */
#include <stdio.h>
#include <pthread.h>

void* hello(void* arg)
{
    long id = (long)pthread_self();
    printf("hello world %ld\n", id);
    pthread_exit(NULL);
}

int main()
{
    pthread_t tid1, tid2;
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

    pthread_create(&tid1, &attr, hello, NULL);
    pthread_create(&tid2, &attr, hello, NULL);

    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);

    pthread_attr_destroy(&attr);
    pthread_exit(NULL);

    return 0;
}
$ gcc hello-pthread.c -lpthread -o hello-pthread
$ ./hello-pthread
hello world 139999688689424
hello world 139999680296720