
Technology is constantly pushing forward the limits of computer architecture. The current trend, which began in the early 2000s, is to build faster machines by dividing processors into small but powerful units (the cores) that run in parallel, each executing a fraction of the overall program.

As a result, modern desktop computers may have central processing units (CPUs) with tens of cores and graphics processing units (GPUs) with hundreds of cores, reaching performance levels that two decades ago could only be matched by huge and expensive supercomputers.

To keep pace with the rapid development of hardware, new software interfaces have been developed to assist developers with the challenging task of parallel programming. One of these, OpenMP, has been used extensively in both industry and academia to exploit the performance of multi-core CPUs. NVIDIA's CUDA architecture was the first to streamline GPU programming and transform graphics cards into more versatile computing devices.
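To give a flavour of the OpenMP approach mentioned above, the following is a minimal illustrative sketch (not taken from the paper; the array names and size are arbitrary) showing how a single directive asks the compiler to spread a loop over the available CPU cores.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N], c[N];

        /* Initialise the input arrays sequentially. */
        for (int i = 0; i < N; i++) {
            a[i] = i * 0.5;
            b[i] = i * 2.0;
        }

        /* The pragma instructs OpenMP to divide the loop iterations
           among the available threads, one or more per core. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            c[i] = a[i] + b[i];
        }

        printf("c[N-1] = %f (up to %d threads available)\n",
               c[N - 1], omp_get_max_threads());
        return 0;
    }

Built with an OpenMP-aware compiler (for example, gcc -fopenmp), the annotated loop runs in parallel with no further changes to the surrounding code, which is the main appeal of the directive-based model.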
