
Research


LINKS AND NEWS


Publication List

Upcoming conferences


     • PACT 2018

     • PPoPP 2018


GRADUATED PHD STUDENTS

1. Francisco Corbera (2001)

2. Adrián Tineo (2009)

3. Rosa Castillo (2009)

4. Antonio García (2013)

5. Rafael Larrosa (2016)

6. Antonio Vilches (2017)

7. Alejandro Villegas (2018)



PHD STUDENTS

• Denisa Constantinescu

• Jose Carlos Romero


RESEARCH INTERESTS

     • Parallel programming languages

     • Programming models for heterogeneous architectures

     • Parallel architectures


RESEARCH GROUP

Parallel programming models and compilers


RESEARCH PROJECTS

P08-TIC-3500. Increasing the productivity in the parallelization of irregular codes


P11-TIC-8144. Acceleration techniques in parallel libraries and languages for many-core and heterogeneous architectures

Main topics

I'm heading a research group mainly concerned with "productivity" in the context of high performance computing, or in other words, with achieving "performance without pain". Let me elaborate a little more on that. From the computer architecture point of view, we are in the multi-core, many-core and heterogeneous era. We have several CPU cores in our PCs and, moreover, we have recently seen a significant increase in the number of commodity multicore processors that include an on-chip accelerator, such as a GPU, FPGA and/or DSP. Current desktops, ultrabooks, smartphones, tablets, and other embedded devices are powered by heterogeneous chips that integrate several CPU cores along with a GPU. Examples are recent Intel architectures, AMD APUs, Qualcomm Snapdragon and Samsung Exynos, to name a few. Other heterogeneous chips include an FPGA, like the Altera Cyclone V and the Xilinx Zynq UltraScale+.

Fully exploiting these new architectures is a challenge from the software point of view, because they are more difficult to program and more error prone than the old sequential architectures. Our research goal is to find new tools and programming models that alleviate these new difficulties.

Regarding the tools, we have been working on a parallelizing compiler able to detect dynamic data structures (lists, trees, ...) in sequential C code and to identify the parallel loops that traverse those data structures. We also propose TBB-based schedulers for the parallel-for and pipeline templates that are able to dynamically distribute the workload among CPU cores, GPUs and FPGA accelerators.
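To give a flavour of this kind of heterogeneous scheduling (a minimal sketch under simplifying assumptions, not our actual scheduler), the fragment below lets a CPU worker and an "accelerator" worker pull chunks of a parallel-for iteration space from a shared atomic counter, so the faster device naturally ends up processing more chunks. The offload_to_accelerator function and the chunk size are hypothetical placeholders for a real GPU/FPGA kernel launch.

// Minimal sketch of dynamic CPU/accelerator work distribution for a
// parallel-for pattern, written with oneTBB. offload_to_accelerator is a
// hypothetical stand-in for a real GPU/FPGA kernel launch.
#include <tbb/task_group.h>
#include <tbb/parallel_for.h>
#include <algorithm>
#include <atomic>
#include <vector>
#include <cstdio>

constexpr int N = 1 << 20;            // total iteration space
constexpr int CHUNK = 1 << 14;        // granularity of the dynamic chunks
std::atomic<int> next_chunk{0};       // shared counter: index of the next chunk

// CPU path: process one chunk using all available cores.
void process_on_cpu(std::vector<float>& v, int begin, int end) {
    tbb::parallel_for(begin, end, [&](int i) { v[i] = v[i] * 2.0f + 1.0f; });
}

// Hypothetical accelerator path: a real version would enqueue a GPU/FPGA kernel.
void offload_to_accelerator(std::vector<float>& v, int begin, int end) {
    for (int i = begin; i < end; ++i) v[i] = v[i] * 2.0f + 1.0f;
}

int main() {
    std::vector<float> data(N, 1.0f);
    tbb::task_group tg;

    // "Accelerator" worker: keeps grabbing chunks until the range is exhausted.
    tg.run([&] {
        int c;
        while ((c = next_chunk.fetch_add(1) * CHUNK) < N)
            offload_to_accelerator(data, c, std::min(c + CHUNK, N));
    });

    // CPU worker: follows the same protocol, so the workload is balanced dynamically.
    tg.run([&] {
        int c;
        while ((c = next_chunk.fetch_add(1) * CHUNK) < N)
            process_on_cpu(data, c, std::min(c + CHUNK, N));
    });

    tg.wait();
    std::printf("data[0] = %f\n", data[0]);
    return 0;
}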

On the other hand, we also explore the field of new programming models. We are interested in the Threading Building Blocks (TBB) library, work-stealing scheduling, pipeline and wavefront functional parallelism, and the Chapel parallel language.
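As a small illustration of the pipeline pattern mentioned above (a self-contained sketch using the oneTBB parallel_pipeline interface; the three stages and the token limit are invented for the example, not taken from any of our codes), the following streams items through a serial input stage, a parallel transform stage and a serial output stage:

// Minimal sketch of pipeline functional parallelism with oneTBB.
#include <tbb/parallel_pipeline.h>
#include <cstdio>

int main() {
    const int n = 100;
    int next = 0;
    long long sum = 0;

    tbb::parallel_pipeline(
        /*max_number_of_live_tokens=*/8,
        // Serial input stage: produces the items 0..n-1 and then stops the pipeline.
        tbb::make_filter<void, int>(tbb::filter_mode::serial_in_order,
            [&](tbb::flow_control& fc) -> int {
                if (next >= n) { fc.stop(); return 0; }
                return next++;
            }) &
        // Parallel middle stage: independent items may be processed concurrently.
        tbb::make_filter<int, long long>(tbb::filter_mode::parallel,
            [](int x) -> long long { return static_cast<long long>(x) * x; }) &
        // Serial output stage: accumulates the results in order.
        tbb::make_filter<long long, void>(tbb::filter_mode::serial_in_order,
            [&](long long y) { sum += y; })
    );

    std::printf("sum of squares = %lld\n", sum);
    return 0;
}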