Stephane Requena

Paris, Île-de-France, France
8K followers · 500+ connections

About

Chair of EuroHPC INFRAG, one of the advisory groups of the Industrial and Scientific…

Experience

Education

  • IHEST

    -

    Theme of the 2016-2017 session: "Knowledge as a common good - the value of science and technology today".

    A place for training, exchange, and reflection, IHEST welcomes a new class of auditors each year. Appointed by ministerial decree, the auditors of IHEST form a pool of more than 300 figures, scientists and non-scientists alike, senior leaders from every sector of society. They take part in informed debate on science, technology, innovation, and their social impacts, with the mission of helping to renew the relationship of trust between science and society.


Volunteer experience

Publications

  • BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    arXiv

    Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.

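    One practical consequence of the open-access release described above: the published checkpoints can be loaded directly with the Hugging Face transformers library. A minimal sketch, assuming transformers and PyTorch are installed; it uses the smaller released bigscience/bloom-560m variant so it runs on a single machine (the full 176B model requires a multi-GPU setup):

    ```python
    # Minimal sketch: text generation with an open-access BLOOM checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "bigscience/bloom-560m"  # full model: "bigscience/bloom"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # BLOOM is decoder-only, so generation is plain left-to-right decoding.
    inputs = tokenizer("Large language models are", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```
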
  • Big data and extreme-scale computing (BDEC) report on " Pathways to Convergence-Toward a shaping strategy for a future software and data ecosystem for scientific inquiry"

    Sage Journals

    Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments, and other devices at the network's edge and the centralized resources of commercial clouds and HPC centers.

    Other authors
  • Industrial Applications of High-Performance Computing: Best Global Practices

    Anwar Osseyran et al., Industrial Applications of High-Performance Computing: Best Global Practices. CRC Press, 2015. VitalBook file.

    "From telescopes to microscopes, from vacuums to hyperbaric chambers, from sonar waves to laser beams, scientists have perpetually strived to apply technology and invention to new frontiers of scientific advancement. Along the way, they have used abacuses, slide rules, calculators, computers, and - today - supercomputers, to crunch the increasingly complicated calculations used to understand and predict natural phenomena. In the course of practicality, science begets engineering, and…

    "From telescopes to microscopes, from vacuums to hyperbaric chambers, from sonar waves to laser beams, scientists have perpetually strived to apply technology and invention to new frontiers of scientific advancement. Along the way, they have used abacuses, slide rules, calculators, computers, and - today - supercomputers, to crunch the increasingly complicated calculations used to understand and predict natural phenomena. In the course of practicality, science begets engineering, and engineering begets innovation. Science, and therefore supercomputing, finds its way into industry around the world, affecting our lives in ways that are simple (cleaner clothes, faster speedboats, and more spin on a seven-iron) and profound (fewer drug interactions, safer cars, and new energy sources). Altogether, industry consumes more than half of all high performance computing usage worldwide. This book tells that story."

    Industrial Applications of High-Performance Computing: Best Global Practices supplies computer engineers and researchers with a state-of-the-art supercomputing reference. This book also keeps policymakers and industrial decision-makers informed about the economic impact of these powerful technological investments.

    Other authors
  • DEUS Full Observable Universe Simulation: the numerical challenge

    EASC2013 - Solving Software Challenges for Exascale

    We have performed the first-ever numerical N-body simulations of the full observable universe with Dark Energy (DEUS "Dark Energy Universe Simulation" FUR "Full Universe Run"). Three different Dark Energy models were simulated. Each run evolved 550 billion particles on an Adaptive Mesh Refinement grid with more than two and a half trillion computing points, along the entire evolutionary history of the universe and across six orders of magnitude in length scale, from the size of the Milky Way to that of the whole observable Universe. To date, these are the largest and most advanced complete cosmological simulations ever run. They provide unique information on the formation and evolution of the largest structures in the universe and exceptional support to future observational programs dedicated to mapping the distribution of matter and galaxies in the universe. The simulations ran on 4752 (of 5040) thin nodes of the BULL supercomputer CURIE, each using more than 300 TB of memory and 10 million hours of computing time. About 150 PBytes of data were generated throughout the runs. Using an advanced and innovative reduction workflow, the amount of useful stored data was reduced to 1.5 PBytes.

    Other authors
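
    A back-of-envelope check of the quoted figures (an illustration, not from the paper): spreading 300 TB of aggregate memory over 550 billion particles gives

    \( \frac{300 \times 10^{12}\,\mathrm{B}}{550 \times 10^{9}\ \text{particles}} \approx 545\ \mathrm{B/particle}, \)

    a budget that must cover the particle state as well as the AMR grid and solver workspace.
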
  • DEUS Full Observable ΛCDM Universe Simulation: the numerical challenge

    SC 2012

    We performed a massive N-body simulation of the full observable universe. It evolved 550 billion particles on an Adaptive Mesh Refinement grid with more than two trillion computing points, along the entire evolutionary history of the Universe and across six orders of magnitude in length scale, from the size of the Milky Way to the whole observable Universe. To date, this is the largest and most advanced cosmological simulation ever run. It will have a major scientific impact and provide exceptional support to future observational programs dedicated to mapping the distribution of matter and galaxies in the Universe. The simulation ran on 4752 (of 5040) thin nodes of the BULL supercomputer CURIE, using 300 TB of memory for 10 million hours of computing time. 50 PBytes of raw data were generated throughout the run, reduced to a useful 500 TBytes using an advanced and innovative reduction workflow.

    Other authors
  • Parallel Computing with GPUs

    Intro of the EuroGPU minisymposium (Parco 2009)

    The success of the gaming industry is now pushing processor technology like we have never seen before. Since recent graphics processors (GPUs) have been improving both their programmability and their floating-point processing power, they have become very appealing as accelerators for general-purpose computing. This first EuroGPU minisymposium brought together several experts working on the development of techniques and tools that improve the programmability of GPUs, as well as experts interested in harnessing the computational power of GPUs for scientific applications. This short summary thus gives a quick but useful overview of some of the major recent advances in modern GPU computing.

    Other authors
    • Anne C. ELSTER
  • Linear solver for reservoir simulation on GPUs (NVIDIA + CUDA)

    Aristote

    Our goal was to develop a hybrid CUDA version, running on GPUs, of our internal linear solver used for reservoir and basin modeling. This solver uses an ILU(0) or AMG preconditioner with a BiCGStab solver as backend.

    Other authors
    • Thomas Guignon
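
    For readers unfamiliar with this solver stack, here is a minimal CPU-side sketch of the same combination, ILU-preconditioned BiCGStab, using scipy (the CUDA port would run the same building blocks on the GPU; the tridiagonal test matrix is a hypothetical stand-in for a reservoir/basin system, and scipy's spilu is a threshold ILU standing in for ILU(0)):

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical stand-in for a reservoir/basin matrix: a small,
    # diagonally dominant tridiagonal system.
    n = 1000
    A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Incomplete LU factorization wrapped as a preconditioner operator.
    ilu = spla.spilu(A)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    # BiCGStab with the ILU preconditioner, as in the solver described above.
    x, info = spla.bicgstab(A, b, M=M)
    print("info:", info, " residual:", np.linalg.norm(A @ x - b))
    ```
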
  • Pushing the Limits of 3D Basin Modeling with Linux Clusters

    Proceedings of the EAGE Conference

    In recent years the 3D modelling and simulation of sedimentary basins has become an integral part of the exploration of reservoirs for almost all major oil companies. The most computing-intensive part of the simulation consists in numerically solving a complex system of non-linear partial differential equations predicting the entire physical and chemical processes of hydrocarbon generation and accumulation through geological times. With a price/performance ratio and services in constant progress, Linux clusters have become a very attractive alternative to “traditional” high-performance computing facilities. ...

    Other authors
  • Parallel Preconditioning for Sedimentary Basin Simulations

    Lecture Notes in Computer Science, 2004, Volume 2907/2004, 59-81, DOI: 10.1007/978-3-540-24588-9_9

    The simulation of sedimentary basins aims at reconstructing their historical evolution in order to provide quantitative predictions about phenomena leading to hydrocarbon accumulations. The kernel of this simulation is the numerical solution of a complex system of non-linear partial differential equations (PDE) of mixed parabolic-hyperbolic type in 3D. A discretisation and linearisation of this system leads to very large, ill-conditioned, non-symmetric systems of linear equations with three unknowns per mesh cell, i.e. pressure, geostatic load, and oil saturation.
    This article describes the parallel version of a preconditioner for these systems, presented in its sequential form in [7]. It consists of three steps: in the first step a local decoupling of the pressure and saturation unknowns aims at concentrating in the “pressure block” the elliptic part of the system, which is then, in the second step, preconditioned by AMG. The third step finally consists in recoupling the equations. Each step is efficiently parallelised using a partitioning of the domain into vertical layers along the y-axis and a distributed memory model within the PETSc library (Argonne National Laboratory, IL). The main new ingredient in the parallel version is a parallel AMG preconditioner for the pressure block, for which we use the BoomerAMG implementation in the hypre library [4].
    Numerical results on real case studies exhibit (i) a significant reduction of CPU times, up to a factor of 5 with respect to a block Jacobi preconditioner with an ILU(0) factorisation of each block, (ii) robustness with respect to heterogeneities, anisotropies, and high migration ratios, and (iii) a speedup of up to 4 on 8 processors.

    Other authors
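
    The core of the method, an AMG-preconditioned Krylov solve on the elliptic pressure block, can be sketched in a few lines. A minimal illustration using pyamg in place of hypre/BoomerAMG, with a 2D Poisson matrix as a hypothetical stand-in for the decoupled pressure block (the paper's parallel, PETSc-based setting is not reproduced here):

    ```python
    import numpy as np
    import scipy.sparse.linalg as spla
    import pyamg  # pip install pyamg

    # Hypothetical stand-in for the decoupled, elliptic "pressure block".
    A = pyamg.gallery.poisson((200, 200), format="csr")
    b = np.ones(A.shape[0])

    # Classical (Ruge-Stuben) AMG hierarchy exposed as a preconditioner:
    # one V-cycle per Krylov iteration, mirroring step two of the method.
    ml = pyamg.ruge_stuben_solver(A)
    M = ml.aspreconditioner(cycle="V")

    x, info = spla.bicgstab(A, b, M=M)
    print("info:", info, " residual:", np.linalg.norm(A @ x - b))
    ```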

Courses

  • HPC architectures

    -

  • Parallel numerical algebra

    -

  • Programming languages

    -

Projects

  • EPI European Processor Initiative

- present

    The European Processor Initiative (EPI) brings together 23 partners from 10 European countries with the aim of bringing to market a low-power microprocessor.

    It gathers experts from the High Performance Computing (HPC) research community, the major supercomputing centres, and the computing and silicon industry, as well as potential scientific and industrial users. Through a co-design approach, it will design and develop the first European HPC systems-on-chip and accelerators. Both elements will be implemented and validated in a prototype system that will become the basis for a full Exascale machine based on European technology.

  • PPI4HPC - Joint European Public Procurement of Innovations for High Performance Computing

- present

    In this project, a group of leading European supercomputing centres formed a buyers group to execute a joint Public Procurement of Innovative Solutions (PPI) for the first time in the area of High-Performance Computing (HPC). Co-funding by the European Commission (EC) will allow a significant enhancement of the planned pre-exascale HPC infrastructure from 2019 and pave the way for future joint investments in Europe. The total investment is planned to be about €73 million.

  • Simseo - Simulation at the service of businesses

    SiMSEO is a national project supporting very small businesses, SMEs, and mid-caps (TPE, PME, and ETI) in adopting numerical simulation in industry. It was launched by Teratec and GENCI under the call for expressions of interest "Dissemination of numerical simulation in industry" of the Programme Investissements d'Avenir.
    Within SiMSEO, GENCI coordinates local, tailored support with its regional partners. This support scales up to the regions the HPC-PME initiative led by GENCI, Inria, and BPIfrance, already tried out by some sixty SMEs.

    Other creators
  • EXDCI European eXtreme Data and Computing Initiative

    EXDCI’s objective is to coordinate the development and implementation of a common strategy for the European HPC ecosystem.
    I'm chairing WP3, in charge of roadmapping the needs of European scientific and industrial applications in order to update the PRACE Scientific Case.

    Other creators
  • Fortissimo - Factories of the Future Resources, Technology, Infrastructure and Services for Simulation and Modelling

    Fortissimo is a collaborative project that will enable European SMEs to be more competitive globally through the use of simulation services running on a High-Performance Computing cloud infrastructure.

    Other creators
  • ECR : Exascale Computing Research

    ECR is a joint research lab between Intel, CEA, UVSQ, and GENCI. It aims at co-designing scientific applications, development tools, and methodologies for the next generation of HPC systems, called Exascale.

    Other creators
  • HPC PME

    -

    HPC-PME (HPC for SMBs in English) is a French initiative with OSEO and INRIA which aims to foster HPC usage by French SMBs in order to boost innovation and competitiveness.

    HPC-PME relies on an integrated offer with training/best practices, expertise from public research, access to HPC facilities and funding.

    Other creators
  • Mont-Blanc

    -

    Energy efficiency is already a primary concern for the design of any computer system and it is unanimously recognized that future Exascale systems will be strongly constrained by their power consumption. This is why the Mont-Blanc project, which was launched on 1st October, has set itself the following objective: to design a new type of computer architecture capable of setting future global High Performance Computing (HPC) standards that will deliver Exascale performance while using 15 to 30 times less energy.

    This new project is coordinated by the Barcelona Supercomputing Center (BSC) and has a budget of over 14 million Euros, including over 8 million Euros funded by the European Commission.

    Other creators
  • EESI2

    -

    The goal of the European Exascale Software Initiative (EESI2) is to build a European vision and roadmap to address the challenge of the new generation of massively parallel systems, composed of millions of heterogeneous cores, which will provide Petaflop performance in 2010 and Exaflop performance in 2020.

    Other creators
    • Philippe Ricoux

Honors and awards

  • HPCwire 2022 Award for Best Sustainability Innovation in HPC

    HPCWire

    HPE and AMD top four: One recent trend is that many of the most powerful supercomputers in the world are also among the most energy efficient. To that end, HPE and AMD supplied the top four most energy-efficient systems in the world (DOE/ORNL’s Frontier-TDS and Frontier; EuroHPC/CSC’s LUMI; and GENCI’s Adastra), with three of those systems also ranking in the top ten most powerful supercomputers.

  • HPCwire Award 2022 for Best Collaboration among Academia, Industry, and Government

    HPCWire

    Bloom is a global collaborative effort to develop the largest open, multilingual NLP model in the world. Bloom is the result of BigScience, a collaborative effort of more than 1000 global researchers from academia, startups, large companies, HPC centers, Nvidia, Microsoft, and more. The effort was orchestrated by the startup Hugging Face. Researchers used the HPE-built Jean Zay supercomputer of GENCI, hosted at the Institute for Development and Resources in Intensive Scientific Computing (IDRIS). Having such models open and trained on public research infrastructure is very important and, in some fields, a matter of sovereignty for many uses.

  • HPCWire Award 2018 - Best use of HPC in energy

    HPCwire

    PRACE, BSC, and GENCI researchers use molecular simulations to improve processes for “blue energy,” a potentially significant source of global electricity.

  • 2017 HPCwire Editors’ Choice – Top HPC Enabled Scientific Achievement award

    HPCWire

    The HPC resources from GENCI, PRACE, and Compute Canada facilitated the simulation of the Sun’s magnetic cycle by scientists from CEA, CNRS, the University Paris-Diderot, the Harvard-Smithsonian Center for Astrophysics, and the University of Montréal. The discovery of a scaling law for determining the magnetic cycle of a star is pioneering research, and the results will help comprehend violent space-weather phenomena. In addition, the simulations of the magnetism of solar-type stars contribute to the preparations for space missions including ESA’s Cosmic Vision Solar Orbiter and PLATO, whose launches are planned for 2018 and 2024. The results were published in the July 14, 2017 issue of Science.

  • 2014 HPCWire Best Use of HPC in Automotive

    HPCwire

    The French company Renault received the "Best Use of HPC in Automotive" award. In 2014, in the context of PRACE, Renault was provided with 42 million core hours on Curie for optimizing crash-test simulations. This work, a world first in terms of processed parameters (200 parameters, 20 million finite elements), allows Renault to prepare now for the upcoming EuroNCAP 2015 safety standards.

  • HPCwire’s People to Watch in 2014

    HPCWire

    HPCwire’s People to Watch recognizes a set of HPC experts who made or contributed to major breakthroughs and results in the development or use of HPC facilities.

    https://www.hpcwire.com/people-watch-2014/stephane-requena/

  • Readers' Choice 2013 for the Best Application of Big Data on CURIE

    HPCWire

    Best Application of Big Data, for the use of the bullx supercomputer owned by GENCI, for the simulation of the evolving structure of the entire observable Universe from the Big Bang to the present day, carried out at the Observatoire de Paris.

  • Prix de l'Innovation Big Data 2013

    Corps Event, JDN and Sopra Group

    Special Prize for an innovative Big Data project with Observatoire de Paris.
    The team led by J.-M. Alimi performed, last year, three massive cosmological simulations on CURIE, our 2 PFlops BULL HPC system at TGCC.
    These simulations used 76,000 cores and 304 TB of main memory; they generated a total of 150 PB (at 50 GB/s) of raw data, which was post-processed on the fly to reduce it to 1.5 PB of final data.

    These were the first-ever full-Universe simulations and a major breakthrough in our understanding of the evolution of the Universe.

    http://www.journaldunet.com/solutions/dsi/trophees-de-l-innovation-big-data/prix-special.shtml
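
    A back-of-envelope check of the quoted figures (an illustration, not part of the award citation): the reduction factor is \( 150\,\mathrm{PB} / 1.5\,\mathrm{PB} = 100\times \), and writing 150 PB at the quoted 50 GB/s aggregate rate corresponds to \( 150 \times 10^{15}\,\mathrm{B} \,/\, 50 \times 10^{9}\,\mathrm{B/s} = 3 \times 10^{6}\,\mathrm{s} \approx 35 \) days of cumulative I/O.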

  • Award winner for PRACE in "Competitive Industries"

    10th e-infrastructure concertation meeting

    The 10th e-infrastructure concertation meeting gave an award to the Open R&D industrial offer provided by PRACE since mid-2012. We have engaged more than 10 companies, including SMEs.

  • Readers' Choice for Best Use of HPC in an "edge HPC" application, with the CURIE supercomputer (powered by bullx) owned by GENCI, for the first full-Universe simulation (Observatoire de Paris)

    HPCwire

    We are very proud of this recognition from the readers of HPCwire for Curie, our latest petascale supercomputer installed at TGCC.
    This massive cosmology simulation carried out by Observatoire de Paris is exactly what we want to drive with Curie: leading-edge simulation.

Languages

  • English

Professional working proficiency

  • Spanish

Native or bilingual proficiency

  • French

Native or bilingual proficiency

Recommendations received
