Dr. Kodavasal will discuss work done at Argonne to develop both capability and capacity computing for CONVERGE engine simulations on supercomputing platforms. Here, capability computing refers to running a single, extremely high-fidelity simulation on thousands of cores (for example, to generate gold-standard simulation results), while capacity computing refers to running thousands of medium- to high-fidelity simulations in parallel (for example, for sensitivity analyses, cyclic variability studies, or design optimization).

The Argonne team, in collaboration with Convergent Science, addressed computational bottlenecks in I/O, communication, and load balancing to substantially improve the performance of CONVERGE on both supercomputers and conventional clusters. With these improvements, CONVERGE was scaled to 4096 cores of Mira, Argonne’s IBM Blue Gene/Q supercomputer, for a single high-fidelity engine simulation. An improved, stiffness-based chemistry load-balancing scheme for the SAGE detailed chemistry solver was also developed, which reduced the runtime near ignition by a factor of three.

In addition, a capacity computing workflow is under development to enable seamless pre-processing, running, and post-processing of thousands of medium- to high-fidelity simulations on supercomputers and other high-performance computing (HPC) systems; it will provide a high level of automation and error resiliency. Beyond discussing advances in capability and capacity computing for CONVERGE, this presentation will provide recommendations on how to run HPC engine cases with CONVERGE.
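To illustrate the idea behind cost-aware chemistry load balancing, the following is a minimal, hypothetical sketch (not CONVERGE's actual implementation): instead of assigning an equal number of cells to each rank, cells are distributed by an estimated per-cell chemistry cost (a stiffness proxy), using a greedy longest-processing-time heuristic. The function name, cost values, and rank count below are illustrative assumptions.

```python
# Illustrative sketch of stiffness-based load balancing (hypothetical,
# not CONVERGE's implementation): assign cells to ranks by estimated
# chemistry cost rather than by cell count alone.
import heapq

def balance_by_cost(cell_costs, n_ranks):
    """Greedy longest-processing-time assignment: visit cells in order
    of descending cost, always giving the next cell to the currently
    least-loaded rank. Returns one list of cell indices per rank."""
    order = sorted(range(len(cell_costs)), key=lambda i: -cell_costs[i])
    heap = [(0.0, r) for r in range(n_ranks)]  # (current load, rank id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_ranks)]
    for i in order:
        load, r = heapq.heappop(heap)
        assignment[r].append(i)
        heapq.heappush(heap, (load + cell_costs[i], r))
    return assignment

# Near ignition, a few cells are far stiffer (costlier) than the rest;
# naive even splitting would leave one rank doing most of the work.
costs = [100.0, 90.0, 5.0, 5.0, 4.0, 3.0, 2.0, 1.0]
parts = balance_by_cost(costs, 2)
loads = [sum(costs[i] for i in p) for p in parts]
print(loads)  # both ranks end up with equal estimated work: [105.0, 105.0]
```

The key design point is that the balancing weight is the predicted chemistry cost per cell, so ranks finish the chemistry step at roughly the same time even when stiffness is highly non-uniform across the domain.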