9:00-9:05 | Opening Remarks and Welcome
9:05-10:00 | Invited Talk: Globus: Enabling Scalable and Sustainable Research for Data-Intensive Science | Dr. Kyle Chard, University of Chicago
10:00-10:30 | Morning Break
10:30-10:55 | Design and Implementation of a Custom Hardware Accelerator for SZx Compression in Chipyard (full paper) | Connor Bohannon, Kazutomo Yoshii, Sheng Di, Franck Cappello, Antonino Miceli | Best Paper Award
10:55-11:20 | Evaluating Accuracy and Performance Tradeoffs in GPU Accelerated Single Cell RNA-seq Analysis (full paper) | Cory Gardner, Seyun Jeong, Oam Khatavkar, Aiden Moon, Qinglei Cao, Tae-Hyuk Ahn | Best Paper Runner-up Award
11:20-11:45 | Benchmarking Cutting-Edge Scientific Error-Bounded Lossy Compressors on Correlation-Based Rate-Distortion (full paper) | Ziwei Qiu, Jinyang Liu, Kai Zhao, Robert Underwood, Sheng Di | Best Paper Runner-up Award
11:45-12:10 | Data Management System Analysis for Distributed Computing Workloads (full paper) | Kuan-Chieh Hsu, Sairam Sri Vatsavai, Ozgur O. Kilic, Sankha Dutta, Yihui (Ray) Ren, David Park, Tania Korchuganova, Joseph Boudreau, Tasnuva Chowdhury, Shengyu Feng, Raees Ahmad Khan, Jaehyung Kim, Norbert Podhorszki, Scott Klasky, Tadashi Maeno, Paul Nilsson, Verena Ingrid Martinez Outschoorn, Fred Suter, Wei Yang, Yiming Yang, Shinjae Yoo, Alexei Klimentov, Adolfy Hoisie
12:10-12:30 | Building n-Dimensional Trees for Resolution-Based Progressive Compression (short paper) | Brandon Alexander Burtchell, Martin Burtscher
12:30-14:00 | Lunch Break
14:00-14:20 | FZModules: A Heterogeneous Computing Framework for Customizable Scientific Data Compression Pipelines (short paper) | Skyler Ruiter, Jiannan Tian, Fengguang Song | Best Short Paper Award
14:20-14:40 | On the Compressibility of Floating-Point Data in Posit and IEEE-754 Representation (short paper) | Andrew Rodriguez, Martin Burtscher
14:40-15:00 | ASCRIBE-XR: Extended Reality for Visualization of Scientific Images (short paper) | Ronald J. Pandolfi, Julian Todd, Jeffrey J Donatelli, Daniela Ushizima
15:00-15:30 | Afternoon Break
15:30-15:55 | Lightweight CNN-Based Artifact Reduction for Scientific Error-bounded Lossy Compression (full paper) | Zizhe Jian, Pu Jiao, Bohan Zhang, Sheng Di, Xin Liang, Guanpeng Li, Huangliang Dai, Zizhong Chen, Franck Cappello
15:55-16:20 | Compression Error Sensitivity Analysis for Different Experts in MoE Model Inference (full paper) | Songkai Ma, Zhaorui Zhang, Sheng Di, Benben Liu, Xiaodong Yu, Guanpeng Li, Xiaoyi Lu, Dan Wang
16:20-16:45 | Characterizing the Performance of Parallel Data-Compression Algorithms across Compilers and GPUs (full paper) | Brandon Alexander Burtchell, Martin Burtscher
16:45-17:10 | Integrating Distributed SQL Query Engines with Object-Based Computational Storage (full paper) | Junghyun Ryu, Soon Hwang, Junhyeok Park, Seonghoon Ahn, JeoungAhn Park, Jeongjin Lee, Jinna Yang, Soonyeal Yang, Jungki Noh, Qing Zheng, Woosuk Chung, Hoshik Kim, Youngjae Kim
A growing disparity between simulation speeds and I/O rates makes it increasingly infeasible for high-performance applications to save all results for offline analysis. By 2025, computers are expected to compute at 10^18 ops/sec but write to disk only at 10^12 bytes/sec: a compute-to-output ratio 200 times worse than on the first petascale system. In this new world, applications must increasingly perform online data analysis and reduction, tasks that introduce algorithmic, implementation, and programming model challenges that are unfamiliar to many scientists and that have major implications for the design and use of various elements of exascale systems.
This trend has spurred interest in high-performance online data analysis and reduction methods, motivated by a desire to conserve I/O bandwidth, storage, and/or power; increase the accuracy of data analysis results; and make optimal use of parallel platforms, among other factors. It requires our community to understand the complex relationships between application design, data analysis and reduction methods, programming models, system software, hardware, and other elements of next-generation high-performance computing systems, particularly given constraints such as applicability, fidelity, performance portability, and power efficiency.
There are at least three important questions that our community is striving to answer: (1) whether several orders of magnitude of data reduction are possible for exascale science; (2) how to understand the performance and accuracy trade-offs of data reduction; and (3) how to effectively reduce data while preserving the information hidden in large scientific datasets. Tackling these challenges requires expertise from computer science, mathematics, and application domains to study the problem holistically and to develop solutions and hardened software tools that can be used by production applications.
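To make question (2) concrete, the brief sketch below shows how an error-bounded reduction scheme trades accuracy for compression ratio. It is only an illustration, not any of the compressors presented at the workshop: the uniform-quantization-plus-zlib pipeline, the synthetic sine field, and the chosen error bounds are all assumptions made for this example.

# Minimal sketch (illustrative only): quantize to an absolute error bound,
# then losslessly compress the integer codes with zlib, and report how the
# compression ratio and worst-case error change as the bound is loosened.
import zlib
import numpy as np

def reduce_with_error_bound(data: np.ndarray, abs_bound: float):
    """Quantize to +/- abs_bound and zlib-compress the resulting codes."""
    codes = np.round(data / (2.0 * abs_bound)).astype(np.int32)
    payload = zlib.compress(codes.tobytes(), 6)
    recon = codes.astype(np.float64) * (2.0 * abs_bound)
    ratio = data.nbytes / len(payload)          # original bytes / reduced bytes
    max_err = float(np.max(np.abs(data - recon)))
    return ratio, max_err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Smooth synthetic field standing in for simulation output.
    x = np.linspace(0.0, 8.0 * np.pi, 1_000_000)
    field = np.sin(x) + 0.01 * rng.standard_normal(x.size)
    for bound in (1e-2, 1e-3, 1e-4):
        ratio, max_err = reduce_with_error_bound(field, bound)
        print(f"bound={bound:.0e}  ratio={ratio:6.1f}x  max_err={max_err:.2e}")

Tightening the bound improves worst-case fidelity at the cost of a lower compression ratio, which is precisely the trade-off that question (2) asks the community to characterize for real applications and compressors.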
The goal of this workshop is to provide a focused venue for researchers in all aspects of data reduction and analysis to present their research results, exchange ideas, identify new research directions, and foster new collaborations within the community.
Topics of interest include but are not limited to:
• Data reduction methods for scientific data
° Data deduplication methods
° Motif-specific methods (structured and unstructured meshes, particles, tensors, ...)
° Methods with accuracy guarantees
° Feature/QoI-preserving reduction
° Optimal design of data reduction methods
° Compressed sensing and singular value decomposition
• Metrics to measure reduction quality and provide feedback (a small example of such metrics follows this topic list)
• Data analysis and visualization techniques that take advantage of the reduced data
° AI/ML methods
° Surrogate/reduced-order models
° Feature extraction
° Visualization techniques
° Artifact removal during reconstruction
° Methods that take advantage of the reduced data
• Data analysis and reduction co-design
° Methods for using accelerators
° Accuracy and performance trade-offs on current and emerging hardware
° New programming models for managing reduced data
° Runtime systems for data reduction
• Large-scale code coupling and workflows
• Experience of applying data reduction and analysis in practical applications or use-cases
° State of the practice
° Application use-cases which can drive the community to develop MiniApps
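As a companion to the "metrics to measure reduction quality" topic above, the sketch below computes two commonly used quality measures for reconstructed data. The metric choices, the random field, and the perturbation standing in for reduction error are illustrative assumptions, not workshop requirements.

# Illustrative sketch: normalized RMSE and PSNR between an original field and
# its reconstruction after reduction (here, a small random perturbation).
import numpy as np

def nrmse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Root-mean-square error normalized by the original value range."""
    value_range = original.max() - original.min()
    return float(np.sqrt(np.mean((original - reconstructed) ** 2)) / value_range)

def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, using the value range as the peak."""
    return float(-20.0 * np.log10(nrmse(original, reconstructed)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    field = rng.standard_normal((256, 256))
    # Stand-in "reconstruction": the field perturbed by small reduction error.
    recon = field + 1e-3 * rng.standard_normal(field.shape)
    print(f"NRMSE = {nrmse(field, recon):.3e}, PSNR = {psnr(field, recon):.1f} dB")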
Full paper submission deadline: August 25, 2025 (AoE) (extended from August 15, 2025)
Author notification: September 5, 2025
Publication right submission form: September 12, 2025
Paper metadata for SC25 program (submit to Linkings): September 22, 2025
Camera-ready final paper submission deadline (submit to TAPS): September 22, 2025 (AoE) (updated from September 26, 2025)
AD/AE submission deadline (optional; included with the final paper PDF): September 22, 2025 (AoE) (updated from September 26, 2025)
Submission instructions: here
ACM template (for papers with AD/AE): here
• Papers should be submitted electronically via the SC Submission Website: https://submissions.supercomputing.org
• Paper submissions should be in the single-blind ACM proceedings format, following the SC25 paper submission guidelines. The ACM proceedings template is available at: https://www.acm.org/publications/proceedings-template
• DRBSD-11 will accept full papers (up to 10 pages, including references/appendix) and short papers (up to 6 pages, excluding references/appendix).
• Submitted papers will be evaluated by at least three reviewers based on technical merit.
• DRBSD-11 encourages submissions to include an artifact description and evaluation. Details on the SC25 Reproducibility Initiative: https://sc25.supercomputing.org/program/papers/reproducibility-initiative.
• DRBSD-11 will select papers for a Best Paper Award and a Best Paper Runner-up Award. All accepted papers will be included in the SC workshop proceedings.
Sheng Di, Argonne National Laboratory, USA
Ana Gainaru, Oak Ridge National Laboratory, USA
Xin Liang, University of Kentucky, USA
Kento Sato, RIKEN, Japan
Jieyang Chen, University of Oregon
Ian Foster, Argonne National Laboratory/University of Chicago
Scott Klasky, Oak Ridge National Laboratory
Qing Liu, New Jersey Institute of Technology
Todd Munson, Argonne National Laboratory
Tania Banerjee, University of Houston
Ayan Biswas, Los Alamos National Laboratory
Suren Byna, Ohio State University
Jon Calhoun, Clemson University
Franck Cappello, Argonne National Laboratory, University of Illinois
Jong Youl Choi, Oak Ridge National Laboratory
Sheng Di, Argonne National Laboratory, University of Chicago
Ana Gainaru, Oak Ridge National Laboratory
Qian Gong, Oak Ridge National Laboratory
Ganesh Gopalakrishnan, University of Utah
Pascal Grosset, Los Alamos National Laboratory
Xubin He, Temple University
Dan Huang, Sun Yat-sen University
Jiajun Huang, University of South Florida
Rajeev Jain, Argonne National Laboratory
Sian Jin, Temple University
Scott Klasky, Oak Ridge National Laboratory
Sidharth Kumar, University of Illinois Chicago
Guanpeng Li, University of Florida
Samuel Li, NVIDIA Corporation
Xin Liang, University of Kentucky
Peter Lindstrom, Lawrence Livermore National Laboratory
Jinyang Liu, University of Houston
Tao Lu, DapuStor Corporation
Todd Munson, Argonne National Laboratory
John Patchett, Los Alamos National Laboratory
Viktor Reshniak, Oak Ridge National Laboratory
Houjun Tang, Lawrence Berkeley National Laboratory
Robert Underwood, Argonne National Laboratory
Xiaodong Yu, Stevens Institute of Technology
Chengming Zhang, University of Houston
Kai Zhao, Florida State University