ILLUMINE
(LCLS, SSRL, ALS, APS, NSLS-II, SNS/HFIR)
ILLUMINE - Intelligent Learning for Light Source and Neutron Source User Measurements including Navigation and Experiment Steering.
Experiments at X-ray light sources and neutron sources enable the direct observation and characterization of materials and molecular assemblies critical for energy research. Ongoing facility enhancements are exponentially increasing the rate and volume of data collected, opening new frontiers of scientific research but also requiring advances in computing, algorithms, and analysis to exploit these data effectively. As data rates surge, accelerated processing workflows are needed that can mine continuously streamed data to select interesting events, reject poor data, and adapt to changing experimental conditions. Real-time data analysis can offer immediate feedback to users or direct instrument controls for self-driving experiments. Autonomous experiment steering, in turn, is poised to maximize the efficiency and quality of data collection by connecting the user's intent in collecting data, data analysis results, and algorithms capable of driving intelligent data collection and guiding the instrument to optimal operating regimes.
ILLUMINE will provide rapid data analysis and autonomous experiment steering capabilities to support cutting-edge research driven by unprecedented data production rates, tightly coupling high-throughput experiments, advanced computing architectures, and novel AI/ML algorithms to reduce the time needed to optimize instrument configurations, leverage large datasets, and make the best use of oversubscribed beam time. To deliver these pivotal capabilities, rapid data analysis and autonomous experiment steering, for diverse experiments across the facilities, we will develop algorithms that perform real-time compression and ML inference at the experiment edge and expand current edge-to-HPC analysis pipelines. We will also create advanced workflow monitoring and decision support systems, including reinforcement learning for data optimization, uncertainty handling, and high-dimensional search algorithms for experiments. Connecting these two elements is a multi-facility framework: a common interoperability layer for autonomous experiment workflows, built on the widely used Bluesky data collection platform, that packages reusable off-the-shelf components into an accessible toolbox from which tailored workflows can be assembled to meet specific scientific needs.
Collectively, these advances are poised to unlock the transformative potential of the facility upgrades by delivering rapid analysis and workflow monitoring algorithms built on a common, interoperable framework that ensures their broad transferability across facilities, instruments, and experiments. Ultimately, these capabilities will significantly enhance experimental output and enable groundbreaking scientific exploration, shedding light on some of the most challenging and novel scientific questions facing the nation.
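To make the steering concept above concrete, the sketch below shows what a minimal "measure, analyze, decide" loop can look like when written as a Bluesky plan against the simulated hardware that ships with ophyd. It is an illustration only, not project code: the choose_next_point() rule is a hypothetical stand-in for the real-time analysis and decision-support algorithms ILLUMINE will develop.

```python
# Illustrative sketch: a steering loop as a Bluesky plan on simulated hardware.
import bluesky.plan_stubs as bps
import bluesky.preprocessors as bpp
from bluesky import RunEngine
from ophyd.sim import det, motor  # simulated detector (Gaussian vs. motor) and motor


def choose_next_point(history):
    """Hypothetical decision rule: step past the best position seen so far."""
    best_x, _ = max(history, key=lambda pair: pair[1])
    return best_x + 0.5


@bpp.run_decorator()
def steered_scan(detector, mover, start, n_steps):
    """Collect a point, analyze it, and let the analysis pick the next position."""
    history = []
    next_x = start
    for _ in range(n_steps):
        yield from bps.mv(mover, next_x)                           # move to the suggested position
        reading = yield from bps.trigger_and_read([detector, mover])
        value = reading[detector.name]["value"]                    # simulated detector's data key matches its name
        history.append((next_x, value))
        next_x = choose_next_point(history)                        # "experiment steering" step


RE = RunEngine({})
RE(steered_scan(det, motor, start=-2.0, n_steps=5))
```

In the full framework, the hand-written decision rule would be replaced by the project's analysis pipelines and decision-support services, while the plan structure remains a reusable component.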
Related links:
- LCLStream Project
- Building Foundation and Surrogate Models for Experiment Steering
- Bluesky - enables experimental science at the lab-bench or facility scale. Learn more at https://blueskyproject.io/
- Xopt - flexible high-level optimization in Python (see the sketch after this list). Learn more at https://github.com/xopt-org/Xopt
- Blopt - Beamline Optimization Tools (packaged with Bluesky). Learn more at https://pypi.org/project/blopt/
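As referenced in the Xopt entry above, the following sketch shows how an experiment objective might be wrapped for Bayesian optimization with Xopt. The objective function is a toy stand-in invented for this example, and the class names (VOCS, Evaluator, ExpectedImprovementGenerator) reflect one recent Xopt release; details may differ across versions.

```python
# Illustrative sketch: optimizing a toy "signal" with Xopt's Bayesian optimization interface.
import math

from xopt import Evaluator, VOCS, Xopt
from xopt.generators.bayesian import ExpectedImprovementGenerator

# Search space and goal: one tunable variable, one objective to maximize.
vocs = VOCS(
    variables={"x": [0.0, 5.0]},
    objectives={"signal": "MAXIMIZE"},
)


def measure(inputs):
    """Hypothetical stand-in for a beamline measurement at position x."""
    x = inputs["x"]
    return {"signal": math.exp(-((x - 2.0) ** 2))}  # peaked around x = 2


X = Xopt(
    vocs=vocs,
    evaluator=Evaluator(function=measure),
    generator=ExpectedImprovementGenerator(vocs=vocs),
)

X.random_evaluate(3)      # seed the model with a few random points
for _ in range(10):
    X.step()              # propose the next point, evaluate it, update the model

print(X.data)             # DataFrame of all evaluated points
```

In an autonomous workflow, the measure() function would instead drive a data collection and analysis step, closing the loop between the optimizer and the instrument.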