Date: 16 October 2020
Source: ILL
During the second reactor cycle in 2020, the ThALES team and SCI/CS at ILL in Grenoble, France, commissioned and tested a self-learning algorithm developed by the CAMERA team at Lawrence Berkeley National Laboratory (Berkeley Lab). For the first time, an algorithm took control of the measurement process without any human intervention.
Completely agnostic (that is, without any prior information on the physical model or expected signal of the measured sample), the algorithm explored various accessible instrument regions and reconstructed the signal from a greatly reduced total number of measurement points compared with conventional grid-scanning techniques such as const-Q and const-E scans.
Today’s news is filled with stories about progress in machine learning and artificial intelligence in all aspects of life, be it self-driving cars, mobile applications, or face- and speech-recognition systems such as Siri, Alexa, or Google Home. Most of the algorithms behind these applications fall into the category of so-called supervised learning, where the algorithms are trained on huge amounts of data (‘big data’) to classify images or sound patterns. Similar techniques could also accelerate data analysis for instruments with high data-acquisition rates, as very recently discussed at a joint ILL/ESRF workshop on Artificial Intelligence.
Compared with internet applications of big data, which are fed with tremendous amounts of data from billions of users, the data sets from synchrotron or neutron experiments are orders of magnitude smaller (typically in the megabyte to gigabyte range). Nevertheless, some of these algorithms have already been applied successfully to data sets taken from neutron spectrometers.
While technical developments lead to rapidly growing data sets on most modern instruments, the power of a triple-axis spectrometer (TAS) lies not in the amount of data it collects, but in the strength and efficiency with which it measures very specific data (i.e. for specific transfers of momentum and energy, for various parameters such as pressure or magnetic field, etc.). In this respect, it is superior even to the most recent techniques based on “large data”, despite the somewhat simple principle of measuring data point by point at a (relatively) slow rate.
This also means that, for an efficient measurement, it is particularly important that this effort is well targeted, i.e. that data are taken at meaningful points and that no time is wasted obtaining information that later turns out to be irrelevant.
A promising route lies in autonomous learning, where algorithms such as gpCAM – developed by Marcus Noack of the CAMERA team at Berkeley Lab – learn from comparatively little input data and decide for themselves which steps to take next. The main ingredients of gpCAM are a flexible Gaussian-process engine and a powerful mathematical optimization, which is used both to train the process and to find the next optimal measurement points.
In Bayesian terms, gpCAM estimates the posterior mean and covariance and uses them in a function optimization to calculate the optimal next measurement point. The posterior is based on a prior Gaussian probability density function, which is repeatedly retrained on the previously measured points.
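This loop can be sketched in a few lines of Python. The snippet below is a minimal, self-contained illustration of the idea, not the gpCAM API itself: a Gaussian process is retrained on all points measured so far, and the next point is chosen where the posterior uncertainty is largest (pure exploration). The kernel, its length scale, the measurement budget and the synthetic ‘signal’ are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def rbf_kernel(A, B, length_scale=0.2):
        """Squared-exponential covariance between two sets of (Q, E) points."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / length_scale**2)

    def gp_posterior(X, y, X_query, noise=1e-3):
        """Posterior mean and standard deviation at the query points:
        mean = k(X*, X)[K + s^2 I]^-1 y,  var = k(X*, X*) - k(X*, X)[K + s^2 I]^-1 k(X, X*)."""
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        Ks = rbf_kernel(X_query, X)
        mean = Ks @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))  # k(x, x) = 1 here
        return mean, np.sqrt(np.clip(var, 0.0, None))

    def measure(x):
        """Stand-in for one TAS count at (Q, E): a synthetic dispersive excitation."""
        q, e = x
        return np.exp(-((e - (0.5 + 2.0 * q**2)) ** 2) / 0.02) + 0.02 * rng.random()

    # Candidate points covering the accessible (Q, E) region (instrument limits assumed)
    q_vals = np.linspace(0.0, 1.0, 50)
    e_vals = np.linspace(0.0, 3.0, 50)
    candidates = np.array([(qi, ei) for qi in q_vals for ei in e_vals])

    # A handful of random starting points; afterwards the loop decides where to measure
    X = candidates[rng.choice(len(candidates), size=5, replace=False)]
    y = np.array([measure(x) for x in X])

    for _ in range(200):                          # measurement budget
        _, sigma = gp_posterior(X, y, candidates)
        x_next = candidates[np.argmax(sigma)]     # most uncertain point = next measurement
        X = np.vstack([X, x_next])
        y = np.append(y, measure(x_next))

    mean, _ = gp_posterior(X, y, candidates)      # reconstructed map on the full candidate grid
    print(f"map reconstructed from {len(X)} points instead of {len(candidates)}")

In gpCAM itself the kernel, the acquisition function and the optimizer are configurable; this sketch hard-codes a fixed squared-exponential kernel and uncertainty-driven exploration purely for illustration.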
The algorithm is known as the Kriging model in the geosciences, named after Danie G. Krige, a South African geologist who sought to estimate the most likely distribution of gold from a few boreholes in the Witwatersrand reef complex in South Africa. In some respects, this task resembles the situation in an experiment: the meaningful inelastic neutron scattering intensity S(Q,ω) may be distributed like gold veins in the space defined by the momentum transfer Q and energy transfer ħω, and the experimenter does not know where to look next. Moreover, while the gold seeker moves across a two-dimensional space (the land) when choosing a place to drill, the space defined by momentum and energy transfer is four-dimensional.
It is a surprisingly neglected practice to quantify measurement strategies with statistical or Bayesian methods. Simply put, the neglected question is how many individual measured points S(Qi,ωi) are needed to acquire sufficient information, whether in the agnostic case or to confirm pre-established theoretical models.
A glance at published work suggests that the amount of meaningful data is of the order of kilobytes or less, rather than gigabytes or more. The measurement time for hundreds of (meaningful) points on a TAS is of the order of hours, compared with several days in the traditional setting, which shows the optimization potential of an assisting autonomous algorithm. Already in the very first test runs, without any specific tuning of kernel definitions or acquisition functions, the algorithm demonstrated its efficiency, as shown in the example below, where a conventional grid scan of a magnetic excitation is compared with various intermediate states of the gpCAM acquisition. Moreover, these results were achieved without any human supervision or interaction.
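To make the hours-versus-days comparison above concrete, here is a back-of-the-envelope calculation; the counting time per point and the size of the conventional mesh are assumptions for illustration only, not measured values.

    minutes_per_point = 2              # assumed average counting + positioning time per point
    grid_points = 51 * 51              # assumed conventional const-Q/const-E mesh
    adaptive_points = 300              # "hundreds of points" chosen by the algorithm

    print(f"grid scan:      {grid_points * minutes_per_point / 60:5.0f} h "
          f"(~{grid_points * minutes_per_point / (60 * 24):.1f} days)")
    print(f"autonomous run: {adaptive_points * minutes_per_point / 60:5.0f} h")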
The next goal is to assist gpCAM by supplying previously known information, such as crystal symmetry or even theoretical models, in order to reduce the number of measured points and increase the efficiency of the measurement even further.