An introduction to the 4th issue of the Journal of Machine Learning

Journal of Machine Learning
Jan 9, 2023


Journal of Machine Learning (JML, http://www.jml.pub/) is a new journal, published by Global Science Press and sponsored by the Center for Machine Learning Research, Peking University & AI for Science Institute, Beijing. Professor Weinan E serves as the Editor-in-Chief, together with managing editors Jiequn Han, Arnulf Jentzen, Qianxiao Li, Lei Wang, Zhi-Qin John Xu, and Linfeng Zhang. The editorial board, listed at http://www.jml.pub/intro/editor.html?journal=jml, includes many leading international scientists. JML publishes high-quality research papers in all areas of machine learning (ML), including innovative algorithms, theory, and applications, with a balanced coverage of both theory and application.

An introduction to the 4th issue

Approximation of Functionals by Neural Network Without Curse of Dimensionality

Authors: Yahong Yang & Yang Xiang

DOI: 10.4208/jml.221018, J. Mach. Learn., 1 (2022), pp. 342–372.

Learning functionals or operators by neural networks is nowadays widely used in computational and applied mathematics. Compared with learning functions by neural networks, an essential difference is that the input spaces of functionals and operators are infinite-dimensional. Some recent works learn functionals or operators by reducing the input space to a finite-dimensional one. However, the curse of dimensionality is unavoidable in this type of method: to maintain the accuracy of an approximation, the number of sample points must grow exponentially with the dimension.
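To make the exponential growth concrete, here is a standard back-of-the-envelope illustration (ours, not the paper's): resolving a function on a uniform grid of mesh size $\varepsilon$ in the cube $[0,1]^d$ already requires on the order of

$$N(\varepsilon, d) \sim \varepsilon^{-d}$$

sample points, i.e., the cost is exponential in the dimension $d$. For functionals, whose inputs live in infinite-dimensional spaces, any finite-dimensional reduction inherits this blow-up.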

In this paper, we establish a new method for the approximation of functionals by neural networks without the curse of dimensionality. Functionals, such as linear functionals and energy functionals, have a wide range of important applications in science and engineering. We define Fourier series of functionals and the associated Barron spectral space of functionals, on which our new neural network approximation method is built. The parameters and the network structure in our method depend only on the functional. The approximation error of the neural network is $O(1/\sqrt{m})$, where $m$ is the size of the network; this rate does not depend on the dimension.
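Schematically, the result is of the same flavor as Barron's classical approximation theorem for functions. With illustrative notation (not the paper's exact statement): for a functional $F$ with finite Barron-type spectral norm $\|F\|_{\mathcal{B}}$, there exists a network $F_m$ of size $m$ with

$$\|F - F_m\| \le \frac{C\,\|F\|_{\mathcal{B}}}{\sqrt{m}},$$

where the constant $C$ is independent of any dimension parameter, so the $O(1/\sqrt{m})$ rate is dimension-free.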

A Mathematical Framework for Learning Probability Distributions

Author: Hongkang Yang

DOI: 10.4208/jml.221202, J. Mach. Learn., 1 (2022), pp. 373–431.

The modeling of probability distributions is an important branch of machine learning. It became popular in recent years thanks to the success of deep generative models in difficult tasks such as image synthesis and text conversation. Nevertheless, we still lack a theoretical understanding of the good performance of distribution learning models. One mystery is the following paradox: in general, the model inevitably suffers from memorization (it converges to the empirical distribution of the training samples) and thus becomes useless, yet in practice the trained model can generate new samples and estimate the probability density at unseen samples. Meanwhile, existing models are so diverse that it has become overwhelming for practitioners and researchers to get a clear picture of this fast-growing subject.

This paper provides a mathematical framework that unifies the well-known models, so that they can be systematically derived from simple principles. The framework enables an analysis of the theoretical mysteries of distribution learning, in particular the paradox between memorization and generalization. It is established that the model enjoys implicit regularization during training, so that it approximates the hidden target distribution before eventually turning towards the empirical distribution. With early stopping, the generalization error escapes the curse of dimensionality, and thus the model generalizes well.
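To illustrate the practical upshot, here is a minimal sketch of our own (not code from the paper): the implicit-regularization picture says the trained model drifts from the target distribution toward the empirical one, so a held-out score typically improves and then degrades, and stopping at the turn preserves generalization. The helper names train_loss_fn and val_score_fn are hypothetical placeholders for whatever model and loss the reader uses.

```python
# Minimal early-stopping loop for a generative model (illustrative sketch).
# train_loss_fn and val_score_fn are hypothetical placeholders:
#   train_loss_fn(model) -> differentiable training loss (a torch scalar)
#   val_score_fn(model)  -> held-out score, lower is better
#                           (e.g. a validation negative log-likelihood)
import copy
import torch

def train_with_early_stopping(model, train_loss_fn, val_score_fn,
                              optimizer, max_steps=10_000, patience=5):
    best_score = float("inf")
    best_state = copy.deepcopy(model.state_dict())
    bad_steps = 0
    for _ in range(max_steps):
        optimizer.zero_grad()
        train_loss_fn(model).backward()
        optimizer.step()
        with torch.no_grad():
            score = float(val_score_fn(model))
        if score < best_score:        # still approaching the target distribution
            best_score, bad_steps = score, 0
            best_state = copy.deepcopy(model.state_dict())
        else:                         # drifting toward the empirical distribution
            bad_steps += 1
            if bad_steps >= patience:
                break
    model.load_state_dict(best_state)  # keep the pre-memorization model
    return model
```

The patience counter is the usual practical proxy for the theoretical stopping time: it halts training once the held-out score has stopped improving for several consecutive steps.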

The motivation for JML

Although the world is generally overpopulated with journals, the field of ML is one exception. In mathematics, there are almost no recognized venues (other than conference proceedings) for publishing research work on ML. In AI for Science, one would ideally like to publish in leading scientific journals such as Physical Review Letters. However, this is difficult at the early stage, when the focus is on developing methodologies. Although there are many conferences in ML-related areas, publishing in journal form (with a thorough review process and without submission deadlines) remains the preferred route in many disciplines, especially in mathematics and the sciences.

The objective of JML is to become a leading journal in all areas related to ML, including algorithms, theory, and applications to science and AI. JML will start as a quarterly publication. Since ML is a vast and fast-developing field, we will do our best to carry out a timely, thorough, and responsive review process. We have a group of young, energetic, and active managing editors who will handle the review process, and a large, interdisciplinary group of experienced board members who can offer quick opinions and suggest reviewers.

All articles in JML are open access, and there are no charges for authors. High-quality papers on all aspects of machine learning are welcome. To submit your paper, please visit the journal webpage:

http://www.jml.pub/
