ML Unleashes Dynamics: A Catchy Duo!

Nonlinear dynamical systems are systems in which the change of the variables is not linearly proportional to the variables themselves. They are widely studied across subjects ranging from physics, biology, and economics to engineering, since most natural phenomena are inherently nonlinear. We try to build suitable dynamical models to gain in-depth insight into such phenomena. These models help us understand the intricate relationships between the governing factors and obtain a holistic picture. Dynamical modeling, however, is not always straightforward. In most cases, even when we can build a dynamical model, exact closed-form solutions of the dynamics are difficult to obtain with existing mathematical techniques.

In this era of digitization, tons of data are being generated that we can exploit to our advantage. Can we develop methods that predict future evolution by learning from past data? Examples include predicting weather patterns, the movement of stock prices, and the long-term dynamics of chaotic systems. Machine learning algorithms fundamentally work on a similar strategy of learning from given data and have proven to be very efficient at finding patterns in high-dimensional data such as speech and images.

In our recent pedagogical article published in Resonance, we have shown how neural networks can predict the future evolution in a chaotic system and how one can reconstruct the analytical form of the underlying dynamics from given time series data using sparse regression.

The Lorenz system is a paradigmatic model in nonlinear dynamics that shows chaotic behavior for certain values of its parameters. First, we numerically integrated the Lorenz equations with the fourth-order Runge-Kutta method to obtain time series data of the dynamical variables. This data then needs to be restructured into a labeled dataset so that it can be fed into a supervised learning framework.
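
For readers who want to try this themselves, here is a minimal sketch of how such a time series can be generated. The parameter values, step size, and initial condition below are standard illustrative choices, not necessarily the ones used in our article.

```python
import numpy as np

# Lorenz system with the commonly used chaotic parameters (sigma=10, rho=28, beta=8/3);
# these specific values are an assumption for illustration.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    # One fourth-order Runge-Kutta step.
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.01, 10000
trajectory = np.empty((n_steps, 3))
state = np.array([1.0, 1.0, 1.0])   # arbitrary initial condition
for i in range(n_steps):
    trajectory[i] = state
    state = rk4_step(lorenz, state, dt)
```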

Supervised learning is a type of machine learning that uses labeled data, i.e., structured input-output pairs, to teach the model. For example, let S = {(x1,y1),(x2,y2),...,(xn,yn)} be the dataset, with the xi's representing the input data and the yi's their corresponding outputs. There exists a function F: X → Y that satisfies all the points in S, but its analytical form is unknown. Machine learning (ML) aims to approximate this mapping with minimum error, enabling us to predict outputs for inputs that are not part of S. For this, we need to choose an ML framework, a suitable learning algorithm, and an accuracy metric to check whether the framework approximates the true function well. In our setting, the ML framework is a neural network, while the learning algorithm is gradient descent.
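
As a concrete (and assumed) way of casting the Lorenz time series from the sketch above into such input-output pairs, one can take the state at time t as the input and the state at the next time step as the label, hold out part of the data for testing, and use the root mean square error as the accuracy metric:

```python
import numpy as np

# One common pairing scheme: state at time t -> state at time t+dt.
# This scheme and the 80/20 split are illustrative assumptions, not
# necessarily the exact setup used in the article.
X = trajectory[:-1]      # inputs  x_i
Y = trajectory[1:]       # outputs y_i = next state

split = int(0.8 * len(X))
X_train, Y_train = X[:split], Y[:split]
X_test, Y_test = X[split:], Y[split:]

def rmse(y_true, y_pred):
    # Root mean square error, used here as the accuracy metric.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```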

A neural network is a network of nodes, or artificial neurons, arranged in layers. The first layer is the input layer, which receives the input data, and the last layer is the output layer. In between, there are one or more hidden layers. Each node in a hidden layer receives a set of inputs from the previous layer, multiplies them by their corresponding weights (w's), and sums them up. A bias term is added to the sum, and the result is passed through an activation function (g) to introduce nonlinearity into the model. The output of this function, i.e., g(w₀ + ∑ᵢ wᵢxᵢ), acts as the output of that node and, in turn, as an input for the nodes in the next layer. All the weights and biases in a neural network form the parameter set (W). To evaluate the performance of the neural network during learning, we compute the prediction error with the help of a loss function L(W). The ultimate goal of learning is to find the optimized values of these parameters (W*) such that they satisfy:

W* = arg min_W L(W)
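
A single node's computation can be written in a couple of lines; the choice of tanh as the activation function here is only an illustrative assumption:

```python
import numpy as np

def node_output(x, w, w0, g=np.tanh):
    # g(w0 + sum_i w_i x_i): bias plus weighted sum, passed through the activation.
    return g(w0 + np.dot(w, x))
```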
This optimization can be done with a learning algorithm such as gradient descent. In each iteration, the algorithm computes the gradient of the loss function with respect to all the parameters by a method called backpropagation, and the parameters are then updated using their respective gradients. Once learning is complete and we have the optimized values of the parameters, the model can predict the output with good accuracy. For practical purposes, various modified versions of gradient descent are used, which work on similar mechanisms but converge faster. Schematically, the update rule is W ← W − η ∇L(W), where η is the learning rate.
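
The toy sketch below applies this update rule to a simple quadratic loss whose gradient is known in closed form (in a real neural network the gradient would come from backpropagation); the learning rate and iteration count are arbitrary illustrative values:

```python
import numpy as np

def gradient_descent(W, grad_L, lr=0.01, n_iter=1000):
    # Schematic gradient descent: repeatedly step against the gradient of the loss.
    for _ in range(n_iter):
        W = W - lr * grad_L(W)
    return W

# Toy usage on L(W) = ||W - W_true||^2, whose gradient is 2*(W - W_true).
W_true = np.array([0.5, -1.2])
W_opt = gradient_descent(np.zeros(2), lambda W: 2 * (W - W_true))
```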

We trained multiple neural network models by varying the number of hidden layers, the number of nodes, and other hyperparameters. From our analysis, the best-performing model achieved a root mean square error of 0.38% on the test data. The comparison of the actual and predicted dynamics for the state x is shown in the accompanying figure: the predicted dynamics (yellow dashed line) closely align with the actual dynamics (blue line).
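
A hedged sketch of such a hyperparameter scan, using scikit-learn's MLPRegressor and reusing the train/test split and rmse helper from the earlier sketch, might look as follows; the architectures and settings here are illustrative guesses, not the configurations used in the article:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

best_model, best_err = None, np.inf
for hidden in [(32,), (64, 64), (128, 64, 32)]:   # assumed architectures to try
    model = MLPRegressor(hidden_layer_sizes=hidden, activation='tanh',
                         max_iter=2000, random_state=0)
    model.fit(X_train, Y_train)
    err = rmse(Y_test, model.predict(X_test))
    if err < best_err:
        best_model, best_err = model, err
```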

The neural network model knows nothing a priori about the governing equations of the data, and yet it is able to learn very well from the training data and predict the data in the test set, i.e., the future motion.

Another method we discuss in our article is how one can recover the governing equations from time series data. It uses an optimization technique called sparse regression, which aims to identify the nonlinear terms from a library of candidate nonlinear functions. It is called sparse because it assigns nonzero weights to only a few selected functions from the library instead of all of them. The technique was first introduced by Brunton et al., and we have explained how to apply it to various nonlinear systems. For more mathematical details, please refer to the appendix of our paper.
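
For readers curious about the mechanics, here is a minimal sketch of sequentially thresholded least squares, the sparse-regression idea behind the SINDy approach of Brunton et al. The candidate library, threshold value, and crude derivative estimate are illustrative assumptions, and `trajectory` and `dt` refer to the integration sketch earlier in this post:

```python
import numpy as np

X = trajectory[:-1]
dXdt = (trajectory[1:] - trajectory[:-1]) / dt          # crude derivative estimate

def library(X):
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    # Candidate terms: constant, linear, and quadratic monomials.
    return np.column_stack([np.ones(len(X)), x, y, z,
                            x*x, x*y, x*z, y*y, y*z, z*z])

Theta = library(X)
Xi, _, _, _ = np.linalg.lstsq(Theta, dXdt, rcond=None)  # initial dense fit

for _ in range(10):                                     # sequential thresholding
    small = np.abs(Xi) < 0.1                            # threshold 0.1 (assumed)
    Xi[small] = 0.0
    for k in range(dXdt.shape[1]):                      # refit only the active terms
        big = ~small[:, k]
        if big.any():
            Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]

# Nonzero rows of Xi indicate which library terms enter each governing equation.
```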

In the present work, we have shown how one can use ML techniques to deal with nonlinear systems. Our focus was on time-series data due to its easy availability and interpretation. In recent years, machine learning has seen tremendous progress in the technological field, and this has also increased the enthusiasm of the community to work on ML and its applications. However, one should also realize that ML is something of a black box (why it works so well is still an open question), and the interpretation of results should be done with care. There are various research domains in the natural sciences where the potential applications of machine learning are being explored, such as constructing phase diagrams, studying quantum phase transitions, finding reactivity patterns in molecules, etc. Hopefully, the amalgamation of machine learning with science will bring more exciting things in the coming days.

References

  1. S. Roy and D. Rana, Machine learning in nonlinear dynamical systems, Resonance, 26(7), 953-970 (2021).
  2. S. L. Brunton, J. L. Proctor and J. N. Kutz, Discovering governing equations from data by sparse identification of nonlinear dynamical systems, Proceedings of the National Academy of Sciences, 113(15), 3932-3937 (2016).
  3. Y. LeCun, Y. Bengio and G. Hinton, Deep learning, Nature, 521, 436-444 (2015).
