Second Reflection

We wanted to build a network that can solve the n-body problem. This is a problem in physics that involves predicting the positions of n gravitationally interacting bodies at any given time, and it has remained unsolved (in the general case) since the time of Isaac Newton.

We’ve generated a significant amount of training data using an Euler-method approximation, saving the position output at each step for the networks to train on. A neural network containing only convolutional layers produces surprisingly decent results and seems to predict a blurry next step. The problem is that when we feed the network its own output, the result becomes progressively blurrier, to the point where the model almost seems to be outputting a probability distribution over possible locations that a particle could travel to. In some cases, the model fails completely, producing psychedelic images that don’t have much physical bearing but are quite entertaining to look at!
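As a rough sketch of what the data-generation step above could look like, here is a minimal explicit-Euler integrator for a gravitational n-body system. The function name, units, and parameters (`dt`, `steps`, `G`) are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

def generate_euler_trajectory(pos, vel, mass, dt=1e-3, steps=1000, G=1.0):
    """Integrate an n-body system with the explicit Euler method, recording
    positions at every step as training targets. `pos` and `vel` have shape
    (n, dim); `mass` has shape (n,). Hypothetical sketch, not the authors' code."""
    pos, vel = pos.astype(float).copy(), vel.astype(float).copy()
    history = np.empty((steps, *pos.shape))
    for t in range(steps):
        # Pairwise accelerations: a_i = G * sum_j m_j (r_j - r_i) / |r_j - r_i|^3
        diff = pos[None, :, :] - pos[:, None, :]        # diff[i, j] = r_j - r_i
        dist3 = np.linalg.norm(diff, axis=-1) ** 3
        np.fill_diagonal(dist3, np.inf)                 # exclude self-interaction
        acc = G * (mass[None, :, None] * diff / dist3[..., None]).sum(axis=1)
        vel += dt * acc
        pos += dt * vel
        history[t] = pos
    return history
```

Explicit Euler is the simplest choice and drifts in energy over long runs, which is one reason (beyond the network itself) that errors can compound when predictions are fed back in.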

Using convolutional layers was mostly a proof of concept, and we intend to try different architectures. One very glaring limitation of using only convolutions is that it is very challenging for the network to infer interactions between particles with a large spatial separation, since each convolution only looks at a small neighborhood. A simple change to our current model would be to include fully connected layers, which would let information about distant particles be shared across the network when it makes physical inferences.
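The locality limitation above can be quantified: with stride 1, the receptive field of a stack of convolutions grows only linearly with depth, so widely separated particles never interact unless the network is very deep. A small helper (using the standard receptive-field recurrence) makes this concrete:

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field (in input pixels) of a stack of conv layers.
    Standard formula: r += (k_l - 1) * product of strides of earlier layers."""
    strides = strides or [1] * len(kernel_sizes)
    r, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        r += (k - 1) * jump
        jump *= s
    return r

# Ten stacked 3x3, stride-1 convolutions see only a 21-pixel window:
print(receptive_field([3] * 10))   # -> 21
```

A fully connected layer, by contrast, connects every output to every input pixel in a single step, which is exactly why adding dense layers would let distant particles influence each other.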

Another idea that has been floated is to implement a Generative Adversarial Network, which might be able to produce sharper images. This is most likely something to do in conjunction with the addition of dense layers, since the generative model should have free rein to produce whatever it likes, without the confines of using only convolutions. We imagine that a discriminator would help with the blurry output: blur around the particles would most likely be an easy giveaway that an output was fake, which would hopefully force the generator to fix the problem.

We have also been struggling to find a loss function that accurately captures what it means to solve an n-body problem. This has been more challenging than expected, because it’s hard to quantify the fact that a particle should stay close to its starting point. Current models have been trained using cosine similarity, but we are still on the hunt for a loss function that will accurately capture the behavior we want.
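One concrete issue with cosine similarity is that it is scale-invariant, so it cannot penalize magnitude errors in predicted positions. Below is a sketch of the cosine loss alongside a hypothetical alternative that adds the "stay close to your previous position" term mentioned above; the `alpha` weight and function names are illustrative guesses, not the project's actual loss:

```python
import numpy as np

def cosine_similarity_loss(pred, target):
    """1 - cosine similarity of the flattened outputs. Note this is
    invariant to scaling `pred`, so magnitude errors go unpunished."""
    p, t = pred.ravel(), target.ravel()
    return 1.0 - p @ t / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12)

def anchored_mse_loss(pred, target, prev, alpha=0.1):
    """Hypothetical alternative: MSE against the target, plus a small
    penalty on how far each particle drifts from its previous position."""
    mse = np.mean((pred - target) ** 2)
    drift = np.mean((pred - prev) ** 2)
    return mse + alpha * drift
```

The drift term is one simple way to encode the locality prior; a physics-informed penalty (e.g. on energy change, discussed below) would be another.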

An overarching challenge of trying to solve the n-body problem is that there likely isn’t a closed-form solution, as any system with n > 2 bodies is typically chaotic. While this means there is no “ground truth” to compare our solutions against, we can still check that certain properties of the system, namely energy and momentum, are conserved. We haven’t yet built a system stable enough to apply these checks to, but if we manage to get a network that outputs more precise locations, it would be fun to see whether it breaks fundamental physical principles or not.
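The conservation checks described above are straightforward to compute from predicted positions and velocities. A minimal sketch (standard formulas for total momentum and for kinetic plus pairwise gravitational potential energy; `G` and shapes are assumptions matching the earlier data-generation sketch):

```python
import numpy as np

def total_momentum(vel, mass):
    """Total linear momentum sum_i m_i v_i; shape (dim,)."""
    return (mass[:, None] * vel).sum(axis=0)

def total_energy(pos, vel, mass, G=1.0):
    """Kinetic energy plus pairwise gravitational potential energy."""
    ke = 0.5 * (mass * (vel ** 2).sum(axis=1)).sum()
    diff = pos[None, :, :] - pos[:, None, :]
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.triu_indices(len(mass), k=1)   # each pair counted once
    pe = -G * (mass[i] * mass[j] / dist[i, j]).sum()
    return ke + pe
```

Evaluating these quantities along a predicted trajectory and watching for drift would give a physics-based sanity check even without a ground-truth solution.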