Hi Lee
Translating a trained neural network into a single, comprehensive mathematical equation is generally quite complex and often impractical due to the intricate structures and high dimensionality involved in most neural networks. However, the core functionality of a neural network, especially simpler ones, can indeed be described in mathematical terms, given that they fundamentally operate through mathematical transformations.
A basic feedforward neural network consists of layers of neurons: an input layer, one or more hidden layers, and an output layer. Each neuron in a layer is connected to every neuron in the next layer. If a linear activation function is used, each neuron's value is simply a weighted sum of the previous layer's outputs plus a bias.
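As a rough sketch of that computation, here is a tiny two-layer network in NumPy with made-up weights (not a trained model), where each layer computes a weighted sum plus a bias:

```python
import numpy as np

# Toy feedforward pass with a linear (identity) activation:
# each layer computes  y = W @ x + b.
x = np.array([1.0, 2.0])             # input layer (2 neurons)

W1 = np.array([[0.5, -0.2],          # hidden layer weights (3 neurons)
               [0.1,  0.4],
               [-0.3, 0.8]])
b1 = np.array([0.1, 0.0, -0.1])      # hidden layer biases

W2 = np.array([[0.7, -0.5, 0.2]])    # output layer weights (1 neuron)
b2 = np.array([0.05])

hidden = W1 @ x + b1                 # weighted sum + bias for each hidden neuron
output = W2 @ hidden + b2            # same rule applied to the output layer
print(output)
```

Note that with purely linear activations the layers compose into a single affine map, `W2 @ (W1 @ x + b1) + b2`, which is part of why very simple networks really can be written as one equation.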
I hope it helps!