Having trained a shallow neural network for a MIMO function approximation, what is the order of importance (information content) of the various inputs? Where can I find it?

2 views (last 30 days)
In the training process I suppose there is some measure of importance of the various inputs.

Accepted Answer

Christopher Stokely
Have a look at heatmapping.org, though it is more oriented toward deep learning. If you become interested at some point, I recall a paper called "Deep Taylor decomposition."
To get the best answer, try a Google search that includes the name Sobol:
Sobol method neural network
When I do that, a lot of useful material comes up. For example: https://annals-csis.org/Volume_11/drp/pdf/225.pdf
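To make the Sobol idea concrete, here is a minimal NumPy sketch of first-order Sobol indices using the Saltelli "pick-and-freeze" estimator. The toy linear function, sample size, and variable names are illustrative assumptions standing in for your trained network, not anyone's official API:

```python
import numpy as np

def sobol_first_order(model, d, n=10000, seed=0):
    """Estimate first-order Sobol indices for `model` on the unit cube [0, 1]^d
    with the Saltelli pick-and-freeze scheme: S_i = E[f_B * (f_ABi - f_A)] / Var(f)."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))                  # base sample
    B = rng.random((n, d))                  # independent resample
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))  # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # replace only column i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy stand-in for a trained network: input 0 matters most, input 2 not at all.
toy = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1] + 0.0 * X[:, 2]
S = sobol_first_order(toy, d=3)
```

For this toy function the analytic first-order indices are 0.8, 0.2, and 0, so the estimated ranking directly gives the order of importance of the inputs.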
I would also recommend searching for variable-importance estimation techniques for neural networks in the biomedical / medical literature. That work is readable and high quality, and it investigates all the nuances of the algorithms and their predictions, mainly because it is mission-critical, subject to heavy oversight, and aimed at discovering new science with the aid of an ML algorithm.
I want to point out that in most data science / ML cases, at least in my experience, there can be many candidate models that fit the data but have a fundamentally unphysical construction. I use variable importances to help drive new science and to determine whether the model's underlying structure is even plausible or whether it disagrees with physics constraints.
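One simple, model-agnostic importance estimate used for exactly this kind of plausibility check is permutation importance: shuffle one input at a time and measure how much the validation error grows. A minimal Python sketch, where the toy model and data are illustrative stand-ins for your trained network:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Importance of input j = average increase in MSE after shuffling column j,
    which breaks that input's relationship with the target."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((predict(X) - y) ** 2)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # destroy column j only
            imp[j] += np.mean((predict(Xp) - y) ** 2) - base_mse
    return imp / n_repeats

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1]                 # input 2 is pure noise
net = lambda X: 3.0 * X[:, 0] + 1.0 * X[:, 1]     # stand-in for a trained net
imp = permutation_importance(net, X, y)
```

An input the model never uses gets an importance of zero, so a large importance on a variable that physics says should be irrelevant is a red flag about the model's construction.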
I would recommend reading a bit about the Fredholm equation of the first kind (a classic, long-studied problem in mathematics), which, incidentally, underlies SVMs and other machine-learning constructs. Numerical Recipes in C has the best write-up I have seen.
Look up LINIPOS, which stands for linear inverse problems with probability ... (I forget the last two words), and the Vardi-Lee algorithm, a statistical solution to the Fredholm equation of the first kind.
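For intuition, the Vardi-Lee iteration for a positive linear system is, to my understanding, the same EM-type multiplicative update used in MLEM / Richardson-Lucy deconvolution. A sketch under that assumption, with a made-up 3x3 system for illustration:

```python
import numpy as np

def em_positive_solve(A, b, n_iter=5000):
    """EM-type multiplicative update for A @ x ~= b with A, b, x nonnegative:
    x <- x * A^T(b / (A x)) / colsums(A). Positivity is preserved automatically."""
    col_sums = A.sum(axis=0)
    x = np.ones(A.shape[1])          # any strictly positive starting point
    for _ in range(n_iter):
        x = x * (A.T @ (b / (A @ x))) / col_sums
    return x

A = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.1],
              [0.1, 0.2, 0.8]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true                       # noiseless data for the demo
x_est = em_positive_solve(A, b)
```

Because the update is multiplicative, a positive start stays positive at every iteration, which is the whole point for inverse problems whose solution must be nonnegative.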
I seem to have rambled, but those last two paragraphs are closely related to variable importance estimates.

More Answers (1)

Christopher Stokely
Here is some more info that addresses part of your question: https://lilianweng.github.io/lil-log/2017/08/01/how-to-explain-the-prediction-of-a-machine-learning-model.html
2 Comments
Christopher Stokely
Consider trying MATLAB's partial dependence plot capability and individual conditional expectation (ICE) plots to determine the average overall dependence, or the average negative and average positive directions separately. With this information, one can make recommendations on how to change the predictor variables to move the target variable in a desired direction. However, I have limited practical experience doing this in code.
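The idea behind a partial dependence plot is simple enough to sketch by hand, independent of MATLAB's implementation: fix one predictor at each grid value for every sample and average the model's predictions. A minimal Python sketch, with a toy model standing in for a trained network:

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """1-D partial dependence: for each grid value, pin one input there for
    every sample and average the predictions over the data."""
    pdp = np.empty(len(grid))
    for k, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, feature] = v
        pdp[k] = predict(Xv).mean()
    return pdp

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
net = lambda X: 2.0 * X[:, 0] + np.sin(X[:, 1])   # stand-in for a trained net
grid = np.linspace(-2.0, 2.0, 21)
pd0 = partial_dependence(net, X, 0, grid)
```

For this toy model the partial dependence on input 0 is a straight line of slope 2, so the slope of the curve directly tells you how (and how strongly) the target responds to that predictor on average.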

