machine learning - How Sensitive Are Feedforward Neural Networks?
crosspost: https://stats.stackexchange.com/questions/103960/how-sensitive-are-neural-networks
I am aware of pruning, but I'm not sure whether it removes the actual neuron or just sets its weight to zero, so I'm asking this question assuming the pruning process is not being used.
After experimenting with variously sized feedforward neural networks on big datasets with lots of noise:

1) Is it possible for one (or a trivial number of) missing hidden neurons or hidden layers to make or break a network? Or will synapse weights degrade to 0 if they are not necessary, with other neurons compensating if one or two are missing?

2) When experimenting, should input neurons be added one at a time or in groups of x? What x? Increments of 5?

3) Lastly, should each hidden layer contain the same number of neurons? That is what I see in every example. If not, how and why should I adjust the sizes without relying on pure experimentation?

I would prefer to overdo it and wait longer for convergence if larger networks will adapt to the solution. I have tried numerous configurations, but it is still hard to gauge the optimum one.
1) Yes, absolutely. For example, if you have too few neurons in your hidden layer your model will be too simple and have high bias. Similarly, if you have too many neurons your model will overfit and have high variance. Adding more hidden layers allows you to model complex problems like object recognition, but there are a lot of tricks to making more hidden layers work; this is the field known as deep learning.
2) In a single layered neural network, a rule of thumb is to start with 2 times as many neurons as the number of inputs. You can determine the increment through a binary search; i.e., run through a few different architectures and see how the accuracy changes.
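To make the rule of thumb and the binary-search idea concrete, here is a minimal sketch using scikit-learn's MLPClassifier on a synthetic dataset (the dataset, candidate sizes, and iteration budget are illustrative assumptions, not part of the original answer). Sizes that are too small should show the high-bias regime from 1), and sizes that are too large the high-variance regime:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for the real dataset.
    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=10, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # 20 inputs -> start near 2 * 20 = 40 hidden neurons, then halve and
    # double around that, binary-search style, watching validation accuracy.
    for n_hidden in (10, 20, 40, 80, 160):
        clf = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                            max_iter=500, random_state=0)
        clf.fit(X_train, y_train)
        print(n_hidden, clf.score(X_val, y_val))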
3) No, absolutely not; each hidden layer can contain as many neurons as you want it to. There is no way other than experimentation to determine the right sizes; all of the sizes you mention are hyperparameters that you must tune.
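As a sketch of tuning those hyperparameters by plain experimentation, unequal layer sizes can be searched like any other setting; the candidate shapes below are arbitrary examples, assuming scikit-learn:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=10, random_state=0)

    # Layers do not have to match: (64, 32) means 64 neurons in the first
    # hidden layer and 32 in the second.
    param_grid = {"hidden_layer_sizes": [(40,), (40, 40), (64, 32),
                                         (64, 32, 16)]}
    search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                          param_grid, cv=3)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)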
I'm not sure if you are looking for a simple answer, but you might be interested in a relatively new neural network regularization technique called dropout. Dropout randomly "removes" some of the neurons during training, forcing each of the neurons to become a good feature detector. It prevents overfitting, so you can go ahead and set the number of neurons high without worrying too much. Check this paper out for more info: http://www.cs.toronto.edu/~nitish/msc_thesis.pdf
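For intuition only, here is a minimal NumPy sketch of the "inverted" dropout variant that is common today. Note that the linked thesis instead keeps activations unscaled during training and scales the weights down at test time, so this is a simplified assumption, not the paper's exact procedure:

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(activations, p_drop=0.5, training=True):
        # During training, zero each unit with probability p_drop and
        # rescale the survivors so the expected activation is unchanged;
        # at test time, return the activations untouched.
        if not training or p_drop == 0.0:
            return activations
        mask = rng.random(activations.shape) >= p_drop
        return activations * mask / (1.0 - p_drop)

    # Toy example: a batch of 4 examples with 10 hidden activations each.
    hidden = np.tanh(rng.standard_normal((4, 10)))
    print(dropout(hidden, p_drop=0.5))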
machine-learning neural-network