Neural networks are excellent tools for mapping complex financial data. Their mapping capability, however, does not always translate into good generalizability for financial prediction models. Increasing the number of nodes and hidden layers in a neural network produces a closer fit to the data, since the number of free parameters available to the model increases. This is detrimental to the generalizability of the model, because the model memorizes idiosyncratic patterns in the training data. A neural network model can be expected to generalize better when its architecture is made less complex by using fewer input nodes. In this study we simplify the neural network by eliminating the input nodes that contribute least to the prediction of the desired outcome. We also derive a theoretical relationship between the sensitivity of the output variables and the input variables under certain conditions. This research initiates an effort to identify methods that improve the generalizability of neural networks in financial prediction tasks, using merger and bankruptcy models as test cases. The results indicate that incorporating more variables that appear relevant in a model does not necessarily improve prediction performance.
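To illustrate the kind of sensitivity relationship involved, consider a network with one hidden layer, tanh hidden units, and a logistic output unit; these architectural assumptions are ours for illustration, and the paper's exact conditions may differ. The chain rule gives the sensitivity of the output y to input x_i:

\[
\frac{\partial y}{\partial x_i}
  = y\,(1-y)\sum_{j} w^{(2)}_{j}\,\bigl(1-h_j^{2}\bigr)\,w^{(1)}_{ji},
\qquad
h_j = \tanh\!\Bigl(\sum_{i} w^{(1)}_{ji}\,x_i + b^{(1)}_{j}\Bigr).
\]

Averaging the magnitude of this derivative over the sample then ranks the input nodes by their contribution to the prediction.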
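A minimal sketch of the pruning step follows, assuming the mean-absolute-derivative sensitivity measure above; the network size, variable names, and random toy data are ours, not the paper's, and the paper's actual criterion for eliminating input nodes may differ.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # One-hidden-layer network: tanh hidden units, logistic output.
    h = np.tanh(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    return y, h

def input_sensitivity(X, W1, b1, W2, b2):
    # Mean |dy/dx_i| over all samples, one value per input node.
    sens = np.zeros(X.shape[1])
    for x in X:
        y, h = forward(x, W1, b1, W2, b2)
        # Chain rule: dy/dx = y(1-y) * sum_j W2_j (1 - h_j^2) W1_ji
        dy_dx = (y * (1.0 - y)) * ((W2 * (1.0 - h**2)) @ W1)
        sens += np.abs(dy_dx).ravel()
    return sens / len(X)

# Toy setup: 8 candidate input variables (e.g., financial ratios),
# 4 hidden nodes, one output; random weights and data as stand-ins.
n_in, n_hid = 8, 4
W1 = rng.normal(size=(n_hid, n_in))
b1 = rng.normal(size=n_hid)
W2 = rng.normal(size=(1, n_hid))
b2 = rng.normal(size=1)
X = rng.normal(size=(200, n_in))

s = input_sensitivity(X, W1, b1, W2, b2)
print("mean |dy/dx_i| per input:", np.round(s, 3))
print("candidate input node to prune:", int(np.argmin(s)))

In practice one would retrain the simplified network after removing the lowest-sensitivity input and compare out-of-sample prediction performance before and after pruning.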