Abstract: With advancements in deep learning techniques, data-driven approaches to identifying battery model parameters have been adopted increasingly in recent studies. In most of these studies, the neural networks were trained and validated with data synthesized from a full-factorial design. However, the full-factorial design-of-experiments (DOE) method tends to generate a large sample size, which limits its use in studies involving many battery model parameters. In this paper, a comparative study is conducted with long short-term memory (LSTM) architectures trained and validated on synthesized data generated with several DOE methods: 3-level full factorial, Plackett-Burman (PB), Latin Hypercube (LH), and combined PB/LH. In the experiment, the LSTM networks predict eight battery model parameters from voltage, current, and temperature data. The results show that the LSTM networks trained with data from the 3-level full-factorial design achieve the best predictions, with the lowest relative prediction error. Although prediction accuracy decreases with reduced sample size, the relative errors of the other DOE methods remain within 3% of the full-factorial result. For cases in which the 3-level full-factorial method leads to a prohibitively large data size, PB, LH, and combined PB/LH could be considered as alternative data sampling methods.
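To illustrate the kind of sampling plan the abstract contrasts with the full-factorial design, the sketch below shows a minimal, self-contained Latin Hypercube sampler. This is a generic textbook construction, not the specific implementation used in the paper; the sample count of 50 and the unit-interval parameter ranges are illustrative assumptions (in practice each of the eight battery model parameters would be rescaled to its physical range).

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin Hypercube sample in [0, 1]^n_dims.

    Each dimension is split into n_samples equal strata, and exactly one
    point falls in each stratum of every dimension, giving good coverage
    with far fewer runs than a full-factorial grid.
    """
    rng = random.Random(seed)
    # One independently shuffled stratum order per dimension.
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        columns.append(strata)
    # Place one uniformly jittered point inside each assigned stratum.
    sample = []
    for i in range(n_samples):
        point = [(columns[d][i] + rng.random()) / n_samples
                 for d in range(n_dims)]
        sample.append(point)
    return sample

# Hypothetical example: 50 design points for 8 battery model parameters,
# versus 3**8 = 6561 runs for a 3-level full-factorial design.
design = latin_hypercube(50, 8)
```

By contrast, a 3-level full factorial over the same eight parameters requires 3^8 = 6561 runs, which is why the paper examines PB and LH designs as smaller alternatives.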