Abstract: Releasing state samples generated by a dynamical system model for data aggregation purposes can allow an adversary to perform reverse engineering and estimate sensitive model parameters. Upon identifying the system model, the adversary may even use it to predict sensitive data in the future. Hence, preserving the confidentiality of a dynamical process model is crucial for the survival of many industries. Motivated by the need to protect the system model as a trade secret, we propose a mechanism based on differential privacy to render such model identification techniques ineffective while preserving the utility of the state samples for data aggregation purposes. We deploy differential privacy by generating noise according to the sensitivity of the query and adding it to the state vectors at each time instant. We derive analytical expressions to bound the sensitivity function and estimate the minimum noise level required to guarantee differential privacy. Furthermore, we present a numerical analysis and characterize the privacy-utility trade-off that arises when deploying differential privacy. Simulation results demonstrate that, through differential privacy, we achieve a privacy level sufficient to mislead the adversary while retaining a high utility level of the state samples for data aggregation.
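To make the release mechanism described above concrete, the following is a minimal sketch of adding sensitivity-scaled noise to state samples before release. It assumes a Laplace mechanism, an illustrative linear system matrix, and placeholder values for the sensitivity bound and privacy budget; none of these specifics are taken from the paper, which derives the actual sensitivity bound analytically.

```python
import numpy as np

# Minimal sketch (assumptions): a linear dynamical system x_{k+1} = A x_k whose
# state samples are released after adding Laplace noise scaled to an assumed
# sensitivity bound and privacy budget. The system matrix, sensitivity value,
# and noise distribution below are illustrative placeholders.

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])        # hypothetical system matrix (the trade secret)
x = np.array([1.0, 0.5])          # initial state

epsilon = 1.0                     # assumed differential-privacy budget per release
sensitivity = 2.0                 # assumed bound on the query sensitivity

released = []
for k in range(50):
    # Laplace noise with scale sensitivity/epsilon provides epsilon-DP for a
    # query whose sensitivity is bounded by `sensitivity`.
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=x.shape)
    released.append(x + noise)    # noisy state sample released for aggregation
    x = A @ x                     # true (confidential) state evolution

# An aggregate statistic computed from the noisy releases retains utility,
# while the per-sample noise hinders identification of A.
print(np.mean(released, axis=0))
```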