Abstract: We give a simple proof that the decisional Learning With Errors (LWE) problem
with binary secrets (and an arbitrary polynomial number of samples) is at least as hard as
the standard LWE problem (with unrestricted, uniformly random secrets, and a bounded,
quasi-linear number of samples). This proves that the binary-secret LWE distribution is
pseudorandom, under standard worst-case complexity assumptions on lattice problems. Our
results are similar to those proved by Brakerski, Langlois, Peikert, Regev and Stehlé (STOC
2013), but provide a shorter, more direct proof, and a small improvement in the noise growth
of the reduction.
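
For reference, the two LWE variants compared above can be stated in the standard way; the parameters $n$, $q$, and the error distribution $\chi$ below are generic placeholders for this formulation, not the specific parameters of the reduction:
\[
  A_{\mathbf{s},\chi}:\quad
  \bigl(\mathbf{a},\; b = \langle \mathbf{a}, \mathbf{s}\rangle + e \bmod q\bigr)
  \in \mathbb{Z}_q^{n} \times \mathbb{Z}_q,
  \qquad \mathbf{a} \leftarrow \mathbb{Z}_q^{n}\ \text{uniform},\quad e \leftarrow \chi .
\]
The decisional problem asks to distinguish samples from $A_{\mathbf{s},\chi}$ (for a hidden $\mathbf{s}$) from uniform samples over $\mathbb{Z}_q^{n} \times \mathbb{Z}_q$. In the standard problem the secret $\mathbf{s}$ is uniform over $\mathbb{Z}_q^{n}$, while in the binary-secret variant $\mathbf{s} \in \{0,1\}^{n}$.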