A theory of capacity and sparse neural encoding. (arXiv:2102.10148v1 [cs.LG])

Motivated by biological considerations, we study sparse neural maps from an
input layer to a target layer with sparse activity, and specifically the
problem of storing $K$ input-target associations $(x,y)$, or memories, when the
target vectors $y$ are sparse. We mathematically prove that $K$ undergoes a
phase transition and that, in general and somewhat paradoxically, sparsity in
the target layer increases the storage capacity of the map. The target vectors
can be chosen arbitrarily, including at random, and the memories can be
both encoded and decoded by networks trained using local learning rules,
including the simple Hebb rule. These results are robust under a variety of
statistical assumptions on the data. The proofs rely on elegant properties of
random polytopes and sub-gaussian random vectors. Open problems and
connections to capacity theories and polynomial threshold maps are discussed.
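
To make the storage setting concrete, here is a minimal sketch of Hebbian storage and recall of sparse associations. The dimensions, the bipolar inputs, the top-$s$ decoding rule, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not from the paper): n input units, m target units,
# K stored memories, each sparse target having exactly s active units.
n, m, K, s = 200, 200, 30, 10

# Random bipolar input patterns x in {-1, +1}^n.
X = rng.choice([-1.0, 1.0], size=(K, n))

# Sparse binary target patterns y in {0, 1}^m with s ones each.
Y = np.zeros((K, m))
for k in range(K):
    Y[k, rng.choice(m, size=s, replace=False)] = 1.0

# Simple Hebb rule: W = sum_k y_k x_k^T (local, one-shot storage of all K pairs).
W = Y.T @ X

# Recall: present each stored input and mark the s largest pre-activations as
# the predicted active target units (one simple decoding choice, assumed here).
correct = 0
for k in range(K):
    h = W @ X[k]
    pred = np.zeros(m)
    pred[np.argsort(h)[-s:]] = 1.0
    correct += int(np.array_equal(pred, Y[k]))

print(f"perfectly recalled memories: {correct}/{K}")
```

Increasing $K$ while holding the other sizes fixed in such a toy setup is one way to observe the recall rate eventually collapsing, loosely mirroring the kind of phase transition in capacity the abstract refers to.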
