We propose two algorithms that decompose the joint likelihood of observing multidimensional neural input data into marginal likelihoods. The first algorithm, boosted mixtures of hidden Markov chains (BMs-HMM), applies techniques from boosting to create implicit hierarchical dependencies between these marginal subspaces. The second algorithm, linked mixtures of hidden Markov chains (LMs-HMM), uses a graphical modeling framework to create these hierarchical dependencies explicitly. Our results show that both algorithms are simple to train and computationally efficient, while also reducing the input dimensionality for brain-machine interfaces (BMIs).
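To make the decomposition concrete, a minimal sketch follows; the notation ($O$, $o^{(d)}$, $\lambda_d$, $D$) is assumed for illustration and is not taken from the text. Writing the multidimensional observation sequence as $O = (o^{(1)}, \dots, o^{(D)})$, with one marginal subspace per input dimension, each marginal is modeled by its own hidden Markov chain $\lambda_d$, so the joint likelihood factorizes approximately as

\[
P(O \mid \Lambda) \;\approx\; \prod_{d=1}^{D} P\!\left(o^{(d)} \mid \lambda_d\right).
\]

Under this reading, BMs-HMM shapes the per-chain contributions implicitly through boosting weights, whereas LMs-HMM couples the chains through explicit links in the graphical model.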