Abstract: With the advent of high-throughput experiments in genomics and proteomics, researchers in computational data analysis face new challenges, both in computational capacity and in probabilistic/statistical methodology, in order to handle such massive amounts of data in a systematic and coherent way. In this paper we describe the basic aspects of the mathematical theory and the computational implications of a recently developed technique called Compressive Sampling, as well as some possible applications within the scope of Computational Genomics, and Computational Biology in general. The central idea behind this work is that most of the information sampled in such experiments turns out to be discarded as non-useful in the final stages of biological analysis; it would therefore be preferable to have an algorithm that selectively removes this information and relieves the computational burden associated with processing and analyzing such huge amounts of data. Here we show that Compressive Sampling is precisely such an algorithm. As a working example, we consider the analysis of whole-genome microarray gene expression data for 1191 individuals within a breast cancer project.
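To make the core idea of Compressive Sampling concrete, the following is a minimal, self-contained sketch (not the authors' pipeline) of sparse recovery from few random measurements: a synthetic sparse signal is compressed by a random Gaussian sensing matrix and then reconstructed by basis pursuit (L1 minimization) posed as a linear program. The dimensions n, m, k and the use of SciPy's linear-programming solver are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Synthetic sparse signal: n-dimensional with only k non-zero entries
# (illustrative stand-in for a compressible expression profile).
n, m, k = 200, 60, 8          # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

# Random Gaussian sensing matrix and compressed measurements y = A x,
# with m << n: far fewer samples than the signal's nominal length.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# Basis pursuit: minimize ||x||_1 subject to A x = y, rewritten as a
# linear program with x = u - v, u >= 0, v >= 0, objective sum(u + v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Under these assumptions the sparse signal is recovered essentially exactly from m = 60 measurements instead of n = 200 direct samples, which is the kind of reduction in stored and processed data that motivates applying the technique to large gene-expression studies.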