Traditional classification algorithms are well suited to processing small datasets with a stationary distribution, and therefore yield significant errors when applied to real-world datasets subject to concept drift. The present study addresses this problem using an incremental genetic algorithm (IGA). It is assumed that new training data are generated at a steady rate and pass through a fixed-size window. During initialization, training samples are accumulated until the window is full, and a genetic algorithm (GA) is then applied to determine the set of classification rules. As new training samples enter the window, the oldest instances are forgotten. Once all of the original samples have been replaced by new ones, the GA is re-executed to determine the new set of best classification rules. This procedure is repeated for as long as learning is required. To account for concept drift, the GA utilizes a memory-based random immigrant module, in which the initial population of the GA at each stage of the incremental learning process comprises a mix of the best solutions obtained in the previous stage and an appropriate number of random immigrants. The feasibility of the proposed approach is confirmed by a series of classification rule mining simulations on two standard datasets, namely Mushroom and Zoo. The results demonstrate that the IGA achieves classification performance comparable to that of existing incremental and non-incremental methods, while incurring a significantly lower computational overhead.
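To make the windowed procedure and the memory-based random immigrant seeding concrete, the following Python sketch outlines the incremental loop under several stated assumptions: rules are encoded as fixed-length bit strings, fitness is accuracy on the current window, and names such as WINDOW_SIZE, ELITE_FRACTION, evaluate_rule, and predict are illustrative placeholders rather than definitions taken from the paper.

```python
# Minimal sketch of the incremental GA described above.
# Assumptions (not from the paper): rules are bit strings, fitness is window
# accuracy, and all parameter values are illustrative.
import random

WINDOW_SIZE = 200        # fixed-size sliding window of training samples
POP_SIZE = 50            # GA population size
RULE_LEN = 32            # bits per candidate rule (hypothetical encoding)
ELITE_FRACTION = 0.5     # share of previous-stage best solutions kept as "memory"
GENERATIONS = 40
MUTATION_RATE = 0.02

def random_rule():
    return [random.randint(0, 1) for _ in range(RULE_LEN)]

def predict(rule, x):
    """Placeholder decision rule: parity of matching bits (stands in for rule matching)."""
    return sum(b and f for b, f in zip(rule, x)) % 2

def evaluate_rule(rule, window):
    """Placeholder fitness: classification accuracy of the rule on the current window."""
    correct = sum(1 for x, y in window if predict(rule, x) == y)
    return correct / len(window)

def crossover(a, b):
    cut = random.randrange(1, RULE_LEN)
    return a[:cut] + b[cut:]

def mutate(rule):
    return [1 - b if random.random() < MUTATION_RATE else b for b in rule]

def run_ga(window, seed_pop):
    """Run the GA on the current window, starting from the given seed population."""
    pop = list(seed_pop)
    for _ in range(GENERATIONS):
        ranked = sorted(pop, key=lambda r: evaluate_rule(r, window), reverse=True)
        parents = ranked[: POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return sorted(pop, key=lambda r: evaluate_rule(r, window), reverse=True)

def incremental_learn(stream):
    """Consume (features, label) samples; re-run the GA each time the window has
    been fully replaced, seeding it with previous elites plus random immigrants."""
    window = []
    best = [random_rule() for _ in range(POP_SIZE)]
    new_since_last_run = 0
    for sample in stream:
        window.append(sample)
        if len(window) > WINDOW_SIZE:
            window.pop(0)                 # forget the oldest instance
        new_since_last_run += 1
        if len(window) == WINDOW_SIZE and new_since_last_run >= WINDOW_SIZE:
            n_elite = int(ELITE_FRACTION * POP_SIZE)
            seed = best[:n_elite] + [random_rule() for _ in range(POP_SIZE - n_elite)]
            best = run_ga(window, seed)   # memory (elites) + random immigrants
            new_since_last_run = 0
    return best
```

Under these assumptions, the sketch captures the key design choice of the abstract: rather than restarting the GA from scratch at each stage, the elite solutions carried over from the previous window act as a memory of the old concept, while the injected random immigrants preserve enough diversity to track drift in the new window.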