Journal: International Journal of Signal Processing, Image Processing and Pattern Recognition
Print ISSN: 2005-4254
Year: 2016
Volume: 9
Issue: 7
Pages: 341-350
DOI:10.14257/ijsip.2016.9.7.30
Publisher: SERSC
Abstract: Multi-task learning (MTL) algorithms aim to improve the performance of several related learning tasks by sharing information among them. One particularly successful instance of multi-task learning is its adaptation to the support vector machine (SVM). Recent advances in large-margin learning have shown that such solutions may be misled by the spread of the data, preferentially separating classes along large-spread directions. In this paper, we propose a novel formulation for multi-task learning by extending the recently published relative margin machine (RMM), itself an extension of the SVM for single-task learning, to the multi-task paradigm. The objective of our algorithm is to learn a separate predictor for each task while accounting for both the relatedness of the tasks and the spread of the data. We evaluate the proposed method experimentally on real data. The experiments show that the proposed method outperforms both existing multi-task learning with SVM and single-task learning with SVM.
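The abstract refers to two standard formulations without stating them. A minimal sketch, assuming the usual definitions from the relative margin machine literature (Shivaswamy and Jebara) and the regularized multi-task SVM of Evgeniou and Pontil, not the paper's exact formulation, is as follows. The single-task RMM adds a bound B on the projections to the soft-margin SVM:

\min_{w,b,\xi}\;\; \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i\!\left(w^\top x_i + b\right) \ge 1-\xi_i,\;\; \left|w^\top x_i + b\right| \le B,\;\; \xi_i \ge 0,

so the margin is measured relative to the spread of the data along w rather than in absolute terms. A common way to couple T related tasks in an SVM is to split each task's weight vector into a shared part and a task-specific part, w_t = w_0 + v_t:

\min_{w_0,\{v_t,b_t\},\xi}\;\; \sum_{t=1}^{T}\sum_{i=1}^{n_t}\xi_{ti} + \frac{\lambda_1}{T}\sum_{t=1}^{T}\|v_t\|^2 + \lambda_2\|w_0\|^2
\quad\text{s.t.}\quad y_{ti}\!\left((w_0+v_t)^\top x_{ti} + b_t\right) \ge 1-\xi_{ti},\;\; \xi_{ti} \ge 0.

The proposed multi-task RMM presumably combines these ingredients, e.g. by imposing RMM-style bounded-projection constraints per task on top of the shared/task-specific decomposition, but the abstract does not spell out the exact objective.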