Journal: TELKOMNIKA (Telecommunication Computing Electronics and Control)
Online ISSN: 2302-9293
Year: 2020
Volume: 18
Issue: 3
Pages: 1382-1388
DOI: 10.12928/telkomnika.v18i3.14868
Publisher: Universitas Ahmad Dahlan
Abstract: Transfer learning (TL) is a technique for reusing and modifying a pre-trained network. It reuses the feature extraction layers of a pre-trained network, so the target domain obtains feature knowledge from the source domain, while the classification layers are modified so the target domain can perform a new task. In this article, the target domain is fundus image classification into two classes: normal and neovascularization. The data consist of 100 patches, split randomly into training and validation sets at a 70:30 ratio. The TL steps are: load a pre-trained network, replace its final layers, train the network, and assess network accuracy. First, the pre-trained network is a layer configuration of a convolutional neural network architecture; the pre-trained networks used are AlexNet, VGG16, VGG19, ResNet50, ResNet101, GoogLeNet, Inception-V3, InceptionResNetV2, and SqueezeNet. Second, replacing the final layers means replacing the last three layers, namely the fully connected layer, the softmax layer, and the output layer; they are replaced with a fully connected layer sized to the number of target classes, followed by a softmax and an output layer that match the target domain. Third, the network is trained to produce optimal accuracy, using a gradient descent optimization algorithm. Fourth, network accuracy is assessed. The experimental results show a testing accuracy between 80% and 100%.
Keywords: classification; convolutional neural network; multiple pre-trained network; neovascularization; transfer learning;
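The following is a minimal sketch of the four TL steps described in the abstract. PyTorch/torchvision is an assumed framework (the paper does not specify a toolbox), ResNet50 stands in for the nine pre-trained networks tested, and the synthetic tensors, learning rate, and epoch count are placeholders rather than values from the paper.

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Step 1: load a pre-trained network (ResNet50 here; the paper also evaluates
# AlexNet, VGG16/19, ResNet101, GoogLeNet, Inception-V3, InceptionResNetV2, SqueezeNet).
net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the feature-extraction layers so only the new classifier is updated.
for param in net.parameters():
    param.requires_grad = False

# Step 2: replace the final classification layer with one sized to the target
# domain (2 classes: normal vs. neovascularization).
num_classes = 2
net.fc = nn.Linear(net.fc.in_features, num_classes)

# Placeholder data standing in for the 100 fundus patches, split 70:30 at random;
# these random tensors only illustrate the pipeline shape.
images = torch.randn(100, 3, 224, 224)
labels = torch.randint(0, num_classes, (100,))
perm = torch.randperm(100)
train_idx, val_idx = perm[:70], perm[70:]

criterion = nn.CrossEntropyLoss()
# Step 3: gradient descent optimization, as in the paper (learning rate and
# momentum are assumptions).
optimizer = optim.SGD(net.fc.parameters(), lr=1e-3, momentum=0.9)

net.train()
for epoch in range(10):  # epoch count is an assumption
    optimizer.zero_grad()
    outputs = net(images[train_idx])
    loss = criterion(outputs, labels[train_idx])
    loss.backward()
    optimizer.step()

# Step 4: assess network accuracy on the validation split.
net.eval()
with torch.no_grad():
    preds = net(images[val_idx]).argmax(dim=1)
    accuracy = (preds == labels[val_idx]).float().mean().item()
print(f"Validation accuracy: {accuracy:.2%}")
```

In practice the real fundus patches would replace the synthetic tensors, and for other backbones the replaced attribute differs (e.g., the classifier head in VGG, AlexNet, or SqueezeNet rather than fc).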