Journal: International Journal of Software Engineering and Its Applications
Print ISSN: 1738-9984
Year: 2008
Volume: 2
Issue: 4
Publisher: SERSC
Abstract: A lifelog is a set of continuously captured data records of our daily activities. Lifelog data usually consist of text, pictures, video, audio, gyroscope readings, acceleration, position, annotations, etc., and are kept in large databases as records of an individual's life experiences, which can be retrieved when needed and used as a reference for improving quality of life. The lifelog in this study includes several types of media data/information acquired from multiple wearable sensors that capture video images, the individual's body motions, biological information, location information, and so on. We propose an integrated technique for processing a lifelog composed of both captured video (called lifelog images) and other sensed data. Our proposed technique, called the Activity Situation Model, is based on two models: a space-oriented model and an action-oriented model. Using these two modeling techniques, we can analyze the lifelog images to find representative images in video scenes based on both pictorial visual features and the individual's context information, and represent the individual's life experiences in semantic, structured ways for future retrieval and exploitation of experience data. The resulting structured lifelog images were evaluated using both a previous approach and the proposed technique, and our proposed integrated technique exhibited better results.