Abstract: Problem statement: Map building remains a very active field in the robotics and AI communities; however, it still poses challenges such as data association and highly accurate localization, which remain difficult in some cases. Moreover, most existing studies focus on robot navigation without considering the semantics of the environment in order to serve humans such as blind persons. Approach: This study introduces a monocular SLAM method that uses the Scale Invariant Feature Transform (SIFT) to represent the scene. The scene is represented as clouds of SIFT features within the map; this hierarchical representation of space serves to estimate the current heading in the environment during the current session. The system tracks the same features across successive frames to compute scalar weights for them and to build a map of the environment that indicates the camera movement. By comparing the current camera movement with the true pathway within the same session, the system can then help and advise the blind person to navigate more confidently, through auditory information about the pathway in the surroundings. An Extended Kalman Filter (EKF) is used to estimate the camera movement across successive frames. Results: The proposed method was tested experimentally with a hand-held camera while walking in an indoor environment. The results show a good estimation of the spatial locations of the camera within a few milliseconds. Tracking the true pathway, together with the semantics of the environment within the session, gives good support to the blind person for navigation. Conclusion: The study presents a new semantic feature model that helps the blind person navigate the environment using these clouds of features, for long-term appearance-based localization of a cane equipped with a web camera as the external vision sensor.
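The abstract describes tracking the same SIFT features across successive frames before feeding the camera motion estimate to the EKF. The sketch below is not the authors' implementation; it is a minimal illustration of that feature-tracking step using OpenCV's SIFT detector and a brute-force matcher with Lowe's ratio test. The function name, ratio threshold, and frame file names are assumptions introduced here for illustration only.

```python
# Minimal sketch of SIFT feature tracking between two successive camera frames,
# as a front end for the kind of camera-motion estimation the abstract describes.
# Assumes opencv-python >= 4.4 (cv2.SIFT_create available).
import cv2


def match_sift_features(prev_frame, curr_frame, ratio=0.75):
    """Detect SIFT keypoints in two frames and return ratio-test matches."""
    sift = cv2.SIFT_create()
    kp_prev, des_prev = sift.detectAndCompute(prev_frame, None)
    kp_curr, des_curr = sift.detectAndCompute(curr_frame, None)

    # Brute-force matching with k=2 nearest neighbours for Lowe's ratio test,
    # which keeps only matches that are clearly better than their runner-up.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(des_prev, des_curr, k=2)
    good = [m for m, n in knn_matches if m.distance < ratio * n.distance]
    return kp_prev, kp_curr, good


if __name__ == "__main__":
    # Placeholder file names; replace with two consecutive frames from the camera.
    f1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    f2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    kp1, kp2, good = match_sift_features(f1, f2)
    print(f"{len(good)} SIFT features tracked between successive frames")
```

In a full pipeline of the kind outlined above, the retained matches would be weighted and passed to an EKF whose state holds the camera pose, so that the estimated trajectory can be compared against the known pathway for the session.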