Abstract: Marine cabled video-observatories allow the non-destructive sampling of species at previously unattainable frequencies and durations. Nevertheless, the lack of appropriate methods to automatically process video imagery limits the use of this technology for ecosystem monitoring. Automation is a prerequisite for handling the huge quantities of video footage captured by cameras, and it can transform these devices into true autonomous sensors. In this study, we developed a novel methodology based on genetic programming for content-based image analysis. Our aim was to capture the temporal dynamics of fish abundance. We processed more than 20,000 images acquired in a challenging real-world coastal scenario at the OBSEA-EMSO testing site. The images were collected at a 30-min frequency, day and night, continuously for two years. The highly variable environmental conditions allowed us to test the effectiveness of our approach under changing light radiation, water turbidity, background confusion, and bio-fouling growth on the camera housing. The automated recognition results were highly correlated with the manual counts and reliably tracked fish variations at hourly, daily, and monthly time scales. In addition, our methodology could be easily transferred to other cabled video-observatories.