Applications for Climate Resilience

This post may be slightly wonkish, but here goes. Two weeks ago, Rose City Bluff Restoration looked at phone applications (Merlin and iNaturalist) that ultimately send our observations to the Cornell Lab of Ornithology and the Global Biodiversity Information Facility, which provide data for scientists studying climate change. This week we want to touch on how a phone app works. We increasingly use applications like Merlin and PlantNet for our edification – to better identify birds, plants, and other wildlife. How does iNaturalist, for example, take our uploaded pictures and identify or suggest ranked possibilities of species?

iNaturalist provides nearly instant species identification suggestions using an advanced computer vision model, a type of artificial intelligence. If you use iNaturalist, you know that after a user posts an observation, it is shared with the iNaturalist community for verification. That human review ensures the quality of iNaturalist’s data, but the original identification suggested by the application is often spot on.

The iNaturalist computer vision model is built by a process that teaches computers to recognize patterns in images. Computer vision trains a computer to derive meaningful information from our images, enabling it to interpret the visual world. When we supply an image, the system processes this visual input to identify features like edges, corners, and textures, then analyzes those features to classify the objects in the photo.
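
To make that a little more concrete, here is a toy sketch in Python. This is our own illustration, not iNaturalist’s code: it slides a classic Sobel edge filter across a tiny made-up “image” and produces large values wherever brightness changes sharply, which is essentially how the earliest stage of a vision model picks out edges.

```python
# Toy feature extraction: a Sobel filter highlights vertical edges.
import numpy as np

# A 6x6 grid of brightness values: dark on the left, bright on the right.
image = np.array([
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
], dtype=float)

# Classic Sobel filter for detecting vertical edges.
sobel = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
], dtype=float)

# Slide the 3x3 filter over every 3x3 patch of the image.
rows, cols = image.shape
edges = np.zeros((rows - 2, cols - 2))
for r in range(rows - 2):
    for c in range(cols - 2):
        patch = image[r:r + 3, c:c + 3]
        edges[r, c] = np.sum(patch * sobel)

print(edges)  # large values mark the dark-to-bright boundary in the middle
```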

The iNaturalist computer vision system is trained on users’ photos and identifications to provide taxon (a group of organisms forming a unit, such as a species, family, or class) identification suggestions. The model’s identification abilities reflect the collective human expertise of the iNaturalist community. The model updates every one to two months, not in real time. Each time iNaturalist trains a new model, it uses a snapshot of observations that have complete data (coordinates, date, and media) and a taxon. Images from observations of captive and cultivated organisms are included, not just those of wild organisms. There must be at least one hundred photos and sixty observations of a taxon for it to be included in the model. Observations do not need to be “research grade” (verified by the iNaturalist community) to be used in training, but verified observations are prioritized. As iNaturalist grows, the pool of images for training grows too.
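
To make the “complete data” requirement and the photo and observation thresholds concrete, here is a hypothetical Python sketch of that kind of filtering. The class, field names, and helper functions are our own invention; the real iNaturalist training pipeline is more involved.

```python
# A hypothetical sketch of the filtering described above; not iNaturalist's code.
from dataclasses import dataclass

@dataclass
class Observation:
    taxon: str | None          # e.g. "Sciurus griseus", or None if unidentified
    latitude: float | None
    longitude: float | None
    observed_on: str | None    # date of the observation
    photo_count: int           # how many photos are attached

def has_complete_data(obs: Observation) -> bool:
    """Keep only observations with a taxon, coordinates, a date, and media."""
    return (obs.taxon is not None
            and obs.latitude is not None
            and obs.longitude is not None
            and obs.observed_on is not None
            and obs.photo_count > 0)

def eligible_taxa(observations: list[Observation],
                  min_photos: int = 100,
                  min_observations: int = 60) -> set[str]:
    """Return the taxa with enough photos and observations to enter training."""
    photo_totals: dict[str, int] = {}
    obs_counts: dict[str, int] = {}
    for obs in filter(has_complete_data, observations):
        photo_totals[obs.taxon] = photo_totals.get(obs.taxon, 0) + obs.photo_count
        obs_counts[obs.taxon] = obs_counts.get(obs.taxon, 0) + 1
    return {taxon for taxon, count in obs_counts.items()
            if count >= min_observations and photo_totals[taxon] >= min_photos}
```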

When a user submits a photo to iNaturalist for species identification, the following happens. The submitted photo is sent to iNaturalist’s computer vision model. This model, trained on a vast dataset of observations and their associated images, analyzes the visual characteristics of the organism in the photo. It identifies patterns, shapes, colors, and other features and compares them against its learned knowledge base. Based on this analysis, the computer vision model generates a list of possible species identifications. These suggestions are ranked by likelihood, with the most probable matches appearing first. The model also considers the location where the observation was made, using a “geomodel” that predicts which species are likely in a given area.
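
As a rough illustration of that last step, here is a hypothetical sketch of how visual scores and a location prior might be combined into a ranked list. The function and the numbers are made up for this post; this is not iNaturalist’s actual algorithm.

```python
# Illustrative sketch: combine vision scores with a location-based prior.
def rank_suggestions(vision_scores: dict[str, float],
                     geo_likelihood: dict[str, float]) -> list[tuple[str, float]]:
    """Weight each species' visual score by how plausible it is at the location."""
    combined = {
        species: score * geo_likelihood.get(species, 0.01)  # rarely seen locally -> downweighted
        for species, score in vision_scores.items()
    }
    total = sum(combined.values()) or 1.0                    # normalize so values sum to 1
    return sorted(((s, v / total) for s, v in combined.items()),
                  key=lambda pair: pair[1], reverse=True)

# Made-up scores for three hypothetical suggestions.
vision = {"Species A": 0.45, "Species B": 0.40, "Species C": 0.15}
geo = {"Species A": 0.9, "Species B": 0.1, "Species C": 0.8}
for species, probability in rank_suggestions(vision, geo):
    print(f"{species}: {probability:.2f}")
```

With made-up inputs like these, a species the vision model only slightly prefers can drop in the ranking if the geomodel says it is rarely found where the photo was taken.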

On August 15, 2025, the latest computer vision update included 106,407 taxa. The model is the result of millions of observations and identifications shared on iNaturalist. As additional taxa are discovered and classified, these groups (primarily species, but also genera, families, and other taxonomic ranks) are documented by the community and become candidates for future versions of the model. To keep up with these changes, iNaturalist updates the model every month or two so that the community can benefit from the improvements.

iNaturalist’s computer vision model is powered by a convolutional neural network (CNN), a type of “deep learning” model that uses multiple layers to learn complex patterns and make decisions from large datasets. CNNs excel at image recognition and processing. Convolutional neural networks, loosely modeled after the human visual cortex, are widely employed in computer vision applications such as facial recognition, medical imaging, and autonomous vehicle systems.
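
For readers who want to see what “multiple layers” looks like in code, here is a minimal CNN sketch using the PyTorch library. It is only a toy with a handful of layers and ten made-up taxa; iNaturalist’s production model is far larger and trained on millions of photos.

```python
# A minimal toy CNN, for illustration only; not iNaturalist's production model.
import torch
import torch.nn as nn

class TinySpeciesCNN(nn.Module):
    def __init__(self, num_taxa: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn simple edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # shrink the image, keep strong responses
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine simple features into shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_taxa)  # one score per toy taxon

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # 3x224x224 photo -> 32x56x56 feature maps
        x = x.flatten(1)           # unroll the feature maps into one long vector
        return self.classifier(x)  # raw scores, one per taxon

model = TinySpeciesCNN(num_taxa=10)
photo = torch.randn(1, 3, 224, 224)     # stand-in for one 224x224 RGB photo
scores = model(photo).softmax(dim=1)    # turn raw scores into probabilities
print(scores.argmax(dim=1))             # index of the most likely toy taxon
```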

Making an Observation with the iNaturalist Phone App

We trust this information provides a little clarity on how applications identify species from your photographs. All this information came from simply Googling or from Wikipedia.
