Tech Tip Tuesday: Computer Vision

Welcome back to another installment of Tech Tip Tuesday everyone! I’ve really appreciated your feedback and conversations in the comments section every week. It’s helpful to know what works and what doesn’t, and exciting to see the information that gets shared.

Some of you tuned in and offered your favorite identification resources two weeks ago. For those of you who missed that epic discussion and want some new guides to check out, you can find all of the resources discussed (plus some extras) on the Vermont Atlas of Life Identification Resource Guide. Although TTT has moved on to other topics, I highly encourage you to keep suggesting additional beloved resources.

Sorry, no anecdotes about the weather this morning. I’m still recovering from the disappointment of a very slushy cross-country skiing adventure this past weekend. However, if this warm weather keeps up, we may begin to see spring species out much earlier than usual – who knows? Keep a lookout for any sightings of seasonally out-of-place plants and critters, and be sure to add them to iNaturalist!

This Week on Tech Tip Tuesday

When I first started using iNaturalist, I loved that it helped me identify observations the moment I began to upload. In fact, I still love this feature – even if I can’t always get a species-level identification, I enjoy the instantaneous feedback about my observation. I’m sure I’m not alone. But like me, you may be wondering: how does iNaturalist accomplish this? That’s what I’m here to talk about today.

It all begins with artificial intelligence (AI). For a lot of people, the phrase “AI” may conjure up images of human-like robots (Ash and Bishop from the “Alien” movies usually come to mind for me). While this interpretation isn’t wrong, it represents only a very specialized (and futuristic) subset of AI’s potential. In general, AI refers to any machine operating in ways that mimic human intelligence. Many of us probably aren’t even aware of all the ways we interact with AI on a regular basis. One example is the suggestions Amazon and Netflix provide on what to buy or watch next based on your previous interactions with the site. In this case, a machine has learned how to make complex decisions about what to recommend.

For iNaturalist, the AI is trained to recognize distinct species and groups of organisms through a process called Computer Vision. For the system to learn, large numbers of labeled images are fed through a model that learns to associate visual features with each label. The images are previously uploaded iNaturalist observations, and each label is the research grade identification attached to that image. Once trained, the model can assign labels to new, unlabeled pictures that share the same characteristics.
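If you’re curious what that training step looks like in code, here is a minimal sketch of the idea in Python using the PyTorch library. It is not iNaturalist’s actual code – the folder layout, model choice, and settings are all assumptions for illustration – but it shows the core loop: labeled photos go in, and the model gradually learns to associate image features with each label.

```python
# A minimal sketch of the training idea (illustrative only, not iNaturalist's code).
# Assumed folder layout: photos/<species_name>/<image>.jpg
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # make every photo the same size
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("photos", transform=transform)  # labels come from folder names
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# A standard convolutional network with one output per species label
model = models.resnet18(num_classes=len(dataset.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:               # one pass over the labeled photos
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong were the guesses?
    loss.backward()                         # nudge the model toward the correct labels
    optimizer.step()
```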

Not every species has gone through this process. For Computer Vision to develop a model for a species, the species needs to have a minimum number of research grade images – twenty, to be exact. The observations fed into the model also need to come from twenty or more different users. This protects the model against possible errors or biases associated with individual users. Based on these criteria, about 10,000 species were eligible when Computer Vision was originally implemented. The last reported numbers indicate that 85% of all documented species are labeled and that a new species crosses the twenty-observation threshold every 1.7 hours.
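To make the eligibility rule concrete, here is a small, self-contained Python sketch (my own toy example with made-up field names, not how iNaturalist stores its data) that keeps only species with at least twenty research grade photos from at least twenty different observers.

```python
# Toy illustration of the eligibility rule described above (assumed data format).
MIN_PHOTOS = 20
MIN_OBSERVERS = 20

def eligible_species(observations):
    """observations: list of dicts like
    {"species": "Dryobates pubescens", "user": "some_observer", "research_grade": True}
    """
    photo_counts, observer_sets = {}, {}
    for obs in observations:
        if not obs["research_grade"]:
            continue
        sp = obs["species"]
        photo_counts[sp] = photo_counts.get(sp, 0) + 1
        observer_sets.setdefault(sp, set()).add(obs["user"])
    return [
        sp for sp in photo_counts
        if photo_counts[sp] >= MIN_PHOTOS and len(observer_sets[sp]) >= MIN_OBSERVERS
    ]
```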

So, how does this come into play in your daily life as an iNaturalist user? When you go to identify your observations, iNaturalist provides a list of possible species and genera that your observation could belong to. Each suggestion is weighted by how closely your photo matches what the model has learned for that taxon. Computer Vision provides this weighted list rather than a single definitive identification because the technology is not perfect, and the list leaves room for human judgement (one time iNaturalist suggested that my White-headed Woodpecker was a Giant Panda…). If your observation falls within the roughly 15% of species lacking a model, the program will provide a coarser suggestion, such as a genus or family, and leave it to human users to establish the species-level identification.
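Here is a simplified Python sketch of what producing that weighted list, and falling back to a coarser rank, might look like. The function name, the confidence threshold, and the genus_of lookup are all assumptions for illustration; they are not part of iNaturalist’s real system.

```python
# Simplified sketch of a weighted suggestion list with a coarser fallback
# (illustrative only; names and thresholds are assumptions).
import torch

def suggest(model, image_tensor, class_names, genus_of, top_k=5, min_confidence=0.5):
    with torch.no_grad():
        scores = torch.softmax(model(image_tensor.unsqueeze(0)), dim=1)[0]
    best = torch.topk(scores, top_k)
    suggestions = [(class_names[int(i)], float(s))
                   for s, i in zip(best.values, best.indices)]

    if suggestions[0][1] >= min_confidence:
        return suggestions                    # confident species-level suggestions
    # Otherwise, pool the scores by genus and offer the coarser suggestion instead
    genus_scores = {}
    for name, score in suggestions:
        genus_scores[genus_of[name]] = genus_scores.get(genus_of[name], 0.0) + score
    return sorted(genus_scores.items(), key=lambda kv: -kv[1])
```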

Ultimately, the program is always learning. Any new research grade observation that gets added is run through the model, helping the program improve its identification abilities. That’s one reason why it’s very important to make sure that research grade identifications are accurate.

AI and Computer Vision are complex and fascinating areas of work that are becoming increasingly common in our daily lives. If you want to learn more, iNaturalist and The Atlantic wrote great articles explaining how iNaturalist uses Computer Vision.

TTT Task of the Week

First, I encourage you all to go and read the articles linked above. They’re not long and I don’t doubt that you will walk away from them with a better understanding of how iNaturalist works. Use this info to impress your friends! Second, look back through some of your research grade identifications and use some of the resources included in TTT #12 and the Vermont Atlas of Life website to verify that users provided the correct identifications.

Thank you for helping us map Vermont’s web of life and happy observing!

Posted on January 28, 2020, 04:43 PM by emilyanderson2

Comments

Computer Vision seems pretty poor for bryophytes. It keeps suggesting just a few species, and these are sometimes quite rare. For example, Syntrichia ruralis comes up quite often, is always incorrect in my experience, and is quite rare in Vermont and limited to specific habitats. I wish computer vision could take rarity into account. There is enough information out there for this to be built into the model.

Posted by dorothy about 4 years ago
