
The Three Biggest Lessons We Learned Working With Computer Vision

February 03, 2020

The Yeti team loves a challenge, so when Punch Bowl Social approached us with a computer vision project, we leapt at the opportunity.

Computer vision is an interdisciplinary field drawing on AI and deep learning. In short, it uses cameras and deep learning models to let computers interpret and understand the visual world the way the human visual system does, automating tasks that currently require human vision.


We worked with Punch Bowl Social to create an interactive, digital, self-scoring dart experience using real dartboards and real steel tip darts. Using computer vision, our software detects the darts on the dartboard, where they have landed, and when they have been removed. This scoring information is displayed on a dart lounge display screen.


This was a fun and challenging project for the team here at Yeti, and it came with some lessons learned - chief among them that computer vision projects bring an entirely new level of complexity compared to your average software development project (if you couldn't guess). Depending on your specific computer vision use case, this complexity can manifest in a few different ways.

Here are my top tips for managing complexity within your CV project, based on our recent work with the Dart Lounge.

1. Invest Time in Data Collection

On a computer vision project, a big portion of time will likely be spent working on your logic and algorithms using data you’ve collected. For us, this meant pictures and videos of darts being thrown. While a project is evolving and being iterated on, you will need to continue to collect and refine your test data to match other changes that have happened with the project.

For example, over the course of the Dart Lounge project we worked with many different variables. Prototypes, darts, dartboards and lighting situations evolved throughout the project, and each evolution required collecting up-to-date, accurate test data before we could continue working on the computer vision part of the project.

At a certain point, it became clear that collecting that test data had become a large time burden in itself, and that we needed to invest development time in automating and streamlining the collection process.

Looking back, I wish we had invested in some of the development tools for the computer vision portion of this project at the start.
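One simple tool worth building early is a capture helper that files every test image away with a record of the conditions it was taken under, so stale data can be filtered out when the physical setup changes. Here's a minimal sketch of that idea; the field names (board version, lighting, camera position) are illustrative assumptions, not part of the actual Dart Lounge codebase:

```python
import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class CaptureConditions:
    """Environment metadata recorded alongside each test capture.

    These fields are hypothetical examples -- track whatever variables
    actually change between iterations of your hardware setup.
    """
    board_version: str
    lighting: str
    camera_position: str


def save_capture(root: Path, image_bytes: bytes,
                 conditions: CaptureConditions) -> Path:
    """Store a captured frame next to a JSON sidecar describing the
    conditions it was taken under, so outdated data is easy to exclude
    once the physical setup evolves."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    capture_dir = root / conditions.board_version / conditions.lighting
    capture_dir.mkdir(parents=True, exist_ok=True)
    image_path = capture_dir / f"{stamp}.jpg"
    image_path.write_bytes(image_bytes)
    sidecar = capture_dir / f"{stamp}.json"
    sidecar.write_text(json.dumps(asdict(conditions)))
    return image_path
```

The payoff is that when the dartboard or lighting changes mid-project, you can select only the captures whose sidecar metadata matches the current setup instead of re-validating your whole archive by hand.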

2. Reduce Variables in Your Environment

As I just mentioned, the Dart Lounge evolved quite a bit from Punch Bowl Social’s idea, to a proof of concept, to a working prototype, to the version that is sitting in their actual locations. Through each phase there were constant iterations of physical factors, some in our control and some outside of our control.

It quickly became apparent that we needed to identify the environmental factors affecting the Dart Lounge's computer vision code, and to consciously assess every change that was being proposed.

With computer vision, having a defined and known problem set is what makes it easiest to build accurate software. Lesson learned: we should have taken inventory of these variables earlier, and communicated to all stakeholders both the decisions being made about them and how later changes could negatively affect our work product.
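One lightweight way to keep that inventory honest is to write the assumed environment down as data and fingerprint it, so any drift between the conditions your test data assumes and the current setup fails loudly. This is a sketch of the idea, not something from the actual project; the inventory keys and values below are invented examples:

```python
import hashlib
import json

# Hypothetical inventory of the physical variables the detection code
# assumes. A real project would list whatever matters for its setup.
ENVIRONMENT = {
    "board_model": "steel-tip-regulation",
    "dart_type": "steel-tip",
    "lighting": "fixed-overhead-led",
    "camera_mount": "ceiling-bracket-v3",
}


def environment_fingerprint(env: dict) -> str:
    """Stable short hash of the assumed environment. Store it alongside
    collected test data; a mismatch at load time flags that the physical
    setup has changed since the data was captured."""
    canonical = json.dumps(env, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]
```

Because the fingerprint changes whenever any tracked variable changes, a proposed hardware tweak becomes a visible diff in version control rather than a silent drop in detection accuracy.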

3. Live vs Test

We learned that running data through a testing harness or simulated flow and running it in a live situation were never exactly the same, and that making the live path efficient as a development process was, unfortunately, impossible. Within our timeline, we could run test data through our code rapidly, or run live footage - but only very slowly.

Initially, this meant we found a lot of discrepancies between what happened in our testing suite and what happened while playing the live game. It also made it very difficult to debug why something wasn't working quite right during live play, leaving us with a lot of guesswork.

With regular software development, the goal is to always have the ability to replicate the issue before you start to debug and fix. Why should it be different with computer vision?

We realized that, rather than using our testing harness only as a development tool, we should have also built a testing suite that ran the longer live-footage-based tests on pull requests and other changes to the codebase via continuous integration. Had we done this near the beginning of the project, we would have struggled far less with debugging scenarios during live gameplay.

Working on Punch Bowl Social's Dart Lounge taught us a great deal about running a software product that integrates computer vision in a dynamic environment with unpredictable end users (the dart lounge is cocktail friendly!).

While some of these insights may be a bit specific to our use case, we also learned quite a bit about general best practices in developing computer vision based software.

If you're looking for help with a computer vision based software product, or are struggling with issues in your current project, please feel free to reach out to us!

Rudy is a CTO + Founding Partner at Yeti. He found his passion for technology as a youth, spending his childhood developing games and coding websites. Rudy now resides in the Yeti Cave, where he architects Yeti's systems and heads up project production. Follow Rudy on Twitter.
