Here's a nice story from the RMO about LGMS's 20-Time innovation project.

Not a summer goes by in Canmore without multiple stories about bear/human interactions, usually of the negative kind. Bears are continually entering town searching for easy food - often fruit trees, bird feeders or trash cans - and the town has made a major effort to educate locals and visitors alike that "A Fed Bear Is A Dead Bear".  As part of his 20 Time project, Colin Fearing was interested in creating a video game that could educate the public about what it's like to be a bear continually struggling to find food. 

A player takes the role of a young bear that can wander around a large forested area. Every action consumes precious energy, replenished only by scarce food found through exploration. The player must balance energy and action to build up enough fat reserves before hibernation. The game has seasons: as time ticks on, the weather changes, snow starts to fall, and there is growing urgency to find a suitable den in which to hibernate. A human village provides an easy source of food, but it leads the bear away from natural hibernation spots.
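The actual game is written in C# in Unity, but the core energy-budget idea is simple enough to sketch. Here's a rough, hypothetical Python version (the action costs, food values, and names below are ours, not from Colin's code):

```python
# Hypothetical sketch of the bear's energy budget; the real game is C# in Unity.
ACTION_COST = {"walk": 1, "run": 3, "dig": 2, "climb": 2}     # energy burned per action
FOOD_ENERGY = {"berries": 5, "fish": 20, "trash_can": 25}     # energy gained per food item

class Bear:
    def __init__(self, energy=100):
        self.energy = energy        # runs down with every action
        self.fat_reserves = 0       # must be high enough before winter to survive hibernation

    def act(self, action):
        self.energy -= ACTION_COST[action]
        return self.energy > 0      # running out of energy ends the game

    def eat(self, food):
        gained = FOOD_ENERGY[food]
        self.energy += gained
        self.fat_reserves += gained // 2    # surplus food fattens the bear up

def snow_is_falling(day, days_per_season=30):
    # As the in-game clock ticks through the seasons, winter (and the need for a den) gets closer.
    return day >= 3 * days_per_season
```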

Here's a quick video of what it's like to play: 

This was actually several projects in one: a crash course in C# programming, a crash course in the Unity game engine, and a large dose of animation and character design. The original idea was to include a "habituation" value that increased as the bear spent time around humans eating from easy-to-find fruit trees. Too much habituation and the bear would be relocated or killed. But creating a real video game turned out to be a fairly major effort, and that particular feature was left for next time.
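The habituation mechanic didn't make it into this version, but the idea itself is easy to sketch. A hypothetical Python version (the thresholds and names are ours) might look like:

```python
# Hypothetical sketch of the cut "habituation" mechanic; not part of the shipped game.
HABITUATION_LIMIT = 100

class HabituationMeter:
    def __init__(self):
        self.value = 0

    def tick(self, near_humans, eating_human_food):
        # Time spent near town habituates the bear; easy human food is worse.
        if near_humans:
            self.value += 1
        if eating_human_food:
            self.value += 5

    def game_over(self):
        # "A Fed Bear Is A Dead Bear": too much habituation and the bear
        # is relocated or killed.
        return self.value >= HABITUATION_LIMIT
```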

Screenshots: the bear character and several views of the game map.

You can find out more about this project on Colin's website. 

You can find out more about the 20 Time project in this nice Rocky Mountain Outlook article.

The e-Nable Phoenix prosthetic hand is a 3D printable gripper hand designed as a cheap, easily produced prosthetic for areas of the world where assistive devices are hard to obtain. The idea is to use consumer 3D printers, cheap PLA, and (comparatively) cheap rigging materials to generate a customizable hand, possibly even in the field. 

That sounded like a pretty good mission, so Finn D thought he'd give it a try as part of his 20 Time project. We first learned about 3D printing with PET-G, a tougher and more durable material than the usual PLA, but one with a few idiosyncrasies on the print bed. Next, we printed all the parts. Although each individual part was mostly easy enough, there were quite a few of them and it was difficult to keep track of them all. At one point, part of the hand cuff needed to be melted into place on a school hotbed. Finally, Finn rigged the hand with string and plastic gripper ends to produce a working hand.

Photos: printing and assembling the prosthetic hand.
Click on the videos below to see an example of the hand in action. Finn has the use of both hands, so we are simulating the wrist-stump action, but you get the idea. 
e-Nable Phoenix hand v2 in action
We did not have an actual user in mind for this project, so everything was printed at 100% scale. A good next step would be to find a user and print a custom-measured, custom-scaled version.

 

Rien has been working on her Machine Learning with Google AIY Vision 20 Time project and it's time for an update. After walking through all the demos (joy detector, object recognition), she started the process of training a machine learning classifier on some new data.

The overall goal is to use machine learning and computer vision to help sort recycling from non-recycling, either as part of an automated sorter or a reverse-recycling machine. To determine what is recycling and what is not, she needed training data. The first pass involved taking hundreds of pictures of two types of bottles and two types of cans, all in front of a little stage that allowed for a controlled background.

The training stage.

We made the assumption that in a reverse-recycling or recycling-stream sorting situation, we'd be able to control what our background looked like. Each object was photographed from all angles, upside down, and even crushed. 
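With a Raspberry Pi camera like the one in the AIY kit, a capture loop for one category looks roughly like this (the label name and folder layout below are just placeholders):

```python
# Rough sketch: capture labelled training photos against the controlled stage background.
# Assumes a Raspberry Pi camera; the label name and folder layout are placeholders.
import os
import time
from picamera import PiCamera

LABEL = "coke_can"                                 # one folder per category
OUT_DIR = os.path.join("training_data", LABEL)
os.makedirs(OUT_DIR, exist_ok=True)

with PiCamera(resolution=(1024, 768)) as camera:
    camera.start_preview()
    time.sleep(2)                                  # let the exposure settle on the stage lighting
    for i in range(200):                           # hundreds of shots, rotating the object between frames
        camera.capture(os.path.join(OUT_DIR, "{}_{:04d}.jpg".format(LABEL, i)))
        time.sleep(0.5)
    camera.stop_preview()
```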

Example training data.

She then ran a TensorFlow model adapted from the TensorFlow-for-Poets demo. The classifier was able to place testing data into the four categories with 100% accuracy. She then expanded her training data to 13 categories of bottles and cans, again taking hundreds of pictures from all angles. Even with a much wider set of possibilities, the classifier was able to place an incoming picture into one of the 13 categories with almost 100% accuracy - that is, it could determine that a Coke can was a Coke can, a Nestea bottle was a Nestea bottle, etc., even though it had never seen that image before.
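The TensorFlow-for-Poets workflow leaves you with a retrained graph file and a text file of labels; classifying a single new image with them looks roughly like the sketch below (this assumes the TensorFlow 1.x API and the codelab's default file and tensor names, which may differ from Rien's setup):

```python
# Rough sketch of classifying one image with a TensorFlow-for-Poets retrained graph.
# Assumes TensorFlow 1.x and the codelab's default file and tensor names.
import numpy as np
import tensorflow as tf

GRAPH_FILE = "retrained_graph.pb"
LABELS_FILE = "retrained_labels.txt"

def load_graph(path):
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

def classify(image_path):
    labels = [line.strip() for line in open(LABELS_FILE)]
    graph = load_graph(GRAPH_FILE)
    with tf.Session(graph=graph) as sess:
        image_data = tf.gfile.GFile(image_path, "rb").read()
        # 'DecodeJpeg/contents:0' and 'final_result:0' are the default tensor names
        # for the Inception-based retraining script; MobileNet graphs differ.
        probs = sess.run("final_result:0",
                         {"DecodeJpeg/contents:0": image_data})[0]
        best = int(np.argmax(probs))
        return labels[best], float(probs[best])

print(classify("new_test_photo.jpg"))
```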

Taking pictures.

Finally, she grouped the 13 categories into "recycling" and "non-recycling" and retrained the model. As expected, this worked well - but the real challenge was testing the trained classifier on pictures of objects it had never seen before. These were not examples from the training data; these were entirely new objects (e.g., beer cans, newly branded bottles). The classifier got 7 of 9 correct, and the 2 misclassified pictures were of the same object (a Brisk bottle).
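Regrouping for the two-class model is mostly a file-shuffling exercise, since the retraining script just reads one folder per class. A sketch (the folder and category names below are placeholders, not Rien's actual labels):

```python
# Rough sketch: regroup the 13 per-brand folders into two classes before retraining.
# Category and folder names are placeholders; the retraining script reads one folder per class.
import os
import shutil

RECYCLING = ["coke_can", "nestea_bottle", "water_bottle"]      # ...and so on, up to 13
NON_RECYCLING = ["chip_bag", "candy_wrapper"]

def regroup(src="training_data", dst="training_data_2class"):
    for group, categories in [("recycling", RECYCLING),
                              ("non_recycling", NON_RECYCLING)]:
        os.makedirs(os.path.join(dst, group), exist_ok=True)
        for category in categories:
            for name in os.listdir(os.path.join(src, category)):
                shutil.copy(os.path.join(src, category, name),
                            os.path.join(dst, group, "{}_{}".format(category, name)))

regroup()
```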

This bottle was misclassified. Possibly confused by similar colors?

By the nature of a neural network's operation, it is not immediately obvious how the model comes to a conclusion. Is it color, shape, reflection, size? We hypothesize it is keying on color and mixing the Brisk bottle up with the similar bright yellows on the Nestea can. The obvious solution, of course, is to take more training data. In a real recycling-sorting scenario, it would be important to get pictures of as many types of containers as possible (not just 13 brands), so if a never-before-seen Brisk bottle was being misclassified, some pictures of the Brisk bottle and a retraining pass should solve the problem.

This approach seems like it has some real commercial potential. Even if the machine learning could not classify everything properly, then given a fairly controlled camera & background environment it should be able to successfully identify a large number of recycling/non-recycling objects. You can imagine a conveyor-belt system where a robot arm pre-sorts known objects, leaving only the "unknown" and "unsure" objects for a human. This could greatly reduce the cost of sorting a city's recycling stream, and make recycling much more cost-effective and widespread.

A very interesting 20-Time project! 

Design - build - test - repeat. Here at MMM we are okay with lots of design failure, as long as each successive failure teaches us something interesting for the next design. Here are a few pictures of a recent LGMS 20 Time project that show a nice design progression.

The initial project was to design a snowball launcher, but Sage was a little stuck getting past the first rough 3D model in SketchUp. He decided to switch to a slingshot and use 3D printing to rapid-prototype successive designs.

His first design worked, but there were some flaws - not all the parts intersected properly, some structures were a little thin and breakable, and the shooting power was weak. Sage made a quick return to SketchUp and v2 was soon on the 3D printer. That version worked much better and solved a lot of the previous problems, but he realized that for greater velocity he needed a thicker elastic. So, in Version 3, Sage widened and lengthened, strengthened and moved, and in less than a day he had another prototype to test. Version 4 made things even beefier and a little more comfortable in the hand - and it was able to consistently shoot metal balls hard enough to easily go through cardboard.

Sage's eagerness to keep iterating and fixing each progressive design made this a great project and a real success. 

Prototype and 3D print.
Printing in PET-G for extra strength.
Multiple versions in SketchUp.
The design progression.
The design progression and happy engineer.
 

Rien has been building Google's AIY Vision kit. It's a Raspberry Pi Zero with a camera, light, buzzer, and button that uses TensorFlow to do machine learning. After some issues getting the little beast to connect to the camera, everything now works and she's starting to experiment with some of the built-in machine learning models.

We first tried the object-detection model, which attempts to identify "things" in an incredibly busy scene. We had some successes (including correctly identifying an orange), although it kept wanting to identify Paul as "dough". No kidding. Is that a win or a fail? 

You can't tell what features the TensorFlow model is using to make decisions about the world, but we were able to make some guesses. The curtains, pulled back and tied with an elastic, kept getting identified as a balloon, and yes, we could see the similarities. The orange, when squashed, got identified as a squash or a lemon or a ping pong ball. The lighting made a big difference and the background made a big difference. Of course, it is a stunningly hard problem to identify random objects in a crowded scene, so the fact that we could tag anything was a win.

The eventual goal is to use machine learning to make decisions about what is and isn't recycling, so next up is to build a little stage where we can control the background and lighting a lot better. This will simulate a recycling sorting environment. 

We also used the AIY kit to detect faces. This turned out to be a lot easier than arbitrary object detection. The model worked pretty well, successfully detecting The Joker, side profiles, faces in pictures, and so on - but NOT emojis, dogs, and the like.
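The kit's bundled Python examples make the face detector fairly compact to run. A sketch along the lines of the face-detection example (module paths follow the AIY library and may vary by release):

```python
# Rough sketch of on-device face detection with the AIY Vision kit's Python library.
# Based on the kit's bundled face-detection example; module paths may vary by release.
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

with PiCamera(sensor_mode=4, resolution=(1640, 1232)) as camera:
    camera.start_preview()
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            faces = face_detection.get_faces(result)
            print("Found {} face(s)".format(len(faces)))
    camera.stop_preview()
```

The object-classification demo follows the same pattern, swapping in the kit's image-classification model.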

Oh, and we printed a nice little box in PET-G to keep all the electronics safe! 

 

We have the Raspberry Pi connected to its own monitor.
The camera is looking at a computer mouse.
This image gets correctly identified as being about a computer mouse.
Our makeshift orange stage.
Correctly guessed orange - only with 0.35 probability though, so fairly unsure.
The guess is "lemon". Makes some sense - the orange looks pretty squashed in this view.
Got all the faces in this photo.
Our nice PET-G box.
Tricky to get the parts nicely tucked away.
Rebuilding the setup inside of the box.
