5 Facts You Must Know About Google's AIY Kits

Google's Artificial Intelligence Yourself (AIY) kits can be a great introduction to AI and machine learning concepts. Here's a breakdown of five key features you should know about.

Dr. Don Wilcher

January 8, 2019

Google Artificial Intelligence Yourself (AIY) kits (Vision and Voice) allow for artificial intelligence and machine learning exploration and experimentation among engineers, makers, and STEM students. (Image source: Google)

You can explore artificial intelligence and machine learning on your own without breaking the bank. Earlier this year, Google released two development platforms that it calls Artificial Intelligence Yourself (AIY) kits: one focused on natural language processing, and the other on computer vision. At a retail cost of $49.99 for the voice kit and $89.99 for the vision kit, Google's AIY kits are an ideal solution for makers, hobbyists, and even more serious engineers looking to get their feet wet in AI.

Here are five things everyone should know about the AIY kits before jumping in. And for a hands-on exploration of these AIY kits, Design News is hosting a CEC webinar on Predictive Analytics, which includes lab exercises. More information can be found here.


1). There's a Raspberry Pi Zero WH at the Core.

The Google AIY kits are powered by a Raspberry Pi Zero WH (Wireless with Header) single-board computer. The main processor is a BCM2835 SoC with an operating speed of 1 GHz, paired with 512MB of RAM, the same amount found on the Raspberry Pi Model A+. Also included on the Pi Zero WH is a BCM43438 wireless chip providing built-in WiFi and Bluetooth Low Energy (BLE). The board's computing and wireless electronic components are populated on a 65 x 31 x 11.6-mm (2.6" x 1.2" x 0.5") PCB, and the complete board weighs 11.5g (0.4oz).
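
If you want to confirm which board and SoC your kit is running on, a quick check from Python works well. Here is a minimal sketch assuming Raspberry Pi OS, which exposes the board model through the device tree:

```python
# Minimal sketch: confirm which Raspberry Pi board the AIY kit is running
# on. Assumes Raspberry Pi OS, where the device tree exposes the board
# model as a null-terminated string at a standard path.
def read_first_line(path):
    try:
        with open(path) as f:
            return f.readline().strip('\x00\n')
    except OSError:
        return 'unavailable'

if __name__ == '__main__':
    # Prints something like "Raspberry Pi Zero W Rev 1.1"
    print('Board:', read_first_line('/proc/device-tree/model'))
```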

2). The Google AIY Voice Kit Is Enabled by Google Assistant.

The AIY voice kit processes speech using Google Assistant's natural language processing (NLP) algorithms. Google Assistant is an AI virtual assistant designed mainly for mobile and smart home devices. NLP is an area of computer science and AI concerned with the interactions between computers and human (natural) languages. On the Raspberry Pi Zero WH, the goal of NLP is to process and analyze large amounts of natural language data (speech converted to text).
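
To get a feel for how this looks in code, below is a minimal sketch modeled on the voice kit's assistant gRPC demo. The module names (aiy.assistant.grpc, aiy.audio, aiy.voicehat) come from the v1 AIY Python library and should be treated as assumptions if your kit's software version differs:

```python
# Minimal sketch of a push-to-talk Assistant loop, modeled on the AIY
# Voice Kit's assistant gRPC demo. Module names are from the v1 AIY
# Python library and are assumptions if your kit version differs.
import aiy.assistant.grpc
import aiy.audio
import aiy.voicehat

def main():
    button = aiy.voicehat.get_button()           # arcade button on the kit
    assistant = aiy.assistant.grpc.get_assistant()
    with aiy.audio.get_recorder():
        while True:
            print('Press the button and speak...')
            button.wait_for_press()
            text, audio = assistant.recognize()  # NLP round trip to Google
            if text:
                print('You said:', text)
            if audio:
                aiy.audio.play_audio(audio)      # Assistant's spoken reply

if __name__ == '__main__':
    main()
```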

The AIY Voice kit can hear and play information from the web using a voice bonnet. (Image source: Don Wilcher)

3). The Google AIY Voice Kit Is an Intelligent Speaker.

Using NLP, the Google AIY Voice kit can function as an intelligent speaker, similar to Amazon's Echo Dot: users can issue voice commands to get the weather forecast, play music, or look up other information on the Internet via Google's cloud service.

After obtaining Google Cloud credentials, the voice kit can retrieve a variety of information from the web. The kit's workflow walks through a series of steps, from obtaining the Raspberry Pi Zero WH's IP address over WiFi or BLE to connecting to Google's Cloud Console with those credentials.
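
The credentials step boils down to placing an OAuth client-secrets JSON file from Google's Cloud Console on the Pi. Below is a minimal sanity-check sketch; the ~/assistant.json location follows the kit's documentation and is an assumption here:

```python
# Minimal sketch: sanity-check the Google Cloud OAuth client-secrets file
# that the voice kit setup places on the Pi. The ~/assistant.json path is
# the location used in the kit's documentation and is an assumption here.
import json
import os

def check_credentials(path=os.path.expanduser('~/assistant.json')):
    if not os.path.exists(path):
        return 'No credentials file found at ' + path
    with open(path) as f:
        secrets = json.load(f)
    # OAuth client-secrets files nest values under 'installed' or 'web'
    info = secrets.get('installed') or secrets.get('web') or {}
    if 'client_id' in info:
        return 'Credentials OK (client_id starts: %s...)' % info['client_id'][:12]
    return 'File exists but does not look like a client-secrets file'

if __name__ == '__main__':
    print(check_credentials())
```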

A key module that allows NLP commands to be received, processed, and sent to the audio speaker is the voice bonnet. The voice bonnet is a small audio processing board that attaches to the Raspberry Pi Zero WH SBC by way of a 40-pin dual-inline female header. The voice bonnet is enabled by the Pi Zero WH and carries an onboard audio amplifier capable of driving an 8-ohm speaker.
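
A one-line way to exercise the bonnet's amplifier and speaker is the text-to-speech helper in the v1 AIY Python library; the aiy.audio.say() call is an assumption if your kit's software version differs:

```python
# Minimal sketch: drive the voice bonnet's amplifier and 8-ohm speaker
# with text-to-speech. aiy.audio.say() is from the v1 AIY Python library
# and is an assumption if your kit's software version differs.
import aiy.audio

if __name__ == '__main__':
    aiy.audio.say('Hello from the AIY voice kit')
```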

An Intel Movidius Vision Processing Unit assists the vision bonnet in the detection, recognition, and classification of objects. (Image source: Intel)

4). The Google AIY Vision Kit Uses a Convolutional Neural Network.

Image recognition allows machines to identify objects, places, people, and text in their surrounding environment. This is typically done with techniques such as optical character recognition (OCR), pattern or gradient matching, face recognition, and scene identification.

The Google AIY Vision kit's image recognition feature has been enhanced with a convolutional neural network (CNN). A CNN can support a large number of neurons, allowing large models to be expressed with tractable computation. Each successive layer of the AIY vision kit's CNN detects higher-level, more abstract features of an object. The AIY vision kit uses a 2D CNN to recognize images and a 3D CNN to recognize both the shapes and colors of an object.
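
To make the layer-by-layer idea concrete, here is a minimal 2D CNN sketch in Keras. It is a generic illustration, not the network that ships on the vision kit: the early convolutional layers respond to simple edges and textures, while deeper layers combine them into more abstract, object-level features:

```python
# Minimal 2D CNN sketch in Keras, for illustration only; this is NOT the
# network shipped on the AIY vision kit. Early conv layers learn simple
# edge/texture features; deeper layers learn more abstract object features.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),                 # small RGB image
    tf.keras.layers.Conv2D(16, 3, activation='relu'),  # low-level edges
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),  # textures and parts
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),  # object-level features
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),   # 10 example classes
])
model.summary()  # prints the layer hierarchy described above
```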

5). A Vision Processing Unit (VPU) Is Used for Object Detection, Recognition, and Classification.

The AIY vision kit uses a vision bonnet to enhance its camera with object detection and image recognition through classification. An Intel Movidius VPU onboard the bonnet assists with the object detection, image recognition, and classification processing. The VPU can work with a variety of image sensors for deep CNN classification of detected objects, and it provides gesture/eye tracking and recognition capabilities along with 3D depth detection. An inertial measurement unit (IMU), externally wired to the VPU, assists the AIY vision kit in detecting camera orientation for object clarity and recognition.
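
As a concrete example of running inference on the VPU, below is a minimal sketch modeled on the vision kit's face-detection demo. The aiy.vision module names come from Google's AIY Python library and are assumptions if your kit's software version differs:

```python
# Minimal sketch of on-device inference with the vision bonnet's VPU,
# modeled on the AIY Vision Kit's face-detection demo. Module names are
# from Google's AIY Python library and are assumptions if versions differ.
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

def main():
    with PiCamera(sensor_mode=4, framerate=30) as camera:
        # Run the face-detection CNN on the VPU, frame by frame
        with CameraInference(face_detection.model()) as inference:
            for result in inference.run():
                faces = face_detection.get_faces(result)
                print('Faces detected:', len(faces))

if __name__ == '__main__':
    main()
```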

Have you experimented with Google's AIY kits already? Let us know your thoughts on the kits in the comments!


Don Wilcher is a passionate teacher of electronics technology and an electrical engineer with 26 years of industrial experience. He’s worked on industrial robotics systems, automotive electronic modules/systems, and embedded wireless controls for small consumer appliances. He’s also a book author, writing DIY project books on electronics and robotics technologies.

About the Author(s)

Dr. Don Wilcher

Dr. Don Wilcher, an Electrical Engineer, is an Associate Certified Electronics Technician (CETa), a Technical Education Researcher, Instructor, Maker, Emerging Technology Lecturer, Electronics Project writer, and Book Author. His Learn Electronics with Arduino book, published by Apress, has been cited 80 times in academic journals and referenced on patents. 

He is the Director of Manufacturing and Technology at Jefferson State Community College. His research interest is Embedded Controls, Robotics Education, Machine Learning, and Artificial Intelligence applications and their impact on Personalized Learning, Competency-Based Models curriculum, and instructional development in Mechatronics, Automation, IoT, Electronics, Robotics, and Industrial Maintenance Technologies. He is also the Founder and owner of MaDon Research LLC, an instructional technology consulting, technical training, and electronics project writing company serving Electronics Marketing Media, Technical and Engineering Education companies.

