Amazon was invited to support the 3rd annual Makeathon, a 36-hour prototyping and design competition held annually at the University of Michigan in Ann Arbor.
The Makeathon aims to bring together the University’s brightest and most creative students and give them the tools they need to build something truly awesome. It sounded like a great event, so I signed up to head out to Ann Arbor and spend some time helping students build voice technology software using Alexa and the AWS platform. Spending the weekend working with students and their ideas was a great experience, and I hope to be invited back in the future. Here’s the story…
I headed over to the Duderstadt Center in Ann Arbor after work on Friday of the event. Being a hackathon, things were just getting organized around 5 or 6pm. Students could form teams of up to six to work on one of several topics: autonomous driving, product redesign, and voice technology. The red dot on my badge signaled that I was an “industry mentor,” available to help with projects. Within a few minutes, I was sitting with a team figuring out what to build over the weekend.
First of all, what to build? It can be equally exciting and daunting to dream up an idea on a so-called blank sheet of paper. I spent some time with the BlueBus team pointing them in the right direction. I think they had the right idea and perspective on the event: build something useful, but also learn more about the process of developing software. More on that later. “We wanted to create a skill that was simple, but really useful for students like us on campus,” said Shi Pu of the BlueBus team. So, what is one thing your average undergrad at UM does every day? Catch the bus to class!
Here’s a simple interaction:
User: “Alexa, ask blue bus, when is the next bus?”
Alexa: “The next bus arrives in 12 minutes; your walk to the bus stop is 8 minutes, so you should leave soon.”
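The “you should leave soon” part of that response is just a little arithmetic on the two times. As a back-of-the-envelope sketch (the thresholds below are my own assumptions for illustration, not the team’s actual code):

```python
def departure_advice(bus_minutes: int, walk_minutes: int) -> str:
    """Turn a bus arrival time and a walking time into a spoken
    recommendation. Thresholds are illustrative assumptions."""
    slack = bus_minutes - walk_minutes  # spare minutes before you must leave
    if slack < 0:
        return "You can't make the next bus; check the one after it."
    if slack <= 2:
        return "Leave right now to catch it."
    if slack <= 5:
        return "You should leave soon."
    return f"You have about {slack} minutes to spare."

# From the sample dialogue: bus in 12 minutes, 8-minute walk
print(departure_advice(12, 8))  # → You should leave soon.
```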
So, why not just use the web or the mobile app to find the next bus? After all, web and mobile interfaces have rich information about all buses on campus. It turns out that this use case is perfectly suited to a voice interaction:
– The user needs instant access to very specific information (the next bus at a particular stop)
– The information is not confidential or sensitive in nature (bus schedules are public)
– The interaction doesn’t require a lengthy conversation or a lot of details (we already have the physical location of the Echo)
“Once we got started, it was really easy to iterate quickly to develop both the intent schema and the API,” said Duoming Bian. The team chose to use the Alexa Skills Kit (ASK) CLI and to base their interaction on a simple template. That solved a major blocker for the team: how, exactly, to get started. For the API part, the team used the Chalice framework. After a few iterations and a suggestion from their friendly neighborhood Amazon Solutions Architect, the team of five was able to divide the voice interaction work from the back-end API work in Lambda. The small team worked efficiently by organizing the major pieces of work and iterating quickly.
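For reference, the intent-schema half of that split is a JSON interaction model managed through the ASK CLI. The fragment below is a rough sketch of what a skill like this might declare; the intent name and sample utterances are illustrative guesses, not the team’s actual schema:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "blue bus",
      "intents": [
        {
          "name": "NextBusIntent",
          "slots": [],
          "samples": [
            "when is the next bus",
            "when does the next bus arrive"
          ]
        },
        { "name": "AMAZON.HelpIntent", "samples": [] },
        { "name": "AMAZON.StopIntent", "samples": [] }
      ]
    }
  }
}
```

Keeping this model in its own file is what let half the team tune utterances while the other half worked on the Lambda back end.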
How does it work?
Most Echo devices (although not all) are installed in a fixed location. The BlueBus skill queries the University’s Bus Schedule page at https://ltp.umich.edu/transit/BB.php using the Echo’s physical address to find the nearest stop; the schedule is returned as HTML. The team’s Lambda function scrapes the HTML for the time of the next bus and returns it to the user via the Alexa Voice Service. Simple yet useful, and a solid base for future enhancements.
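The scrape-and-parse step can be sketched in a few lines. The real markup of the BB.php page isn’t shown here, so the HTML structure and the `eta` class below are made-up stand-ins; the team’s actual parsing logic will differ:

```python
import re

def next_bus_minutes(html: str) -> int:
    """Pull the next arrival time out of the schedule page's HTML.
    Assumes a hypothetical markup like '<span class="eta">12 min</span>';
    the real page will differ."""
    match = re.search(r'class="eta">\s*(\d+)\s*min', html)
    if match is None:
        raise ValueError("no arrival time found in page")
    return int(match.group(1))

# Stand-in for HTML returned by https://ltp.umich.edu/transit/BB.php
sample = '<div class="stop">CCTC <span class="eta">12 min</span></div>'
print(next_bus_minutes(sample))  # → 12
```

In production this would sit behind the Lambda handler, fetching the page with the request library of your choice before parsing.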
By the end of the weekend, the BlueBus team had a finished skill they could demonstrate to their peers and the judges. They were awarded first prize in the Voice Technology part of the competition! More importantly, the team earned some practical experience building a simple, yet useful voice technology product from the ground up.
Breaking down the weekend, I think the team was successful because they kept these simple tenets in mind:
1. Agree on a simple yet useful skill — “work backwards” from the customer or user’s perspective
2. Dive in and start building as soon as possible — show “bias for action”
3. Use a template to get started (ASK CLI trivia game)
4. Iterate quickly to make improvements as you go (ask deploy)
5. Divide and conquer where it makes sense (intent schema and Lambda function)
6. Avoid getting hung up on developing some splashy idea; start simple and iterate!
During our retrospective on the weekend, we pointed out the power of the working backwards process in product development. Werner Vogels, Amazon’s CTO, wrote about the process back in 2006: https://www.allthingsdistributed.com/2006/11/working_backwards.html. While we didn’t write a press release, FAQ, or user manual, the team did identify the customer, their scenario, and their expectations. And the team got on board with a shared vision of the final product, which is especially important in a time-bound hackathon.
Working Backwards, Bias for Action, Invent and Simplify, Deliver Results
The BlueBus team plans to publish the source code and deployment instructions on GitHub so that the skill can be used on other campuses. Personally, I look forward to working with the BlueBus team and the University of Michigan Center for Entrepreneurship again. Thanks for the invitation!
I think that basic coding and software development skills are critical for almost any career path in the future. Learning to code fosters problem solving, logic, and creativity. If you are able, I’d encourage you to help promote computer science skills in your community through a school hackathon, Hour of Code, or by otherwise serving as a mentor.