Open Source Voice Assistants – Joshua Montgomery, Mycroft – Voice Tech Podcast ep.013

Joshua Montgomery Mycroft

Episode description

Joshua Montgomery is the CEO of Mycroft, the open source voice assistant. Mycroft is a much needed addition to the range of big tech voice assistants on the market, offering a new level of data privacy and customisation.

Their Mark II device will be launched soon, and is available for pre-order now on Indiegogo. Josh describes how the Mark II improves over previous iterations, takes us through the technical stack, and explains how you can start developing voice applications for the Mycroft ecosystem today.

We discuss the many benefits of open source for voice assistants, including scalability, censorship, data security and developer support. Josh reveals why many companies are unwilling to use Amazon Alexa, and why open source is the answer to this problem. This is a thought-provoking episode with a leader in the voice assistant space – enjoy!

Links from the show:

Episode transcript


Powered by Google Cloud Speech-to-Text

Carl: Welcome to the Voice Tech Podcast. Join me, Carl Robinson, in conversation with the world's leading voice technology experts. Discover the latest products, tools and techniques, and learn to build the voice apps of the future.

Josh: Alexa, the data service, tracks website and web use. It's one of the largest data aggregation machines anywhere in the world, and they named their voice assistant after it. The always-listening microphone that they put in your kitchen is named after their data analytics platform. You think that's a coincidence?

Carl: Hello everybody, welcome back. Today's episode is called Open Source Voice Assistants. I'll be speaking to Joshua Montgomery, the CEO of Mycroft. Josh and his team are currently on a world tour around Europe in advance of the release of their Mark II device, which will be happening soon. Mycroft are a fantastic company whose open source products are a much-needed addition, in my opinion, to the range of big tech voice assistants currently on the market. During today's episode we talk about how the Mark II improves over the Mark I, Josh dives into the technical stack, we talk a bit about some of the integrations that are possible, and he explains how you can start using Mycroft today, without waiting for the Mark II to be released. He tells us how we can start building skills and contributing to the community. Then we get into the many benefits of open source, which for one thing makes Mycroft much more customisable, in ways that you just can't match with the other voice assistants, and also allows users to stay in control of their data.

Josh reveals why many companies are unwilling to use Amazon Alexa today, and why open source is the answer to this problem.

Quick plug: I've just released a new Amazon Alexa skill for the Voice Tech Podcast. It basically lets you hear the latest episode, or hear a list of available episodes to listen to. So if you want to listen to this podcast on any Alexa device, all you have to do is say "Alexa, Voice Tech Podcast", or go to voicetechpodcast.com.

OK, so now, on with the show. It's my pleasure to bring you Joshua Montgomery.

Carl: So I'm here with Joshua Montgomery, the CEO of Mycroft. Josh, welcome to the show.

Josh: Thank you very much.

Carl: So you're on a world tour at the moment?

Josh: Yeah, we're visiting 26 countries over the course of the year to talk about bringing international language support to this open platform, and to meet investors and customers. We're having a great time.

Carl: You and your co-founder have been meeting the press and the developers, drumming up support. What is it that you're looking to bring attention to right now?

Josh: One of the failings of the big tech voice assistants is that they support very few languages.

So unless your language is spoken in a country with a huge economy, or has hundreds of millions of speakers, in many cases they're ignoring it. There are seven thousand languages spoken globally, and the big tech companies are supporting anywhere from 40 to 80 of them, which means there are 6,900 languages out there that still need to be captured for voice assistant technology. So we're out meeting people who speak languages like Catalan and Welsh and Gaelic, and working with them to start capturing those.

Carl: Mycroft comes up more and more as people talk about voice assistants, and there's a lot more interest in voice assistants in general. What is Mycroft, what's your goal, and what distinguishes it from the other voice assistants on the market?

Josh: Mycroft is a community that's building an artificial intelligence that runs anywhere and interacts just like a person.

Mycroft the company is a US-based company that's building the financial support for the project, really, but at the end of the day our goal is to have a computer system that, when somebody talks to it, interacts so naturally that you can't tell whether you're talking to a human or a machine. It used to be that when I would bring that up I would get a bunch of eye rolls, and I probably did just get a bunch of eye rolls from your audience. But then Google Duplex did their demo a few months ago, where their voice assistant made an outbound call to a restaurant, and they had another one make an outbound call to a hair studio, to book a table in one case and a hair appointment in the other. When they did the demo on stage, the person who answered the phone had no idea that they were speaking to an AI and not to an actual human being on the other end of the line. It's very clear that that is really the future of this technology.

It won't be just narrow domains like booking an appointment; it'll be a very broad range of topics that you'll be able to have a natural conversation with a computer about, and have the computer solve real-world problems for you. That's really where we're headed: we're building the Star Trek computer, or Jarvis from Iron Man, something that inhabits your digital life, that you touch from the very first thing in the morning when you wake up and ask it what time it is, all the way to bedtime in the evening when you set the alarm for the next day.

Carl: That's a great vision, and I'm completely on board with it. Tell me a bit about the team then. You guys are based in Kansas, is that right?

Josh: Yeah, the team is mostly in Kansas City. We do have developers as far away as Fortaleza, Brazil, Melbourne, Australia, and Stockholm, Sweden, so we really have a global team, and then of course we have developers in our open source community from 50 or 60 countries contributing in one way or another.

One of the other reasons for doing the world tour is that it's really easy as an entrepreneur to get wrapped up in Silicon Valley, or wrapped up in Kansas City or New York, and to spend all your time there, and really only have a view of the market from your local area. By getting out there and meeting people all over the world, and talking to them about what they want these technologies to do, you really broaden the field of ideas that are available to the company, and of course you begin to build a global brand. Both of those are important. The big tech companies have, in some cases, a very diverse workforce in terms of skin colour, but they have very few women working for them, and because they're based in Silicon Valley, the problems that they focus on are the problems that a 20-something or 30-something developer making $175,000 a year has.

They're not focused on solving the problems of the everyday person who might live in another city, at a different income level, and they certainly don't focus on women's problems.

Carl: For sure, and we can get on to that. One of the things that distinguishes Mycroft is that it is the voice assistant of the people: it's owned by the person, as opposed to owned by big tech, and we can talk more about that. It's available right now, so how can people use Mycroft today?

Josh: We've got a couple of different platforms that we support. The vast majority of people who use Mycroft do so on a Raspberry Pi. We have a Raspberry Pi image that they can simply flash onto an SD card and plug into the Raspberry Pi, and Mycroft comes right up, using just an off-the-shelf microphone and speaker.

You can also buy a Google AIY kit, which last time I saw was like $7 or $12 on Amazon. It includes the sound card and the microphone pickup, and it all comes in a little 3-inch by 3-inch box, so it's basically a complete voice assistant: it turns a Raspberry Pi into a little boxed voice assistant with a button on the top. Google built it for Google Assistant, but of course that same box runs Mycroft, so somebody who's interested in playing with the open source technology can buy one of those, flash Mycroft onto it, and immediately have a full working voice assistant for like $40. The other thing people can buy, of course, is a Mycroft Mark I. Those are our little smart speakers, a cute little, almost cartoon-character enclosure.

It's really designed for makers and hackers. The idea is that you can open that box up if you want. It's got a bunch of I/O: a bunch of USB ports, Ethernet, RCA out so you can plug it into a stereo system, and it's designed so that people who want to build on top of the Mycroft platform can have access to all of that I/O and all of those other resources as part of the voice assistant. We sell those on our website. Then at the end of this year we're shipping, well, working on, the Mark II. The Mark II is really designed for the consumer market, so it's designed to go in Mom's kitchen so that she can get recipes. The Mark II has a screen on the front of it, so it can do things like display timers and recipes and show you instructions. It's got a camera on it, and eventually we will probably support video chat through that. It's also got a lot better sound quality than the Mark I: a great resonant chamber, good bass response. It's designed to be a really high-quality, genuinely consumer-ready device.

Carl: On the Indiegogo page I was comparing what's coming and what's available now. All of the Mark II stock is on pre-order at the moment, and the Mark I is already sold out?

Josh: The Mark I appeals to makers and hackers, people who want to open the box up and get at the guts of the speaker, but the Mark II really is consumer-ready, very polished, and available in December. I've been pushing really hard on that date; there's a little bit of slip built in there, but our goal is to get it into people's hands by Christmas.

Carl: That's wonderful. It would make a great Christmas present for anyone interested in voice assistants. Looking at the specs on the Indiegogo page, it's better in every way: I really like the size, great sound, and a touchscreen where you can include all the widgets, robot faces and so on.

Josh: I don't know if we're supporting multi-touch or not; I need to look into that, I don't know if the design team has made that decision. But for everything else it's tap, touch, and swipe left and right. The team we're working with on the visual display is from the KDE community; Blue Systems is the name of the company. They're building a really great visualisation abstraction for the Mycroft stack that supports not just the Mark II screen, but screens all the way down to a little one-inch screen on a watch, and of course all the way up to a big-screen TV. They've worked out some really great methods for swiping back and forth between cards, to view data and visualise it in a very natural way. I saw the first demos of that last week and was super impressed by the work that team's doing, and I'm expecting to see some really cool things on the visualisation side when the Mark II ships.

And then of course the Mark III comes out, and we'll start talking about that a little bit more next year. That one has a 10-inch screen, so it's more of a device that sits in front of you. It's already underway, in the sense that the really expensive part of building a smart speaker is building the PCB, the printed circuit board that has all the chips and everything on it, and then building driver support and integrating everything into one experience. Once you have that board you can put it in a variety of different enclosures. The Mark II has, I think, a four-and-a-half-inch screen, and it's really designed to be an equivalent of an Amazon Spot. Amazon has a little speaker called the Spot that, last time I checked, retails at $129. It has a 2-inch screen and a much smaller resonating chamber, but apparently it's been selling really, really well. The Spot hadn't shipped when we started building the Mark II, but that was really the same target that we were after.

Only in our case it's a much bigger screen, a bigger resonating chamber, better sound quality. We have a quad-core Xilinx processor in there with an FPGA, which is a mammoth processor for this type of appliance, and we're using the latest generation of DDR RAM, so it's a much more capable computing system. The idea is to provide a really high-quality experience for the Mark II, and then to be able to take that same board and put it in the Mark III with a bigger screen and more speakers, and then of course in other places where people want to use it: kiosks in retail environments, navigation systems, wayfinding in stadiums, and other applications where people can just ask for assistance. And our hardware is completely open, so once we ship the Mark II, anybody can build that PCB without any licensing fees to us.

We've also made the decision to open up the back end, the server infrastructure that supports the roughly 20,000 Mycroft developers and users right now. All of that will be published as well, so that companies and individuals can set up their own complete Mycroft instance, totally independent of us, and build their own software on top of it. There's a lot of demand out there. I've met with a lot of big companies, European companies and companies in the United States, that want to deploy this technology and are rightfully very, very suspicious of big tech. It looks like there's a hardware event coming out of Seattle in the near future; the published reports are saying there are going to be seven or eight new devices that support Alexa, and looking through the list of devices they're supporting, I couldn't help but notice they're shipping devices that directly compete with their partners.

They've had partners build automotive versions, little in-car devices that run their voice assistant stack under the partner's own brand, and now all of a sudden Seattle is shipping one under Amazon's brand. I think companies need to think very carefully when they're deploying that stack, because if you're using a big tech company's stack, they have visibility of every single interaction with that device, and between that device and your skill, so they can learn a lot about your business and your customers and how they're using this technology. Then, when they decide to build a competing product, they have the benefit of all your information.

Carl: OK, for the Mark II, before we move on, I wanted to ask: is it English-only, or is support for other languages already there, or is that a work in progress?

Josh: It's a work in progress. I do believe they've got some version of German working at some level; I'm told the user experience is a bit rocky. One of the things I'm doing on this trip is building communities in all these cities and countries we visit.

The goal is to help us generate the data that's needed to support local languages. To build a broad automated speech recognition system you need thousands and thousands and thousands of hours of accurately tagged audio, and that's a real impediment, especially for small companies building this type of stack. So we've been working very closely with the team at Mozilla, the DeepSpeech team based out of Berlin, led by Kelly Davis. We've been aiding that team, and they're doing a lot of the heavy lifting on the machine learning and the development of the DeepSpeech side. They're also doing a great job of acquiring data in various different languages. Anyone listening to this podcast who wants to be part of that can go to Common Voice and select their language. You can either read phrases into the machine or validate other people's phrases. You can contribute as few as five minutes to the project, and you don't have to create an account or anything.

Or you can contribute hours and hours and hours if you choose. That's helping to build the dataset, first in English, which was the first language, and then also in German and French and Welsh, and there are a few dozen other languages being launched right now.

Carl: It's great to hear how that technology is being used. Another one of the many benefits of open source is this free sharing of data between organisations, and we'll go into that a little bit more in a moment. On the Mark III, and I'm sure the Mark II will do fine, did you plan to do a crowdfunding campaign like you did for the Mark II and the Mark I?

Josh: I've seen financial models that include a crowdfunding campaign and I've seen models that don't. Crowdfunding campaigns don't really fully fund a project like ours.

The cost of getting this technology to market oftentimes outpaces the proceeds of the crowdfunding campaign. For example, with our Mark I, by the time we were done on Indiegogo, I want to say it was like $192,000 that we raised, but we spent nearly two million dollars delivering those speakers to people, because of course we had all the non-recurring costs of setting up the back end. You use the same software for the Mark II as you use for the Mark I, and then you improve it a little bit and so forth, but when you do these crowdfunding campaigns you really write a check that you have to cash at a later date. I think one of the other challenges there, and we're starting to see this more and more, is that people bring good ideas to crowdfunding campaigns and then find out they're being manufactured in Shenzhen faster than the company can deliver the product. There was one that was a mobile phone case that was extendable.

The guys had shown the plans, and before they could even get it to market it was already on sale on Amazon or wherever. So there's something to be said for what's called keeping your powder dry. I think it's actually one of the things that contributed to the failure of Jibo, which was a huge project out of the US. They ended up raising north of 90 million dollars to go out and build that social robot voice assistant, but at the end of the day the assistant that they shipped had a bunch of moving parts, which meant it was a $900 device, and it didn't perform as well as people expected either. There were some challenges: music is the second largest use case for a smart speaker, so if you have a smart speaker that doesn't play music, what do you use it for? So they had some challenges there. But also, from the time they did their crowdfunding campaign to the time they delivered was several years.

During that time a lot of imitators stood up. So I think there's some danger in doing those crowdfunding campaigns. To answer your question: I'm undecided. We do have plans to do it, but we may not pull the trigger on those, depending on where we are as a company in the spring.

Carl: OK listeners, listen up. I've just released a new Amazon Alexa skill for this podcast. It lets you hear the latest episode, or hear a list of available episodes and pick a specific one to listen to. All you have to say is "Alexa, Voice Tech Podcast", or if you want to read the description, go to voicetechpodcast.com. Let me know what you think, and look out for a guide on my blog soon that will explain exactly how to build your own skill just like that.

Carl: OK, so getting into the technical side of things, could you walk us through the technology stack at Mycroft, explaining how each part works and some of the things you can do that you can't do with the other voice assistants?

Josh: Sure. I did want to step back one step to the Common Voice stuff. On the Common Voice project that we're working on with Mozilla, people can go to the website at voice.mozilla.org, I think it is, and donate. But the side effect of the way that data has been collected is that it's very clean data: people who contribute to Common Voice are sitting near a microphone, usually in a quiet environment, reading from a screen, which has its own special cadence. Benchmarking the machine learning models that were coming out of that, they really weren't very accurate for real-world use cases where you're in the kitchen, the dishwasher is on and there are dogs barking in the background.

They had a lot of trouble in noisy environments. I think the first time we donated data over there was about two months ago, and we just sent another dataset over yesterday. That's data from people who opted in within our ecosystem. At Mycroft we actually don't collect any data on users unless they explicitly make a decision to donate their data to improve the technology, and about 15% of our users make a decision to trust us with their data, to use it to improve the stack. So we've been taking real-world data from Mycroft instances all over the world, packaging it up and sending it to the Common Voice project as our contribution. We get to use the resulting models, but of course they get big piles of real-world data, and that really is Mom with the kids in the background, in a noisy environment, with music and cars, which will hopefully continue to improve the accuracy of those models.

So even by simply using Mycroft and choosing to opt in, somebody can contribute to the overall project.

When we first got started, the first demo of Mycroft was May 15th of 2015, and we actually have a video of the demo working for basically the first time. We used everything third-party. For wake word processing we would just listen to ten seconds of audio and then send it off to see if the phrase was in there; it was really terrible. For automated speech recognition I think we were using IBM at the time, we used Wit.ai for intent parsing, and then we used, I think, Google's TTS or maybe eSpeak for speech synthesis. So we kind of cobbled together a really broken user experience, just to see if it was possible.

Because of course, at the time, the only widely deployed voice tech was Siri, which is locked up inside Apple, and Amazon was still in closed beta, so we knew nothing about the Amazon Echo; we'd never heard of it.

Carl: Hats off to you for trying.

Josh: Yeah, so we're building this thing from scratch, the idea being that we can build this conversational assistant that eventually will be so natural you can't tell it's not a person, but at the time it was just really, really terrible. We've since gone ahead and developed open source solutions for each step of the process, which were initially based on existing open source projects and have now morphed into projects based on machine learning. For wake word processing we originally used PocketSphinx, which came out of Carnegie Mellon University. PocketSphinx is good at recognising phonemes, the various different pieces of speech, but it's not really very accurate. What we found with PocketSphinx is that it was pretty hit or miss.

If someone said the wake word, maybe 20% of the time it wouldn't hear it, and the remaining inaccuracy came from it inadvertently hearing other words, so sometimes it would just randomly trigger for no apparent reason, or somebody would say "Microsoft" or "Minecraft" or something similar enough that it would trigger the wake word. So we used the PocketSphinx data to generate enough data that we could start training a machine learning algorithm, which we've now deployed: it's called Precise. Precise uses a fairly sizeable dataset, about a hundred thousand utterances that our community has tagged as containing the phrase "Hey Mycroft". We also went ahead and recorded dozens and dozens of hours in bar rooms and other noisy environments, so that we could have examples where there is no instance of the wake word, and it's a very, very accurate system. Precise on our Mark I actually burns an entire core of the Raspberry Pi.

The Pi 3 is a quad-core processor, so one entire core is dedicated 100% to wake word processing. On the Mark II we're moving that to the FPGA, so it runs on silicon and is very, very accurate. Companies or individuals can train their own Precise model with fairly little data: 50 or 60 utterances can get you started, and then you improve it as you collect more examples of the wake word being spoken. You won't get high accuracy with just 50 or 60 utterances, but you can get to like 85 or 90% accuracy with them, and then adding women and children, people who have different voices at different levels of the register, improves the accuracy further. Just to get something started that works, it takes a very, very small dataset.
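To make the Precise piece concrete: Mycroft publishes the wake-word engine as mycroft-precise, and a trained model can be run outside the full stack via the small precise-runner helper. The sketch below is illustrative only; the engine binary path and model file name are placeholders, and the package details may have changed since this episode was recorded.

```python
# Minimal sketch of running a trained Precise wake-word model with the
# precise-runner helper (pip install precise-runner). Paths and model names
# are placeholders, not a drop-in Mycroft configuration.
from time import sleep

from precise_runner import PreciseEngine, PreciseRunner


def on_wake_word():
    # In Mycroft itself, this callback is where the listener would start
    # recording the utterance that follows "Hey Mycroft".
    print("Wake word detected!")


engine = PreciseEngine("precise-engine", "hey-mycroft.pb")  # engine binary + trained model
runner = PreciseRunner(engine, on_activation=on_wake_word)
runner.start()

# Keep the process alive while the runner listens on the default microphone.
while True:
    sleep(10)
```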

Carl: And this isn't transfer learning we're talking about, where some kind of pre-conditioning happens and the utterances provided by the company complement the utterances you already have in your huge database, and both are used in the model? Is the company really training from scratch, or is it one of those ML models where 80% or 90% of the work is the basic framework and you just add the last layer?

Josh: I do believe those models are being trained from scratch. Anyway, people use it for use cases that are kind of unique. For example, we partnered with a company in Kansas City called Sickweather, Graham Dodge's company. What Sickweather does is use publicly published social media data to track and predict illness. They can see a flu outbreak, or an outbreak of some disease, in a community simply by monitoring the Twitter feeds in that community.

Carl: That's fantastic.

Josh: And so with Sickweather we're looking at a project where we place sensors in public transit in the Kansas City area that are tuned so that they don't detect any speech at all, they're not recording anything; all they do is detect coughs. We can put those in a bus or a train and tell how many people on that train are coughing, and build a baseline set of data to determine the level of disease in the local community. That can then be used, for example, to drive spending on advertising initiatives that encourage people to wash their hands, things that have been shown to mitigate outbreaks of disease. So that's a use of that layer of the stack outside of just waking up a voice assistant.

Carl: So that's the wake word layer. The next layer of the stack: you say the words "Hey Mycroft", that wakes up the listener, and then it records audio in the room either until it detects silence or until a timeout?

Josh: I think the timeout is like 15 or 20 seconds, but usually it uses silence detection. As it's listening for the wake word it's doing a background analysis of the noise in the room to figure out what the noise baseline is. When it hears the wake word it wakes up, listens until the background noise goes back down to that same level, then takes that audio and sends it to a cloud service where we convert the audio from raw data into text.

Carl: That's where the Mozilla DeepSpeech model comes in?

Josh: Right now that's actually the secondary way we do it. When we originally rolled DeepSpeech out, it didn't have enough accuracy for people to have a good user experience, and there's kind of a limit to the amount of patience people have. So right now all of that audio data goes into a bucket, and we send it out to another STT service for validation, to make sure it's accurate.
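As a rough illustration of the end-pointing Josh describes (learn the room's noise baseline, then record after the wake word until the level falls back to it or a hard timeout expires), here is a toy sketch. It is not Mycroft's actual listener code; the mic object and thresholds are hypothetical.

```python
import audioop  # standard-library helper for RMS energy of raw PCM frames


def record_utterance(mic, baseline_rms, max_frames=300, margin=1.5):
    """Toy end-pointing: keep frames until energy falls back near the room baseline."""
    frames, heard_speech = [], False
    for _ in range(max_frames):              # hard timeout, e.g. ~15-20 s of audio
        chunk = mic.read()                   # hypothetical mic object yielding 16-bit PCM
        frames.append(chunk)
        rms = audioop.rms(chunk, 2)          # energy of this frame (2 bytes per sample)
        if rms > baseline_rms * margin:
            heard_speech = True              # the user is still talking
        elif heard_speech:
            break                            # level dropped back to baseline: stop
    return b"".join(frames)                  # raw audio to hand off to speech-to-text
```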

The audio doesn't come from the IP address of the person using the speaker, so there's no way to identify who the speaker is; it basically just comes from Mycroft the company. But in the next several months I think we'll move that straight to DeepSpeech, with nothing leaving the corporate perimeter. So the DeepSpeech machine learning model takes the audio, turns it into text, and sends it back. From there it goes to our NLU engine, which is called Adapt; NLU stands for natural language understanding. Adapt uses two approaches to try to figure out what the person using the technology was trying to say. One of them is a known-entity, rules-based approach. Known-entity rules are good for things like "turn the lights on in the kitchen", because Adapt has a long list of all the various objects you can turn on, lights and toaster and so on, whatever vocabulary you've already pre-defined, plus "on" and "off" for toggle positions, and "kitchen" of course is a location, as are the garage and the living room and other places. So it looks at those lists and says: ah, this person is trying to toggle an object in a location, that's probably an IoT query.

It assigns a probability to it, selects the IoT skill, and passes the extracted data on to it. The IoT skill gets a little data structure that says object: lights, location: kitchen, toggle position: on, and then it goes out to wherever, If This Then That or SmartThings or whatever the user's got set up, and toggles that IoT device.

Carl: I was going to ask about the integrations. So IFTTT is a possibility, you can just fire off any IFTTT action to make things work?

Josh: I know people have been playing with IFTTT; that's definitely something we plan to support when we go to production.
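To make the Adapt flow Josh describes concrete, here is a minimal sketch of a known-entity skill in Mycroft's Python skill framework. The intent name, vocabulary names and dialog file are made up for illustration; treat it as a sketch of the pattern rather than a drop-in skill.

```python
from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_handler


class KitchenLightSkill(MycroftSkill):
    """Illustrative known-entity (Adapt) skill: 'turn the lights on in the kitchen'."""

    @intent_handler(IntentBuilder("ToggleDeviceIntent")
                    .require("ToggleKeyword")   # vocab file listing "turn on", "turn off", ...
                    .require("Device")          # vocab file listing "lights", "toaster", ...
                    .optionally("Location"))    # vocab file listing "kitchen", "garage", ...
    def handle_toggle_device(self, message):
        # Adapt hands the skill a small data structure of matched entities,
        # much like: object = lights, location = kitchen, position = on.
        device = message.data.get("Device")
        location = message.data.get("Location", "here")
        state = "on" if "on" in message.data.get("ToggleKeyword", "") else "off"
        # A real skill would call out to IFTTT, SmartThings, openHAB, GPIO, etc. here.
        self.speak_dialog("confirm.toggle",
                          {"device": device, "location": location, "state": state})


def create_skill():
    return KitchenLightSkill()
```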

There's also another open home, open source stack for IoT hubs, and I think I totally misnamed it, it just went right out of my head, so you can send commands out to openHAB as well. Or, if you're using a Raspberry Pi, you can simply toggle GPIO on the device itself, so you can flip a digital output, turn on a light, or do whatever you want to do on the actual hardware.

The other way that we process natural language understanding is using machine learning. Although known-entity rules are great for "turn the lights on in the kitchen", they're terrible for "make it brighter where I cook". A person would be able to interpret from that phrase that what you really want is to make the kitchen brighter by turning the lights on there; a rules-based approach is just incapable of that. For that case we also have a series of example phrases for each skill that are fed to a machine learning algorithm called Padatious, which then tries to extract the meaning based on past phrases that have triggered that skill.
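The example-phrase side looks roughly like this: a Padatious .intent file full of sample utterances, plus a handler that receives the extracted entities. The file name, phrases and entity below are hypothetical placeholders, not an actual Mycroft skill.

```python
# vocab/en-us/make.brighter.intent  (example phrases fed to Padatious):
#   make it brighter where I cook
#   it's too dark in the {room}
#   brighten up the {room}

from mycroft import MycroftSkill, intent_file_handler


class BrightnessSkill(MycroftSkill):
    """Illustrative Padatious skill: meaning is learned from example phrases."""

    @intent_file_handler("make.brighter.intent")
    def handle_make_brighter(self, message):
        # Padatious tags entities such as {room} and passes them in message.data.
        room = message.data.get("room", "the kitchen")
        self.speak_dialog("brighter", {"room": room})


def create_skill():
    return BrightnessSkill()
```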

Carl: OK, so Adapt is tried first, and if it can't handle the query, or the probability it assigns is too low I guess, then it's passed to the machine learning. And that's a supervised machine learning model, so it has to have seen examples, which I guess are provided by the skill developer today?

Josh: Today they are, and I'm glad you brought that up, because in the future they won't be. We've built a tool that we call Persona. What Persona does is grab all of the missed intents, just from people who opt in, not from people who haven't chosen to share their data. So if you ask Mycroft to do something it can't currently do, that query gets fed to Persona, and we're taking a very Wikipedia-like approach to processing that data. The Persona tool allows our community to log in to the system.

They can see all the queries that came through that the system couldn't make sense of. Some of those actually get tagged as gibberish, but if a query was supposed to trigger a skill, the community member can say, ah, this was supposed to trigger the IoT skill, it's an IoT skill failure, and eventually they'll be able to highlight the entities and feed them into the machine learning database, so the next time somebody makes that query it triggers the appropriate skill. We'll probably do the same for search. Search is actually the number one most used feature of a voice assistant, people asking things like "how tall was Abraham Lincoln", things like that. That may change over time, because I think at some level it's being driven by the novelty of the system, so people ask ridiculous questions and it responds with these kind of simple answers.

Many of those searches that didn't hit right, that didn't get an answer, will be passed to Persona, and then the community comes up with an answer. That becomes a really intriguing system, because it allows our community to develop subjective answers to questions. Take "what's your favourite colour?": in Siri's or Alexa's case, the assistant's favourite colour is going to be whatever the company that owns it says it is. In our case the community can set the favourite colour for Mycroft, but eventually we'll also be able to fork and replicate those personas, to create personas that have custom personalities or are customised for a region, so the favourite colour in China might be red but elsewhere it might be yellow, and to create answers for more sensitive questions that are culturally appropriate. One of the challenges the voice assistant companies have is that the proper answer to a question about dress, for example, might be very different between New York and Riyadh.

So how do you create an experience that gives an appropriate answer to "what do you think I should wear today?" In one case it's probably appropriate to wear a blouse and a pair of slacks, and in another case it's appropriate to wear a hijab. How do we create a user experience that is able to span those two communities? The answer is by taking Persona and forking it, and allowing each community to develop their own standards for their culture, in a way that's appropriate for them.

Carl: There are two things I want to say to this. Firstly, because it comes from the community, it's more trustworthy: it comes from your community, the people around you. And then also, I guess, this kind of correction and refinement can be applied at different levels; it can be at the regional level, at the individual level, or by a company personalising the voice assistant it deploys.

Josh: Right. I like to take T-Mobile, the phone company in the United States, as an example. T-Mobile has a reputation for being a bit on the edge in the way they interact with customers, and their CEO is known to be a bit of a maverick, so they could create a persona for the company that really appeals to a younger demographic in the United States. They can deploy a voice assistant at T-Mobile that represents their brand, one that for certain questions gives you a kind of edgy answer a normal corporation wouldn't give you. At the same time, corporations that are more staid and conservative, General Motors for example, might give you a very straight, very corporate answer to the same question. And then of course, because of our skills abstraction, which we'll get to in a second, companies can expand the skills to include crazy stuff like being able to rap alongside Eminem, which is one of the skills Mycroft has.

Or they can limit it to just the skills that represent the services their company provides. So I think that to achieve our goal, the Persona tool is actually probably the most important thing; everything else is kind of in support of it. The Persona tool is really where people will eventually build dialogue strings to be ingested too, so that you can actually have a back-and-forth dialogue. Persona is where we collect all of the subjective answers that a voice assistant really needs to have, and then, through partnerships with organisations like the Wikimedia Foundation, we'll eventually plug real-world datasets into the back end, so your voice assistant should know immediately what's going on. As of today, the Kavanaugh Supreme Court approval hearings are going on in the United States, and it should have relevant, up-to-date information about yesterday's news.

The Wikidata community is really, really good at keeping all that data up to date, to the point where every voice assistant globally could use it. Being able to take that and ingest it into Persona in a structured way will really give us the ability to create a voice assistant that is realistic, that has a great subjective personality, and that has up-to-date objective information to answer questions for the community.

Carl: Absolutely. So it's got the right information, and it's being said in the words the company is happy for it to be said in, in the voice of the company. And the text-to-speech, I guess, at some point that would be customised as well, so it would actually sound like the brand of the company. Tell us a bit about the text-to-speech and the plans for that.

Josh: The engine that we built initially is called Mimic. It was built, once again, on great technology coming out of Carnegie Mellon, but it used a parametric approach to building speech.

It's almost concatenative: it takes various different phonemes and glues them together to create a word. It wasn't unintelligible, but it is very robotic, and because of the way the data was processed you had to read the phrases in a very flat tone. So the team has been working on a new model called Mimic 2, and I think we just published the new models two or four weeks ago. I don't think they're the default yet; I think if you're a Mycroft user you can go to Home and select the Mimic 2 model. That one is based on machine learning. We had our intern, Kusal, who was the only person at the whole company who had enough time to do this, sit down for basically a week in a sound studio and record what amounted to about 16 hours of clean audio. We learned a lot during that process; the next voice that comes out will probably be better, because we learned a lot about how that data needs to be acquired.

We then took that, fed it to a machine learning engine, and now we can synthesise voices that have all of the inflection and clarity you would expect from an actual person. Over time we'll be able to take that same dataset, improve the machine learning side, and get closer and closer to the original speaker. So I would anticipate that within a year, maybe two years at the outside, we will be able to collect data from an individual speaker and basically mimic their voice to the point where it's very difficult to tell it's not that person.

Carl: I think that's fantastic, and possibly with less than 16 hours' worth of data as well.

Josh: So that's the speech synthesis piece. Over the course of the last three years, really, most of the time we've spent has been bringing all of the various pieces of the stack in-house.

Then we started to shift them from a heuristic approach to machine learning, and then published them and made them available to the community so that we can start acquiring data from donors. This next year, really, the entire company is focused on UX, on user experience, so that when we go into production in February of next year, the review we're looking for from technology reviewers is, and I can almost quote it: "Although it doesn't have all of the bells and whistles and features that Alexa and Assistant have, for people who care about privacy, Mycroft is a great alternative voice assistant." That's really the review we're working towards. We're not going to have the ability to make phone calls through the system; there are a bunch of things that we just aren't going to get, because we didn't have the money to go and buy GrandCentral, the phone company, and we don't have thousands of people working on this like the big tech giants do.

In Amazon's case they bought Ivona, the speech synthesis company; in Samsung's case they bought Viv; in Google's case they bought the technology behind Assistant. So they are benefiting from years and years and years of research from a lot of different companies, and from a portfolio of acquisitions. In our case we're really building it from scratch. But I think at the end of the day, and it's probably not sooner than 12 months, but I would be really surprised if it was longer than 12 years, that user experience will gradually work towards a point where you really do have trouble, when you're talking to the system, figuring out whether it's a person. That's really where we're headed, and we want to take that experience and democratise it, so that anybody can deploy it, from a kid building a STEM project at their high school, who should be able to use that experience to control their robot.

All the way up to a multinational corporation with manufacturing facilities all over the globe, which can deploy a voice assistant to answer the phones in their corporate headquarters and interact with their customers in their cars.

Carl: That makes sense. I mean, if you want a voice assistant that sounds as realistic as possible, in as many contexts as possible, all around the world, you need to enlist the help of people all over the world and from all walks of life, and I can think of no better way of doing that than this open source way of doing things.

Josh: One of the other pieces of the stack that we really haven't talked much about is the back end at home.mycroft.ai. When you get a Mycroft smart speaker, you attach it to our back-end server. That's where we do account management, that's where we host the credentials for your Spotify account, for example, and that's where we allow our community to come in and help to tag queries for Precise and for DeepSpeech.

Initially the plan was to keep that piece of the stack secret, that it would be the secret sauce, and if you bought a device you would use our back end. But the more we thought about it, the more we came to the conclusion that it makes sense for us to take that back end and make it public, to allow companies to be totally independent from what we're doing, and to do that under an open licence that requires those companies to contribute improvements back to the community. So we're taking some steps in the near term to license that stuff under an LGPL licence and to publish the entire system, so that if you're a carmaker, or a speaker maker, or a retailer, and you want to deploy the Mycroft stack in support of your own application, you can do that totally independently: grab our back end, have a manufacturer build our open speakers, and use our open software to deploy it at will.

Use your own voices, create your own user experience, and don't pay us a penny. The benefit to us is that you can achieve scale that way, and I think the scale factor that brings to the community outweighs the walled-garden approach to monetisation, so I'm encouraging people to go ahead and adopt it.

Carl: Let's talk about that in a moment. I wanted to ask about the skills available for Mycroft at the moment.

Josh: We've been working on a new skills store that has a really good web wrapper around it, which effectively communicates what all the skills do and lets people install them not just through voice, which is the way people install now, but by getting on the website, seeing everything that's in there, and installing it to their devices. I don't know when that publishes; it's near term. We do a release every two weeks.

It might even come a week from Thursday, which would be the 25th, but I would be very surprised if it wasn't published by the end of October. Anyway, in the current environment we have about a hundred and fifty community-developed skills that have been contributed. Some of those are really, really handy. The Remember The Milk skill is a shopping list skill; it was very well put together by the developer who contributed it, and it uses a lot of our back-and-forth dialogue features, so you actually have a conversation with the AI as you're setting the skill up. That one's really useful and handy. Then there are other things, like rapping alongside Eminem.

Carl: That shopping list is one I want so much. For ages I've been trying to set up a text-message-to-Asana shopping list through IFTTT, and it's a bit clumsy, it doesn't always work perfectly, and I kind of gave up in the end. I'm going to check out the Remember The Milk skill.

Josh: It really is a community skill, it's really handy, and it was adopted pretty early on. Then there are less useful skills: one to tell Chuck Norris jokes, that's another one.

Carl: Right, everybody needs those from time to time.

Josh: So there's everything in between. Most developers who are contributing to the stack are contributing a skill of some kind or another. In many cases it's kind of a proof of concept for them; in other cases we've seen big companies doing integrations with inventory systems and other things that are kind of interesting, and everything in between. All those skills are open source, so the goal is that when somebody wants to deploy the Mycroft stack, they can grab skills from the skill store and deploy whichever ones fit their need. Then, interestingly, one of the other things we've been giving a lot of thought to, and I think it's coming down the pipe, maybe next year, maybe a little longer, is this.

We encourage people who use Mycroft to make a $2 monthly donation as part of their account when they sign up at home.mycroft.ai, and eventually we're going to add some value for the folks who choose to pay. So extra voices, probably; there may be a music streaming service coming that's attached to it, so you can pay us and get access to a streaming catalogue; and there are a few other places where we think we can add value for the people who use it. One of the ideas that we've had, and that we plan to pursue, is the ability to take those monthly payments and dish out a certain percentage, probably a significant percentage, to the skills that are being used. In an interesting way this builds in some strong incentives, because it really incentivises people to build skills that are useful.

And the skills that are useful are "What's the weather?", "Add this to my shopping list", "Set a timer", "I need an alarm", whereas the esoteric stuff, like "Hey, let's play Jeopardy", is a much less used skill. So the goal there is to incentivise community members to develop skills that are genuinely useful, that people use, that are making people's lives better, and then, having built the skill once and maintained it over time, to keep passing that developer the funds that are necessary to go ahead and maintain it.

Carl: I had a couple of questions on the skills. Because it's open source, because it's not locked down like some of the big tech platforms, I guess that means the range of possible applications that can be developed is wider and more diverse, possibly. Is that the case? And if so, is there a question of how these skills are being moderated? Is there some filtering process?

Josh: It's four community members and one company employee, so it's mostly community driven. They're responsible for approving skills for the Mycroft brand, so for speakers that carry our brand, and for people who use a home.mycroft.ai account, we're going to approve or deny skills based on what best represents the brand. There's not going to be a white nationalist Adolf Hitler fan skill in the Mycroft store. If some crazy person out there wants to make a skill that, say, is the Leonardo DiCaprio fan skill, something that we wouldn't necessarily support, sorry Leo, they can go ahead and build that specialty skill, build their own back end, and build their own brand around it. So it's access to the app store that would be moderated.

You can build any app you like on the platform and host it yourself; whether it would be available through the official voice assistant app store is a separate question. They can even ship their own device under their own brand: go ahead and do it. I think there's a limited audience for that, and I think that as we continue to build momentum, most people will run the Mycroft-branded assistant and most people will be happy with what we offer. But there are always people out there who have different tastes, and if, say, the adult film industry wants to roll their own voice assistant powered by our technology, fine; they just can't put our brand on it.

Carl: And could you connect a single Mycroft voice assistant device to multiple services, and sideload apps onto it if you wanted to?

Josh: You can do whatever you choose to do with it.

We probably will take some steps to protect people from themselves; in other words, we won't make it so easy to sideload onto devices that it becomes a security vulnerability. But at the same time, an informed user who makes a choice to load adult content onto their smart speaker, that's none of my business, and as long as it doesn't have my brand on it, I'm totally fine.

Carl: That seems a good place to draw the line. Being able to use the device that I purchased in the way I want is the spirit of open source, and I think that creativity is one of the things the big tech walled gardens are missing.

Josh: People always come back to this, though, so let's go back to the Google Duplex demo. Google Duplex makes a phone call to a haircut place and schedules an appointment, and the person at the hair salon doesn't know they're talking to an AI.

Now put that in the hands of an abusive customer-support scam operation, the kind that calls you up and says your computer's been hacked, I'm here from Microsoft, send me gift cards to pay your bill, or whatever; I'm talking about phishing, making tens of thousands of phone calls simultaneously using an AI. That's a negative use of this technology, and I think it's coming, and there's not really a whole lot that we can do as a company to defend folks from it, with the exception of giving them access to the same tools. So, for example, in that case, having an AI answer the phone, so that when the guy calls up with the Microsoft support scam, that AI puts him into an infinite loop and runs his phone bill up to a gazillion, and the person whose AI it is never even sees the call.

Carl: There was something like that for spam recently, where you can forward your spam emails to a bot that keeps the scammers in an endless conversation. Yeah, I can see that working.

Josh: The tools are coming one way or another. The danger, from my perspective, is that some big tech company cracks this problem first. Look at the call centre industry in the United States, with 3.5 million people employed in call centres. When voice assistants get good enough to answer those calls, that's three and a half million people who will no longer have that employment. The free market economists out there will say they will find other employment in the economy, and maybe they will, but I'm looking at the split between the top 1% and the bottom 99% in the United States recently.

The type of employment they're going to find might not be as good as a call centre job, and a call centre job isn't much of a job to begin with. And concentrating it in a company like, and I'm going to pick on Google here, Google, with 20-something thousand employees, which is a very small company considering it's got an $800 billion valuation, further concentrates wealth, further impacts the job market, and really locks that work up into one company. In my mind that's really dangerous. The idea that we can take that same technology and make it available to those 3.5 million call centre workers, so that one or two or a hundred of them can develop new businesses in that industry and employ their co-workers, having access to the stack is really, really important, and it's one of the reasons that we're really behind democratising this technology.

Because if we don't, it's coming anyway, and it's going to be owned by, like, three guys.

Carl: So, on how we can get involved: I can build skills as a developer. If I want to build a Mycroft skill today, what do I need to do? What languages do I need to know, what hardware do I need, and how do I get started?

Josh: OK, so we do support Linux, so if somebody doesn't want to buy any hardware at all, they can load a Linux VM on their laptop, a virtual machine using, I think, the one from Oracle, which is open source and turnkey, load that up, load Mycroft onto that Linux machine, and then use the computer's speaker and microphone to do development. So there's no real need for any hardware.

That being said, most people are using this in a smart-speaker-like environment, and you can get a Raspberry Pi for $35 and a Google AIY kit for another $12, which builds you a full-on smart speaker that you plug into the wall, so that's $47. Another good solution, if you have a microphone and speaker lying around, is to just buy a Raspberry Pi and go. Once you've got the voice assistant up and running, building a skill is pretty straightforward. We have really extensive documentation; Kathy Reid is our community manager and she does a fantastic job of keeping it up to date. Most people will clone an existing skill. The one that we see as a really popular starting point is the weather skill, because it's got the whole framework laid out. They can go in and make the changes they need to customise their skill, test it locally, and once they're happy with it, check it in.

and submit the skill for inclusion in the Mycroft stack. So it's a pretty straightforward process. I've seen Python developers build a very simple hello world skill in less than half an hour: start from our existing frameworks, get it up and running, get it functional, and then start plugging in APIs on the back end to do whatever they want. We're a Python shop; we deliberately chose Python for Mycroft because it's accessible. It may have made more sense to have done it in Go or in a variety of other languages, and it certainly would have been faster in terms of execution time, but that really limits the audience that can use the technology to professional developers who have significant experience. We want the STEM student at high school who's got a brilliant idea to have access to the stack, and so we chose Python as the language.
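To give a feel for what that looks like, here is a minimal sketch of a hello world skill based on the Mycroft skills framework. The intent and dialog file names are placeholders you would create yourself, and the exact imports may differ between mycroft-core versions, so treat this as an illustration rather than a definitive template.

```python
# Minimal sketch of a Mycroft skill (a hypothetical __init__.py in the skill folder).
# 'hello.world.intent' and 'hello.world' dialog are placeholder resource files
# you would add under the skill's vocab and dialog directories.
from mycroft import MycroftSkill, intent_file_handler


class HelloWorldSkill(MycroftSkill):
    """Responds when an utterance matches the hello.world intent file."""

    @intent_file_handler('hello.world.intent')
    def handle_hello_world(self, message):
        # This handler is also where you could call an external API and pass
        # values into the spoken response as dialog variables.
        self.speak_dialog('hello.world')


def create_skill():
    # Mycroft discovers and loads the skill through this factory function.
    return HelloWorldSkill()
```

Dropped into the skills directory of a running Mycroft instance, a skill of roughly this shape is the half-hour hello world Josh describes.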

Right choice. As someone who did most of their early programming in Java and has since moved to Python, I can absolutely relate. To contribute to the Mycroft stack we require a contribution agreement that basically opens the license for the code. All of the Mycroft stuff is licensed Apache 2.0; the back end will probably be LGPL, so that will be a slightly different license. The skills that come in need to be open, and they need to be licensed in a way that, if there are patents involved, the patent license is explicitly granted as part of the contribution. Apache 2.0 does that very nicely. One of the things the open source community has run into in the past is companies, or unscrupulous individuals, developing software, opening the license, contributing it into the community, getting it widely deployed, and then standing up and saying, oh, I gave you the copyright,

but not the patents. The licensing mechanisms we're using explicitly license any patents that are associated with the code, so that we're not laying landmines for users of the stack. So once they've licensed it, it becomes public. Now, if they want to create a proprietary skill they're more than welcome to do that; they just need to figure out a way to distribute it, like by side-loading it or by creating their own skill store. It's just the nature of the beast with us that any skills that are contributed are open. Eventually that may change, but if it does then we'll need to work out the licensing and everything else associated with it. So those are the skills that are deployed to GitHub and available in your marketplace, but presumably that doesn't apply to skills that companies

build in-house as proprietary skills on top of your stack? Right, we don't have any say over that. If they change the back end code, under the LGPL they'll be required to contribute that back. So for the skill that runs on the device, with the APIs and everything that underpins it, they can do whatever they want. But once we open up the servers, the back end system, and make it possible for anybody to pull basically the entire system, companies will be required, if they make changes to it, to contribute those changes back. Although openness does increase the pace, what you don't want to do is create a foundation for some other company to go running off without bringing their changes back to the overall community. So that part will have a slightly stricter open source interpretation.

Is there a community behind it to support developers, to show them how to hack on it and help when they get stuck developing skills? So we have a very active user forum, we have a Telegram group, and we have a very active chat channel, all of which have company employees and community managers in them throughout the day. Our community really does a great job of answering questions and pointing people at the appropriate documentation. Interestingly, just this past week Linus Torvalds, the developer of the Linux kernel who has been leading that effort, made a public announcement that he's taking some time off from kernel development to go and develop better leadership skills, which is effectively what it comes down to, because the Linux kernel development community has had such a poisonous tone. In our case we've been very, very careful in how we interact with the community and how we police interactions between community members,

and we've had to do very, very little of that to keep the community really positive and focused on helping people solve problems. So unlike Reddit, or the nastier parts of the internet like 4chan or whatever, where people troll each other, pick on each other and act like real jerks, our community is very, very positive and really receptive, even to the most basic questions. I think that even a thirteen year old kid who's developing a skill as a STEM project can feel comfortable wading in and saying, hey, I'm having trouble with something very basic, and our community will be very patient in helping that person solve it. That's wonderful. I understand Ryan Sipes, who was in charge of that community, did a fantastic job of building and managing it from the ground up.

Exactly. From time to time you get a bad actor, and in general, as long as you confront that issue head-on and address it pretty quickly, those folks either shape up or they leave. The goal is to have a work environment, and it is very much a work environment for most of us, that is positive, where new ideas aren't shot down and where everybody feels comfortable wading into the conversation. On open source more broadly, what are the main benefits of the open source approach, to the project and to voice assistants in general? So we're actually doing something new in open source, which is kind of interesting, and Mozilla was part of that process as well. Because we're building software that's based on machine learning, the software itself has some value, but it's really the data that underpins it that makes it work. You can have the best Precise wake word model, but if you don't have a hundred thousand samples of 'Hey Mycroft,'

it's not a useful piece of software. That's the perennial problem with these things. So we've really pioneered the concept of an open source data set that allows people to be forgotten. The other issue when you open a data set is that once it's open and people have copies of it, there's really no way to call it back. If you don't have some kind of mechanism to allow people to be forgotten, and you're using a voice assistant, anything they say in front of that voice assistant becomes a permanent public record. With the privacy issues that are coming to the forefront, there's a real danger there.

So we've developed a model with Mozilla where, every 30 days, people who use the data set are required to renew it and delete the original data set. That's part of this new open source license. So for the data people are contributing, if for whatever reason during a month they choose to no longer volunteer their data, by the end of the month that data has been removed from the data set, and under the license we grant to people who are using the data, they're required to refresh it, which gets rid of all the data from people who've opted out. That's something that's really new to the open source community. Essentially this is voice data that's being used to train the speech-to-text model, Precise data that's being used for the wake word, and speech synthesis data; basically anything we can get our hands on we're making available to the public. And then the software itself is a very standard open source model. Open source means

that all of the underlying technology is accessible. There are no black boxes; everybody can examine the source code and see how it works. It also means that the person publishing the software is granting rights to the people using that software to modify it and change it. In some cases, like under the Apache 2.0 license, that right of modification doesn't place any requirements on the user, so they can take it, change it, keep the resulting source code private, and go off and build a competing product, for example. In other cases, like the LGPL, which we'll be using for the back end, if they change it they are required under the license to contribute those changes back to the central repository. So the benefit of open source for people using the software is that they can see inside and see what's going on, and they can improve it, change it and modify it to fit their needs. I can see that for voice

assistants, from a transparency and security perspective, it really is ideal, right? Because a black box piece of software that's 'secure', and I'm putting air quotes around that, is only as secure as whoever the security reviewers were and however much budget that company had to review the code. What we've found time and time and time again is that proprietary software has a tendency to have a bunch of zero-day vulnerabilities that nobody can see, because they can't see the underlying source code. And from the perspective of a company adopting the software, the other advantage is that

free markets really do work: when the price of something is zero, it has a tendency to drive adoption. Companies out there looking to deploy a voice assistant find that every system on the market today has an implied cost. Whether it's using Amazon's Alexa Voice Services, which carries the implied vulnerability that Amazon has visibility of everything going on in the stack and you're really dependent on a cloud service, or some of the more proprietary solutions from companies like Nuance, which might run on device or run locally but require you to pay a fee to use the software. The open source model has the benefit that you have visibility, you can do whatever you want with it, and it has zero price. From an economic cost perspective, that means companies adopt it.

So open source, from a company's standpoint, is a market strategy to go out there and make sure that the software you're building is relevant and widely used. If you look at the voice assistant and smart speaker market today, that's really what all the players are doing. I take pictures of Google Assistant ads every time I see them, when I'm in a subway or when I see an ad on the side of a building, I just shoot pictures. I have pictures where they bought out the entire New York subway system, the whole thing, and a couple in Paris, though not a lot. And wasn't it last year they had people dressed up in Google Assistant costumes wandering around, and all the trains were wrapped in it? They spent this huge amount of money because they're after market share, and it's the same thing with our friends at Amazon. From our perspective, we can do the exact same

thing without the marketing budget, by saying: hey, it's free, and we won't spy on you. Google Assistant on your Android phone is free, but there is still a cost, and people are waking up to that after Facebook. If you're not paying for it, then you're the product; you are the price. I mean, Google's business model is effectively a B2B business model; they are not a consumer company. Google sells other companies your data, that's their model. You are the product that they sell to their customers, who pay for it. Giving up their privacy is how people are paying for those products. It's a bit different in Amazon's case; their monetization strategy is more along the lines of selling their customers products directly, but

think about why Amazon named Alexa 'Alexa'. When you ask them why they did that, they give a variety of reasons, but Amazon made an acquisition in the late nineties of Alexa Data Services. Alexa Data Services tracks website and web use across the entire internet; it's one of the largest data aggregation machines anywhere in the world, and they named their voice assistant after it. So think about that: the always-listening microphone that they put in your kitchen is named after their data analytics platform. Do you think that's a coincidence? I can just see the people inside Amazon calling them both the same thing. All of that data is going to their servers, and it's all coming to the United States, which has a spotty track record of respecting the privacy of international users. For US persons they actually do fairly decently, but if you're not a US person,

the United States has a very spotty track record there. So an open solution makes a lot of sense. From our standpoint, you get to deploy the software, and what we get is market share and mind share, which eventually leads to bigger companies approaching us and saying: hey, we want to do this, we don't have the expertise, help us do it, and we'll pay a licensing fee for X, Y or Z. Or saying: hey, we want to modify your back end system to integrate with all of our current account systems and whatever else. We say: okay, well, under the LGPL you have to contribute those changes back. When they say no, we say: okay, we can license that to you as a private license, but of course that comes with a cost. That model has worked well for a number of different open source companies; the way you make money over the long term is through paid support and proprietary licensing deals. And then there's the Dilbert scenario:

you've seen that kind of comic strip. The pointy-haired boss gives a developer the assignment at Acme Corp to go and build a voice assistant for whatever product. The first thing that developer does is search: okay, Alexa, Google, well, that's kind of neat, okay, open source voice assistant, and up pops Mycroft. That's the opportunity for us. He or she uses it, builds a simple solution using our skills framework or whatever, and then presents it to the pointy-haired boss, and the boss asks: what do we need to buy in terms of support from the company? I've worked in IT myself for a long time, and I want support for the products that I deploy to support customers, because of course I don't want to be responsible for everything. So what you want is to win over the individual, get them to use it within the company, and then have them sell it internally.

I don't think Amazon are doing themselves any favors by launching competing products against their own customers either. They're getting into automotive, they're getting into space launch, they're in retail, they're in cloud services, they're in phone systems, and it looks like there's a real possibility that this company out of Seattle may become the largest logistics provider in the world by providing international shipping. With all of those things in place, if you're a shipping company that wants to buy a voice assistant to manage your cargo fleet, are you really going to go with them? Again, one of the things I wanted to ask about is the voice data

that people are currently contributing, the speech samples used for the wake word and speech-to-text. What are the controls, and how much access do individual developers have to that voice data? Do other users need to sign some kind of agreement? There's a license agreement that you need to sign. Basically it's an open source license, so you can use the data, but it does require you to refresh it every 30 days so that people who opt out can be expunged from the data set. So that means if I were a developer and I wanted to use other people's voice data, I would have to re-download it every 30 days? Yes, but any models you make using that data can live forever. So if you take the voice data and train a machine learning model to do X with it, you can use that model ad nauseam. But if you're going to use that training data to run another training round a month later, you need to download a new copy, and the purpose there is that if people opt out in the meantime, we need to pull their data out of the data set.
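To make that 30-day obligation concrete, here is a hedged sketch of how a developer might automate the refresh. The download URL and file paths are purely hypothetical placeholders, not real Mycroft endpoints, and this is an illustration of the idea rather than official tooling.

```python
# Hypothetical sketch: keep a local copy of an open voice data set no older
# than 30 days, as the license described in the interview requires.
# The URL and paths below are placeholders, not real Mycroft endpoints.
import time
import urllib.request
from pathlib import Path

DATASET_URL = "https://example.org/open-voice-dataset.tar.gz"  # placeholder
LOCAL_COPY = Path("data/open-voice-dataset.tar.gz")
MAX_AGE_SECONDS = 30 * 24 * 60 * 60  # 30 days


def refresh_if_stale() -> None:
    """Re-download the data set if the local copy is missing or over 30 days old."""
    if LOCAL_COPY.exists() and time.time() - LOCAL_COPY.stat().st_mtime < MAX_AGE_SECONDS:
        return  # still within the 30-day window
    LOCAL_COPY.parent.mkdir(parents=True, exist_ok=True)
    if LOCAL_COPY.exists():
        LOCAL_COPY.unlink()  # delete the stale copy, per the license terms
    urllib.request.urlretrieve(DATASET_URL, LOCAL_COPY)


if __name__ == "__main__":
    refresh_if_stale()
    # Train against LOCAL_COPY here; models already trained may be kept, but a
    # new training run a month later must use a freshly downloaded copy.
```

Models trained from an earlier copy can live on, as Josh notes; only new training runs need the refreshed download, which automatically drops anyone who has opted out.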

As for whether we restrict the use of that data beyond that, I don't know; I'd need to look at the license. I think the only thing we would be sensitive to is people using that data for identification or biometrics, so the license may exclude your right to use the data to try and identify the user. We might need to do that through licensing, because I don't think there is a good technical solution there; if somebody's aware of one, send me an email. But those are really the only two restrictions, and then the data is available for anybody who chooses to use it. That's the whole idea of building a public data set. The big players aren't making their data available; they're not even making analytics available in many cases. We actually started publishing analytics last week, or the week before; we did a blog post about where we stack up next to the other

voice assistants. I'll give you a hint: not very well. But that was our baseline run. This is the first time we've run all these analytics against Alexa, Google and Siri to see where we stand in terms of response time, accuracy and other things. We started the user experience sprint when we went to 18.08 last month, and the next six months are all about user experience. The goal is to take all of those blue bars in that chart, the other assistants have another color, and shrink them. The post we're actually aiming for is the one in the December or January timeframe, where we can say: here's where we were in August, and look how much it's improved.

One more reminder: the Mark II is available to pre-order on Indiegogo now. We're also running a StartEngine campaign where members of our community can invest in the company, which has been fantastic; details are at startengine.com. There are still a few Mark Is left, not a ton, I think 50, maybe 100 in total. The goal is to sell the last Mark I on the day the first Mark II boxes ship, so we're not pushing them hard, but there are still some available to pick up. And of course we're always looking for community members who have interesting skills to develop, especially folks who, and this happens all the time, developed skills for Alexa or Google and found out that they can't do pushes to the devices, they can't get access to whatever data, they can't get this, that or the other. Our platform is open and allows people to do that, so I think folks who are frustrated by their experience with Alexa

or Google Assistant will be really happy to see how much freedom we give them on our platform, so we'd love to have them. What's on the horizon for the next six to twelve months? User experience, user experience, user experience; we want to make it good enough for Grandma to use. Perfect. Thanks so much for your time.

Okay, so you just heard from Joshua Montgomery, the CEO of Mycroft, the open source voice assistant platform. I'm really excited about the release of the Mark II. If you're thinking about getting one, as a Christmas present for instance, then I suggest you take a look at that Indiegogo page as soon as possible, because they are selling fast. I hope you enjoyed the episode. As always, you can find the show notes and the resources mentioned in the episode at voicetechpodcast.com. As I mentioned last episode, I'd love to talk to more of you about why you listen to the show, so do get in touch and set up a quick call with me: just email me your contact details, your profile and your preferred time to call at voicetechpodcast.com.

If you like the show, as always, just tell one friend or colleague about this episode, and don't forget to subscribe in your favorite podcast app as well. If you want to try my skill right now, all you have to do is say 'Alexa, open Voice Tech Podcast'. Okay, that's it from me; I'll be back very soon. Until then, thank you for listening to the Voice Tech Podcast.

Subscribe to get future episodes:

Join the discussion:

Support the Voice Tech Podcast:

