What’s New in Android Machine Learning (Google I/O’19)


[MUSIC PLAYING] MATEJ PFAJFAR: Hi, everyone. My name is Matej Pfajfar, and
I’m an engineering director for Android Machine Learning. It’s really great to
be here with you today to talk about how
Android developers can use Machine Learning to build
great features for Android users. Android today has a wide range
of Machine Learning tools available for everyone– designers, product
managers, engineers, mobile developers, and Machine
Learning experts alike. Now, Android is a platform. And one of the ways
you can measure the success of a platform
is through the great things that you folks
build on top of it. In Android Machine Learning,
we measure our success by your success. Today Android runs on
2.5 billion devices, and a great many of them are
already using on-device ML. On-device Machine Learning is
no longer a thing of the future. It is here right now, today. Now, Google is an
AI-first company, so we use Machine Learning in
lots of places, for example, in Lookout. It’s an application which
aims to help blind or visually impaired people gain more
confidence by helping them understand their
physical surroundings. We also use Machine Learning
extensively under the hood of Android to
optimize battery usage and allow you to get more done
on a single battery charge. For those of you new
to Machine Learning, let’s have a quick recap. Why do we use Machine
Learning at all? Well, Machine Learning
allows us to solve certain types of problems
in very elegant ways. For example, say you had
to write code to detect if someone was walking. And if you had access
to their speed, you could write a fairly
simple bit of code to say, hey, if the speed is
less than a certain threshold, say four miles per hour, then
this person must be walking and not, for example, running. Extending this to running
seems straightforward enough. And if we wanted to
cover biking as well, we just need to add another
boundary condition to our code and we’re pretty
much done, right?
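To make that concrete, here is a tiny illustrative Kotlin sketch of the kind of hand-written rule being described. The 4 mph walking threshold comes from the talk; the 12 mph running threshold is an arbitrary value added purely for illustration.

```kotlin
// Illustrative only: hand-written rules of the kind described above.
// The 4 mph threshold is from the talk; 12 mph is an arbitrary example value.
fun detectActivity(speedMph: Double): String = when {
    speedMph < 4.0 -> "walking"
    speedMph < 12.0 -> "running"
    else -> "biking"
}
```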
What about other types of activities, for example, golfing? So if you think
about it, golfing involves a lot of walking,
but there is also carrying and swinging a bunch of clubs. So how do we go about
describing that? This is where Machine
Learning can help. In traditional programming,
we express rules in languages, such as Kotlin, Java, or C,
and we then apply those rules to problems of interest. In Machine Learning, the
paradigm is somewhat different. Instead of writing
a bunch of rules, we take a lot of labeled
data, like photos of people engaged in
different types of activities with corresponding labels
for those activities. And if we feed this data
to our neural network, it will learn the
business rules for us without us having
to define them. And in fact,
Machine Learning has proven to be remarkably
effective at solving many different types
of problems, including, for example, activity detection. And right now you’re
probably thinking, wow, this must be really complicated. Well, in fact, it’s not, but
we’ll get to that in a minute. I’ve talked to you about how we
use Machine Learning at Google. But it’s really exciting to see
other Android developers using Machine Learning and the
tools that we’ve provided to build really cool stuff. A group of engineers in
Delhi built Air Cognizer. It’s an application
which measures the levels of air pollution
using your phone’s camera alone. Now, it is pretty
amazing that you can detect the
presence of particles smaller than 2.5 micrometers
using just a camera and a neural network or two. Then there’s Gradeup School. It’s a learning app which
allows you to get answers to questions you need
help with by simply taking a photo of them
with your mobile phone. Now, to be clear, my
school days are long gone, but I’m a parent. I have seven-year-old twins. And it is absolutely
awesome that they have this type of
technology available to them as they go through school. My family and I live
in London, but I spend a lot of
time in California where everyone seems
to be a wine expert. So to help me take part in
conversations about wine, I have Vivino. It, too, uses Machine
Learning to help you explore good wine with good friends. But why on-device Machine
Learning specifically? Well, there are several reasons. For one, we have
privacy, a topic that’s very important
both to ourselves as well as to our users. In many cases, it’s more
appropriate to just keep everything on device
so we don’t have to worry about how the data
is processed in the cloud. On-device Machine Learning works
when there is no connectivity or when the network is patchy. And if, like me, you spend any
time on underground trains, you will know that this
is pretty useful stuff. By running our
processing on device, we can remove network latency
out of the equation altogether. And couple this with the
latest advancements in hardware acceleration, and
we can give users truly real-time experiences
across a wide range of Android devices. In fact, we have
built this capability into the Android OS itself. With the Neural Networks API,
applications using TensorFlow can achieve great performance
by accelerating their workloads on different types of hardware,
like GPUs, DSPs, and NPUs. And with Android Q,
the Neural Networks API will be able to support even
more great new use cases with even better performance. We have worked really hard
with our ecosystem partners to achieve this, and we’re
very proud of the results. For example, on a
MediaTek Helio P90, we were able to achieve a
9x increase in performance when running ML Kit’s
Face Detection model. But it’s not only about speed. Power consumption is
equally as important. And on Qualcomm’s
Snapdragon 855 AI engine, we are able to achieve a 4x
increase in energy efficiency when running very powerful
text recognition models available in Google Lens. These are all really
impressive results. But I did say that
this was going to be easier than it looks. So let’s get to that. We have made great
Machine Learning tools that will make you feel
like a Machine Learning expert. First, we are
launching the People Plus AI Guidebook,
a framework which helps you explore Machine
Learning opportunities for your application. In Android Platform,
we have made available wonderful Machine
Learning tools from speech to computer vision. Then we have ML Kit. If you want to leverage
Google Smarts out of the box, then ML Kit is the
product for you. Google Cloud’s AutoML allows
you to bring your own data set. And it will train a
Machine Learning model for your specific use case. And what’s new this year
is that this model can be downloaded and run on device. And last but not
least, TensorFlow puts the full power of Machine
Learning at your fingertips in an easy to use way. Clearly there’s
lots to talk about. So let’s get started. Hoi will walk us through our
People Plus AI Guidebook. Come on up, Hoi. [APPLAUSE] HOI LAM: Thank you, Matej. Hi, everyone, I’m Hoi. I come from the Android
Developer Relations team, and I lead our Machine
Learning effort. And if you want to follow me on
Twitter, my handle is @hoitab, H-O-I-T-A-B, to get the
latest on Android ML. So when I’m speaking
to developers from around the world what is
the most common question that I get? Is it how many layers I should
put into my neural network? Or is it which ML
Kit API to use? Actually, it’s this one. Hey, my boss has asked me
to put some Machine Learning into our app. Can you help? Fortunately, yes, we can. And also, you actually
know a lot of the answer because Machine Learning
is no different compared to any other new technology. Think about the first time when
Material Design was launched. How do you work with it? You work with product
designers and engineering all combined to decide,
hey, what kind of problem are we trying to solve
with our product? How does it fit in
within the user journey? And finally, how do we
actually implement it? And it is not a single effort. You don’t just do Machine
Learning, tick the box and say, yep, we’re done. We don’t need to do any
more Machine Learning. It is a very iterative process
when you have a new technology. You want to test the
boundary a little bit. You want to collaborate. And you want to see how the
users really use your product. And as Matej said,
at this I/O, we are launching the People
Plus AI Guidebook. There are six different
sections to help you structure your conversation
with your designers and product managers. You can access it from this URL. One of my personal favorite
sections of the new guidebook is this one, dealing
with graceful failure. We are all good
programmers here, so I believe thinking
of our edge cases is just part of the job. But with Machine Learning,
what is different is those are no longer edge cases. In the conventional
programming world, when you have yes
or no button, it’s fairly easy to deal with when
the user click Yes or No. In the new Machine
Learning world, what happens when a user
say, well, actually 3:00 PM is not great. Does that mean yes
or does it mean no? You need to think through
that process of how do you deal with
errors and failures because it is part
of the main flow. And if you want to find out
more about this People Plus AI Guidebook, I would
strongly encourage you to go to the
session at 1:30 tomorrow afternoon where our Google
Design team will share with you the content
of the guidebook, as well as real world usage
of the guidebook and advice among Google products. Next, I want to talk about
the Android platform itself and what we offer as
part of the platform. In particular, I want
to talk about two APIs, or two sets of APIs even. So the first one
is to do with speech. I’m sure that a lot of you
have seen it from the keynote. So let’s switch
over to my phone. So this is a little app
that I put together, and it uses the Android platform
API to do speech dictation, as well as reading it back to
me via the Text to Speech API. So fingers crossed,
this is a live demo. Let’s see if it works. I’m really glad to be
here in California. So as you can see, it dictates. And let’s see how it
reads back out to me. GOOGLE VOICE: I’m really glad
to be here in California. HOI LAM: Hey, so it works. Woo-hoo. [APPLAUSE] All right. So there is a secret button here, because what I just demonstrated is English. And today, English is just table stakes for dictation. How about trying it in
my native Cantonese? So let’s try this. [SPEAKING CANTONESE] [APPLAUSE] So what I’ve just
said there is, I’m really happy to share
my experience with you. And let’s see how
it reads it out. GOOGLE VOICE:
[SPEAKING CANTONESE] HOI LAM: Totally works. So let’s switch back to slides. OK. So I demonstrated that
API on the Google Pixel 3, and many of you might
have the question of, hey, where is this API? Is it part of the Android
X Support Library? Is it part of Android Pie? Is it even Q? How come I didn’t hear about
it at What’s New for Android? This may surprise you because
actually, it wasn’t new. So we launched this
API for text to speech 10 years ago at Google I/O. In
fact, it was such a long time ago that YouTube didn’t support
uploading any video that’s longer than 10 minutes. So I strongly encourage
you to check out part nine of the
keynote during that year because what you’ll
hear is a really fundamental improvement in the quality of the sound that comes back out. And that is one of the beauties of Android: we never stop innovating. So the APIs that you were using, we just keep them updated over time. How about the Speech
Recognizer, the dictation part? That was launched as part
of API 8, Android Froyo. So this is actionable
for you today. Your users should be able to
access both Text to Speech, as well as Speech Recognizer. So next, how easy is
it to implement this? So let’s go through the
seven steps I went through with the Speech Recognizer. So the first thing
that you’d need to do is to create your
own speech listener. Within that class, there are
multiple abstract methods that you’d need to implement. And I will highlight
three of them. The first one is
onPartialResults. So as I was speaking
to the phone, you see part of the results
coming back as I spoke. And what that does is
it gives the user feedback
them and the machine actually understands what
they are saying. Another thing that you can do
to give the user more feedback is on RMS Change,
which detects the noise level around the phone
so you can display the variation in noise level
back to your user and show that you’re really
literally listening to them. And finally, when you get
the result in on result, you can access the
top confidence result by just going into the
first element of the array. And there, you will find a
string that you’re looking for. In addition, an optional
step is you can also look at the other results that
the machine was coming back with. So for example, when I’m
saying “I am,” the machine might come back with “I
apostrophe m,” or “I space am.” You can access both
through this method. In order to use it, you need
to initialize your own Speech Recognizer. And after that, you also need
to have a speaking intent. Within all these
options, I would like to highlight two of them. First, I would
strongly recommend that you enable
partial result. Again, it’s about giving
the user feedback that things are working. Second, as here I specify,
you can specify the language, and this is Cantonese. And when you are ready,
you just feed the intent into the recognizer that you
have created and away you go. It’s that simple: seven steps.
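For reference, here is a minimal Kotlin sketch of that flow using the platform SpeechRecognizer and TextToSpeech APIs. The listener callbacks and intent extras are standard framework APIs; the activity wiring, the utterance ID, and the Cantonese language tag are illustrative assumptions, and the RECORD_AUDIO permission is assumed to have already been granted.

```kotlin
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.speech.tts.TextToSpeech
import androidx.appcompat.app.AppCompatActivity
import java.util.Locale

// Minimal sketch of the dictation + read-back flow described above.
class DictationActivity : AppCompatActivity(), TextToSpeech.OnInitListener {

    private lateinit var recognizer: SpeechRecognizer
    private lateinit var tts: TextToSpeech

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        tts = TextToSpeech(this, this)

        // Step 1: create your own speech listener.
        val listener = object : RecognitionListener {
            override fun onPartialResults(partialResults: Bundle) {
                // Show interim text so the user sees that we are listening.
            }
            override fun onRmsChanged(rmsdB: Float) {
                // Drive a level meter so the user sees the mic picking up sound.
            }
            override fun onResults(results: Bundle) {
                val matches =
                    results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                val topResult = matches?.firstOrNull() ?: return
                // Read the top-confidence result back out via Text to Speech.
                tts.speak(topResult, TextToSpeech.QUEUE_FLUSH, null, "dictation")
            }
            // Remaining RecognitionListener methods omitted for brevity.
            override fun onReadyForSpeech(params: Bundle) {}
            override fun onBeginningOfSpeech() {}
            override fun onBufferReceived(buffer: ByteArray) {}
            override fun onEndOfSpeech() {}
            override fun onError(error: Int) {}
            override fun onEvent(eventType: Int, params: Bundle) {}
        }

        // Initialize the recognizer and attach the listener.
        recognizer = SpeechRecognizer.createSpeechRecognizer(this)
        recognizer.setRecognitionListener(listener)

        // Build the recognition intent: enable partial results and set the language.
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true)
            putExtra(RecognizerIntent.EXTRA_LANGUAGE, "yue-Hant-HK") // Cantonese (example tag)
        }

        // Feed the intent into the recognizer and away you go.
        recognizer.startListening(intent)
    }

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) tts.setLanguage(Locale.US)
    }

    override fun onDestroy() {
        recognizer.destroy()
        tts.shutdown()
        super.onDestroy()
    }
}
```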
Next, I want to go through the camera API itself. You might ask, hey,
what is going on? Why are you talking
about camera? I thought this is
Machine Learning. As many of us know, many
Machine Learning use cases leverage the camera in
order to do analysis. And when I first
started six months ago, I thought, hey, this
is easy, it’s Android. I know Android. This is great. Let me try it out. And this happened to me when
I first loaded up my sample. It just said, oh, wait. You’re using Camera1. Why? We deprecated it in API 21. Why are you still using it? What’s wrong with you? And I go, yeah, I work
in Android [INAUDIBLE], I know Android. I will update the sample. So I went to Camera2. I went to the basic
Camera2 sample and I found this particular
class and go, yeah, all I need to do
is understand it. And I scroll, I scroll,
I scroll because it’s 1,000 lines of code. And it’s like, oh, my
word, I am so dumb. I don’t understand
why Google hired me. And actually, after a
while, I found the confidence
developers, and they also have the same problem. So thank you for those of
you that share my pain. I am perhaps OK as a developer. So bear that in mind,
at this Google I/O, we are launching CameraX. The Support Library is
really to do two things– improve the ease
of use so that you don’t need to have
1,000 lines of code and three hours of this
session to present it to you, and the second thing is to
increase device compatibility. At launch, we will
support three use cases– preview, image capture,
and my personal favorite, image analysis. You can see where
this is going, right? So how easy is this? I told you it is not
1,000 lines of code. So let me just present
to you the four steps that I went through to implement
the image analysis use case. The first thing I did, just
like the speech listener, is to create analyzer. And the red box is where
you put your ML code. So that’s prep. That’s our analyzer. In order to run it,
the next step I do is to create a configuration
object for the use case. Here, you will notice
that we basically define how the
threading is going to be run in very simple terms. And my personal favorite is this
option, acquire latest image. How often is your Machine Learning pipeline backed up because your phone can’t deal with the number of images coming through? This solves the problem by only giving you the latest image that it has. So you no longer need to write your own code to say, hey, let’s throw away the next five images and let’s get on
with the latest one. This automatically
does it for you. After that, you create a use case, feeding in the configuration and the analyzer that you have put together. And last but not least, just feed it into a LifecycleOwner. Just bind it to a Lifecycle, for example the AppCompatActivity or the Fragments that you love. That’s all you need to do; it’s that simple: four steps.
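As a rough Kotlin sketch of those four steps: the CameraX API has evolved since the alpha shown in this talk (the configuration object described above was later folded into ImageAnalysis.Builder), so this uses the current stable API, where "acquire latest image" corresponds to the keep-only-latest backpressure strategy. The camera permission is assumed to have been granted, and the function name is illustrative.

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat

fun bindImageAnalysis(activity: AppCompatActivity) {
    // Step 1: create the analyzer. The body is where your ML code goes.
    val analyzer = ImageAnalysis.Analyzer { imageProxy ->
        // ... run ML Kit / TF Lite on imageProxy here ...
        imageProxy.close() // must close so the next frame can be delivered
    }

    // Step 2: configure the use case. Keeping only the latest image means the
    // pipeline never backs up when frames arrive faster than you can analyze them.
    val imageAnalysis = ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()

    // Step 3: attach the analyzer; threading is defined by the executor you pass in.
    imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(activity), analyzer)

    // Step 4: bind the use case to a LifecycleOwner, e.g. your AppCompatActivity.
    val providerFuture = ProcessCameraProvider.getInstance(activity)
    providerFuture.addListener({
        providerFuture.get().bindToLifecycle(
            activity, CameraSelector.DEFAULT_BACK_CAMERA, imageAnalysis
        )
    }, ContextCompat.getMainExecutor(activity))
}
```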
I’m sure a lot of you would be interested to find out more, and as a result, we do
have an extra session. Wey-hey. At 5:30 tomorrow afternoon (not tomorrow morning, tomorrow afternoon), our team from CameraX will be presenting this in full. And you can ask
them the questions that are in your mind. With that I’ll welcome
Dong to talk about ML Kit. Come on up, Dong. [APPLAUSE] DONG CHEN: Thank you, Hoi. Hello, everyone. My name is Dong Chen. I’m a tech lead on the ML Kit team. ML Kit was launched one
year ago at last Google I/O. You may wonder what has happened
to ML Kit since its launch. How is it adopted
by our developers? What’s new and what’s
cool with ML Kit? In the next few
minutes, I will lead you through these questions. First of all, a little refresher
of ML Kit for those people who are new to it. ML Kit is Google’s Machine
Learning SDK for mobile. We bring the best of a
Google Machine Learning model onto mobile devices
behind a simple to use API consistent
across both Android and iOS. It is aimed at making
Machine Learning easy and democratizing AI for
make it accessible to all the mobile developers. Just because you need
to use Machine Learning, it doesn’t mean you need
to be a ML expert or a data scientist with a PhD. You shouldn’t need to worry
about collecting data, training models, tuning model, and
running it on mobile device. ML Kit will take care
of all that for you. Since launch a year ago,
ML Kit has had a strong start and phenomenal growth. Its powerful models,
combined with simple API, have enabled many
mobile developers to add a Machine Learning
feature onto their mobile apps. We’re seeing user
engagement accelerating with active user sessions
growing by 60% month to month. Developers are building
really interesting mobile apps with ML Kit, ranging from social
sharing app for fishing lovers to shopping assistant apps for
consumers of shoes, clothes and furniture. They have used a
full range of ML Kit APIs, such as text recognition,
barcode scanning, image labeling, et cetera. Our face detection models
are 18 times faster as compared to the
last Google I/O and 13% to 24% more accurate. In addition, we also launched
Face Contour Detection. Besides the Vision
APIs, we have now also expanded into Natural
Language processing. We introduced Language
Identification feature for detecting a language
based on the textual input. We also launched Smart Reply for
producing meaningful responses in casual chat. If these base APIs don’t fit your need, we also support hosting
or running custom models. And we did not stop here. I’m thrilled to
share that we’re now launching several exciting
features for ML Kit. First of all, we’re launching
Object Detection and Tracking. You can now use ML Kit to detect
single and multiple objects in a static image. In addition, you can track
them in a streaming video while your phone moves. This will be very useful for
identifying interesting items for further processing. As shown in this
video, using your phone ML Kit will identify and track
the potted plant in the room. And with that detection result,
the app will look up the item and show you a product catalog
with prices and the links for shopping similar items. Secondly, we’re launching the ML Kit On-Device Translation API. For the very first time, mobile developers can tap into the same model powering the offline Google Translate app, free of charge. The translation runs entirely on the device. In your own app, you can now add translation between any pair of the 59 supported languages. You will also have control over which language packs to download, when to download them, and how long you’re going to keep them on the mobile device.
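As a rough idea of how that looks in code, here is a hedged Kotlin sketch using the Firebase ML Kit natural-language SDK as it shipped at the time (imports omitted). Names such as FirebaseTranslatorOptions, FirebaseTranslateLanguage, and FirebaseNaturalLanguage reflect that 2019-era SDK and should be treated as approximate, since the standalone ML Kit SDK that replaced it uses different identifiers; the language pair is just an example.

```kotlin
// Hedged sketch only: identifiers follow the 2019-era Firebase ML Kit
// natural-language SDK and may differ in current ML Kit releases.
val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.EN)
    .setTargetLanguage(FirebaseTranslateLanguage.ES) // example pair: English -> Spanish
    .build()
val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

// You decide when the language pack is downloaded (and how long to keep it)...
translator.downloadModelIfNeeded()
    .addOnSuccessListener {
        // ...and translation then runs entirely on the device.
        translator.translate("I'm really glad to be here in California.")
            .addOnSuccessListener { translated -> println(translated) }
    }
```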
At last Google I/O, we introduced the On-Device and the cloud-based Image Labeling APIs. The On-Device API can
identify hundreds of labels. And the Cloud API can identify
more than 10,000 labels. For this beautiful photo, using the On-Device API, we get back labels like dog, pet, and mountain. This is great. But these pre-trained
Google models can only provide
generic classifications. What if I want to
recognize my dog’s name or detect the dog’s breed? That will require me to
train my own custom model. And for that, AutoML
is a powerful tool which can help you with
model architecture search, as well as model training. Two years ago, Google launched
the cloud-based AutoML. Those powerful and very large
models can only run on a server and don’t fit into
the mobile device. What if I want to utilize
the power of AutoML but, at the same time, I also
want to take advantage of the cost, speed, and privacy
benefits of On-Device Machine Learning, as Matej just
discussed minutes ago? I’m excited to share
the great news with you. Now we’re launching ML
Kit AutoML Vision Edge. It provides the same easy
to use Image Labeling API, plus a TensorFlow Lite
model customized for you, trained by AutoML. You can now bring your
own images of the items you’re interested in, as few as tens of images per item, depending on your scenario. We will then train
the model for you using AutoML, based on the photos you provided. And we will also fine-tune the model, with its performance and size optimized for mobile devices. This generates a TensorFlow Lite
model customized for you which runs entirely on a device. For example, if I want to
build a mobile app that can classify different
type of flowers, I can use AutoML to
help me train the model. To do that, I will first
use the Firebase console web interface to upload the
photos of flowers I have. Once I’ve uploaded
all the flower photos, I’ll start training
my own model. And for each training job, I
can choose the best options that fit my requirements in
terms of inference, latency, classification,
accuracy, and model size. In addition, I can choose
how many compute hours I’m willing to spend
on the training that fits in my budget. Once the model
training completes, the console will show the quality of the
model in terms of precision and recall rate. Now, with the model trained,
how do I use it in my own app? ML Kit will help you with that. I’m going to show you one
slide of Kotlin code, which implements a
flower classifier using the model we have just trained. First, I will create a
model with the name flowers. Then I will register
the flowers model with Firebase Model Manager. The register remote
model call indicates that the model will be downloaded
remotely from the Firebase console. And you don’t need to worry
about the model downloading. ML Kit will take
care of that for you. You also have the
option to download the model from Firebase
console yourself and bundle it inside your app. Next, I will specify the options
for using this flowers model. I will specify this confidence
threshold as 0.45 for my model, depending on how the
model is trained. With that option, I will then
create my flower labeler. And these APIs may
look familiar to you, as they are really very similar
to the existing On-Device and the Cloud
Image Labeler APIs, except now we’re using a
model trained by AutoML. Isn’t that beautiful? Finally, I can run
my flower labeler to detect and classify
the flowers inside a photo or a streaming video. Yay.
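The slide itself is not reproduced in this transcript, so here is a hedged Kotlin reconstruction of the flow Dong describes (imports omitted). The class and method names follow the 2019-era Firebase ML Kit AutoML Vision Edge SDK as best as can be recalled (FirebaseRemoteModel, FirebaseModelManager, FirebaseVisionOnDeviceAutoMLImageLabelerOptions, FirebaseVision); treat them as approximate, since the standalone ML Kit SDK that replaced it uses different names.

```kotlin
// Hedged sketch only: identifiers follow the 2019-era Firebase ML Kit AutoML API
// and may not match current ML Kit releases.
fun classifyFlowers(bitmap: Bitmap) {
    // Create a model named "flowers" and register it with the Firebase Model Manager,
    // so the model is downloaded remotely from the Firebase console for us.
    val flowersModel = FirebaseRemoteModel.Builder("flowers").build()
    FirebaseModelManager.getInstance().registerRemoteModel(flowersModel)

    // Options for the AutoML-trained labeler, with a confidence threshold of 0.45.
    val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder()
        .setRemoteModelName("flowers")
        .setConfidenceThreshold(0.45f)
        .build()

    // Create the flower labeler and run it on a photo (or a frame from a video stream).
    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)
    labeler.processImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { labels ->
            for (label in labels) {
                println("${label.text}: ${label.confidence}")
            }
        }
}
```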
With just a few lines of code, I have built my own flower classifier running a model
trained by AutoML based on the photos I have. For those who are new
to ML Kit, I strongly encourage you to try it out. You will be amazed how
simple, yet powerful, it is. Last but not least, we are
launching new Material Design Guidelines for ML
use cases on mobile. Machine Learning has not
only added magic features to mobile apps, but also
affected more and more product experiences. To provide a great
end-to-end user experience, we’re publishing new
Material Design Guidelines to make our ML
product experience usable, beautiful
and understandable. Our Object Detection
Tracking Showcase app was developed by following
all these guidelines. And its full source code
is available on GitHub for both Android and iOS. To learn more about ML Kit,
please come to our session tomorrow at 12:30. We’ll show you some
cool demos and walk you through all the new
features we’re launching. And to learn more about
Material Design for ML, please come to our session
at 12:30 on Thursday. With that, I’ll hand over
to Laurence, who will talk about TensorFlow Lite. Thank you. LAURENCE MORONEY: Thank you. [APPLAUSE] So, hi, everybody. We’ve seen a lot of great stuff
that’s available on Android. We’ve seen how some of the
native APIs in Android can
Machine Learning. And we’ve also seen
how ML Kit can be used with pre-existing
models as well as building your own models. I’m going to switch
gears a little bit and talk about TensorFlow
and how TensorFlow can be used for building
custom models that could be deployed into ML Kit
or could be deployed directly to your device. Who here has never
heard of TensorFlow? OK, I’m doing my job, good. One. So going back to what
Matej mentioned earlier on, so TensorFlow is our API for
building Machine Learn models and for running Machine
Learning models. And going back to Matej’s
slides from earlier on, the whole idea around
Machine Learning is I like to think of it as
a new programming paradigm. We’re all software
developers, and it’s a new way for us to write
code to open up new scenarios. And instead of us
thinking of the rules and expressing them
in code ourselves, what if we could actually
label a bunch of data and then have the machine
infer the rules that make that actual data? And if you come away with
nothing else about Machine Learning, think
about it in that way. So TensorFlow. The architecture of
TensorFlow from a high level looks like this. On the left-hand
side of this diagram are all the things that you
use for building models, and we tend to
call that training. So there’s APIs in there
like keras and estimators and this distribution
strategy if you’re building big complex models so
you can spread it across CPUs and TPUs, that kind of thing. But then on the right-hand
side is deployment. So your models,
once they’re built, they can run in servers with
TFX and TensorFlow Serving. They can run in the
browser with TensorFlow.js or on Node servers
with TensorFlow.js. But then what we’re going
to be talking about today is TensorFlow Lite,
where you can put it on mobile devices
including Android. So I’m going to
give a quick demo if we can switch to the laptop. So here, I’ve designed the
simplest neural network that I could come up with. At the back, if you cannot read
this code, just go like that. You cannot read it? OK, I’ll make it bigger. Or do you just like doing this? Is that good? If you can read
it, go like this. OK. So this is the
simplest neural network I could come up with
where, if you can imagine a neural network, it is a
series of connected layers of these things called neurons. And so in this one,
I have a single layer and it has a single
neuron in it. So it’s basically
just a simple function that calculates something. So I’m going to feed that
with a bunch of data. And here’s the relationships
between the data on here, my X’s and my Y’s. Can anybody figure out what
these data are actually doing? If you were at my earlier
talk you’ve seen this already, I know. But the 0 and the
32 might be a clue. Yes, it’s Celsius to
Fahrenheit converter. It’s kind of given away
by the name F-to-C. Hoi thought that was
French to Cantonese, but– So what I’m going to
do is I’m just going to give it a bunch of data. And then what a
neural network will do is it will iterate
across that data and figure out the
patterns between them. So I’m going to run that. I know we’re short on time
and the party’s coming up. So let me run
through it quickly. So it’s gone through
500 iterations of trying to figure out the pattern. And at the bottom– this is so
big I have to scroll forever. Then at the bottom,
I’ll go onto this node. So if I, for
example, pass in 100, I know 100 degrees
centigrade is 212 Fahrenheit. I’m getting 211.3 because I
only had a small amount of data. I only had six values,
and it’s giving me a very high probability of the
relationship between those two numbers being centigrade
to Fahrenheit. So now I’ve built a model. It’s a very, very simple model. But what am I going
to do with it? I want to use that on mobile. So with TensorFlow, the first
thing that you have to do is you save it out. And we’ve got a format
called Saved Model, which is used as a consistent
format across everything. So I’m going to save this out as
a Saved Model with a save keras model like this. And when that saves
out hopefully, if it does after a
bunch of warnings, it’s going to give me the
path that it saved it to. And I’m going to take that– and this notebook is online. I’ll share the code
for it in a moment– and I’m going to
run that in here. And now what it’s doing is
it’s calling the TF Lite converter to convert
that model to TF Lite. And it’s given me
a number here, 620. Can anybody guess
what that number is? It’s not an HTTP code. It’s the size. So that model that I just
built, that’s the size in bytes. So it’s 620 bytes, and I
have a model called TF Lite– sorry, called
model.tflite– that has been saved out at my temp
directory, and it’s right here. So I can now download that
model and use that on Android. Can we switch back to
the slides, please? So now that I have that model,
this is the conversion process that I went through. I built it in TensorFlow,
I saved it out, I used the TF Lite converter,
and then I created a TF Lite model out of that. And here’s the code that
I showed to do that. This QR code is the URI of the
notebook that I just showed. So if you want to play with
the notebook for yourself, it’s on there. So now at runtime,
the diagram becomes we feed data into the model
and we get predictions out. So what I’m going
to do is I’m going to run this model on
mobile using Kotlin. So to be able to do
that, first of all I deploy my model to my Android
device as an asset file. That asset file, again,
can be just treated like any other asset in Android. Now one thing that on Android–
and the number one bug when I work with people
who use TF Lite– one thing is that
when you deploy assets to Android, Android itself
actually zips up those assets. It compresses them,
and TF Lite won’t be able to interpret
it if it’s compressed. So in your AAPT
options, make sure you say do not compress TF Lite. Then, in my build.gradle, I just
set the implementation to give me TensorFlow.Lite– TensorFlow Lite. And now in my code, I’m going
to do a couple of things. So first of all, I have to
create an instance of a TF Lite interpreter and something
called a mapped byte buffer. The mapped byte
buffer, what I’m doing is just reading
the model out of assets so that I can create an
instance of that model. I’m going to set in the
options the number of threads I want the model to run on. And then the interpreter
itself can then run on those. My code itself– and I’m
going to skip over this slide because we’re
running out of time. I’m just going to
show it in the demo. So can we switch to the
demo machine, please? So I have a very
simple application running in the emulator here. This application, all it
has is a single button that says Do Inference,
but when I click that I’m going to go to Breakpoints. And now here’s the breakpoint
with my actual code. So I want to pass the
number 100 into the model and get back like
211 and change. So that’s the conversion of
one temperature to another. So I’ve created an array. And it’s just a simple array
with one element in it. And that element contains
the 100 that I’m passing in. My output value is going
to be a byte buffer because the model is going to
feed me back a stream of bytes. We’re going really
low-level with this. But I know that those bytes
are going to be a float, and a float is four bytes. So I’m just allocating a
byte buffer with four bytes. And then I’m going
to step through this and I’m going to run it. And when I run it, I’m
passing to TensorFlow Lite’s interpreter the input value
of my input value array and then my output
value array where it’s going to write the answer. So it writes the
answer into that, and now I can read
the answer out of that by just calling the get
float of that byte buffer. And we’ll see here in the debug
that it was 211 and change.
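Putting those pieces together, here is a hedged Kotlin sketch of the same flow. The Interpreter, Interpreter.Options, and run() calls, the aaptOptions noCompress setting, and the tensorflow-lite dependency are real TensorFlow Lite and Android Gradle APIs; the asset name, thread count, helper function names, and dependency version are illustrative assumptions.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// build.gradle (illustrative):
//   android { aaptOptions { noCompress "tflite" } }  // don't let AAPT compress the model
//   dependencies { implementation "org.tensorflow:tensorflow-lite:<version>" }

// Memory-map the model out of assets so the Interpreter can read it uncompressed.
fun loadModel(context: Context, assetName: String = "model.tflite"): MappedByteBuffer {
    val fd = context.assets.openFd(assetName)
    FileInputStream(fd.fileDescriptor).channel.use { channel ->
        return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
    }
}

fun convertCelsiusToFahrenheit(context: Context, celsius: Float): Float {
    val options = Interpreter.Options().setNumThreads(2)  // thread count is arbitrary here
    val interpreter = Interpreter(loadModel(context), options)

    val input = arrayOf(floatArrayOf(celsius))             // a simple array with one element
    val output = ByteBuffer.allocateDirect(4)              // one float = four bytes
        .order(ByteOrder.nativeOrder())

    interpreter.run(input, output)                         // writes the answer into the buffer

    return output.getFloat(0)                              // e.g. ~211.3 for an input of 100
}
```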
So it’s a very, very simple example, and it seems a
little bit complex with me using these arrays and
these byte buffers to do it. But the idea is that it’s going
to be exactly the same code pattern that you use for a
far more complex example. We kind of just want to make
it so that it’s consistent regardless of the
example that you use. I also want to show a really
cool new Kotlin language feature here. So if you see like on line– Android Studio numbers
my lines for me, and if you can see on it I
have a line 57 has TFLite.run. And actually in Kotlin,
I can then go, go to 47, and it will rerun it for me. No, not really. I’m seeing who was
paying attention. So if we can switch
back to the slides. So that was like a
very, very quick look at how you would
actually write that code and host that custom model
for yourself in TF Lite. So today, we’ve
actually seen all of these options that are
available for you in Android. And I’d like to invite
Matej back up onto the stage to wrap us up. Thanks, Matej. MATEJ PFAJFAR: Thank
you very much, Laurence. All right, folks,
there you have it. Android really has ML
capability built right into it. And we have great tools to help
you take advantage of that– the People Plus AI
Guidebook, Android APIs, ML Kit’s exciting new features,
AutoML On-Device capability, and if you want to build your
own, TensorFlow is your friend. We’ve also created a new section
on the Android Developers website, which details
our Machine Learning offering across Google. Please check it out at
developer.android.com/ml. Today is just the
beginning for us at I/O, and we have many
great talks coming up, which will go into more
detail about everything that you have heard just now. And finally, if
you have questions, my team and I will be in the
AIML dome after this talk, so please do come and see us. Thank you so much for
being here with us today, and enjoy the rest of I/O. [MUSIC PLAYING]
