How to deploy a machine learning classifier to Heroku
Hi everybody, we are going to deploy an ML model trained using fastai to Heroku, with React.js as the frontend!
With this you could make a food classifier, a car classifier, you name it. You can also modify the app to use whatever model you want; of course you'll have to change a couple of things, but this guide will give you the right start for making ML apps.
You can see a preview of what the app looks like here!
The first thing you need to do is fork this repo to your GitHub account.
Before deploying
We need to customize the repo for your own classifier. If you don't want to do this, the repo will keep the default 100 labels that I trained using Unsplash images.
Follow this video to train your own model.
You should have an exported .pkl file; upload it to Google Drive or Dropbox.
The link must be a direct download link, so use these generators to get one:
Google Drive
Dropbox
Here is the video for deploying the app:
*Coming soon*
Customize the app for your model
Edit the file `server.py` inside the app directory and update the `export_file_url` variable with the URL of the model that you exported above.
In the same file, update the `classes` variable with the list of classes you expect from your model; you can get them from `data.classes`, where `data` is the name of your ImageDataBunch.
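For example, in your training notebook (the labels below are just an illustration):

```python
# After building your ImageDataBunch in the training notebook,
# print the class list and copy it into server.py:
print(data.classes)  # e.g. ['apple', 'banana', 'carrot']
```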
Put your classes in the file `app/static/data.js`; this is used in the frontend.
Deploy to Heroku
- Create a Heroku account
- Go to your dashboard and add a new app
- In the Deploy tab, select GitHub in the Deployment method section and connect your account
- Search and select the repo to deploy!
- Hit Enable Automatic Deploys, then hit Deploy!
That's it! You should now have the app running. If you want to know how it works, read on.
How it all works
Now we are going to see how everything works, from the server to the React.js frontend.
The backend part
This is the `server.py` file; it contains all the backend functionality of the app.
Here, the `Port` variable holds the port that Heroku sets in the `PORT` environment variable; if `PORT` is not set, it defaults to 5000.
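A minimal sketch of what that looks like:

```python
import os

# Heroku tells the app which port to bind to through the PORT
# environment variable; fall back to 5000 for local development.
Port = int(os.environ.get('PORT', 5000))
```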
Then we have the `export_file_url`, `export_file_name`, and `classes` variables, which you have to change for your own model.
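They look roughly like this; the URL and label list below are placeholders that you replace with your own:

```python
# Placeholder values: swap in your own direct download link and labels.
export_file_url = 'https://www.dropbox.com/s/your-file-id/export.pkl?dl=1'
export_file_name = 'export.pkl'
classes = ['apple', 'banana', 'carrot']  # must match data.classes
```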
We are going to use Starlette as our web framework; Starlette is asynchronous, so we can use async and await in our Python code and handle multiple concurrent requests.
We add the CORS middleware. CORS stands for Cross-Origin Resource Sharing, and it allows resources to be shared between different origins; you can learn more here.
Then we mount /static and /prod-view so the client can access those directories as static files.
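Putting those three steps together, the setup typically looks like this (a sketch based on the fastai deployment template; the directory paths are assumptions about this repo):

```python
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware
from starlette.staticfiles import StaticFiles

app = Starlette()
# Allow cross-origin requests so a frontend served elsewhere can call the API.
app.add_middleware(CORSMiddleware, allow_origins=['*'],
                   allow_headers=['X-Requested-With', 'Content-Type'])
# Serve the frontend assets directly from disk.
app.mount('/static', StaticFiles(directory='app/static'))
app.mount('/prod-view', StaticFiles(directory='app/prod-view'))
```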
Helper functions
With this function we download the model file asynchronously: first we check whether the file already exists, and if it doesn't, we download it with the aiohttp module. aiohttp is an asynchronous HTTP client/server library for asyncio; check out the aiohttp docs for more info.
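A sketch of such a download helper, close to the one in the fastai deployment template:

```python
import aiohttp
from pathlib import Path

async def download_file(url, dest):
    # Skip the download if the model file is already on disk.
    if Path(dest).exists():
        return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            with open(dest, 'wb') as f:
                f.write(data)
```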
With this function we make sure the learner is downloaded, and we return it loaded. We use await on the download_file call because it is an async function, and with try/except we can catch errors.
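Roughly like this, assuming fastai v1 (as in the course video) and the variables defined above:

```python
import sys
from pathlib import Path
from fastai.basic_train import load_learner  # fastai v1

path = Path(__file__).parent

async def setup_learner():
    # Make sure the exported .pkl is on disk, then load it.
    await download_file(export_file_url, path / export_file_name)
    try:
        return load_learner(path, export_file_name)
    except RuntimeError as e:
        # Typical cause: a GPU-exported model loaded on a CPU-only dyno.
        print(e)
        sys.exit(1)
```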
This function sorts the probabilities in descending order and creates a list of tuples of the form [probability, label id].
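One way to write it:

```python
def sorted_prob(probs):
    # Pair each probability with its label index, then sort so the most
    # confident predictions come first: [(probability, label_id), ...]
    return sorted(((float(p), i) for i, p in enumerate(probs)),
                  key=lambda pair: pair[0], reverse=True)
```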
With this we create an asyncio task (ensure_future is the same as create_task), and we make sure the task has completed before continuing.
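A sketch of this startup pattern:

```python
import asyncio

# Schedule setup_learner on the event loop and block until the model is
# loaded, so the server never answers requests without a learner.
loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(setup_learner())]
learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
```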
Now the routes
Here we use Python decorators, so this is the same as defining `homepage` and then calling `app.add_route('/', homepage)`, or `app.add_route('/analyze', analyze, methods=['POST'])`; you can see more about Starlette routing in their docs.
In the default route we open the `index.html` file.
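A sketch of the homepage route; the exact location of index.html inside the repo is an assumption here:

```python
from starlette.responses import HTMLResponse

@app.route('/')
async def homepage(request):
    # Serve the frontend entry point.
    html = path / 'static' / 'index.html'  # assumed location
    return HTMLResponse(html.open().read())
```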
Here we await the request's form data and then open the image with `open_image`. This is a fastai function that uses PIL to load the image; here we pass it bytes, but we could give it a path to a file too (see the `open_image` docs). Then we run the prediction; this outputs a tuple whose third item is the list of probabilities, which we pass to `sorted_prob` to sort them. Finally we build a JSON response with the results.
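Putting that together, a sketch of the analyze route; the 'file' form-field name is an assumption about what the frontend sends:

```python
from io import BytesIO

from fastai.vision import open_image
from starlette.responses import JSONResponse

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    data = await request.form()
    img_bytes = await (data['file'].read())  # 'file' field name assumed
    img = open_image(BytesIO(img_bytes))
    # learn.predict returns (category, index, tensor of probabilities);
    # the third item is the one we sort.
    prediction = learn.predict(img)
    return JSONResponse({'result': sorted_prob(prediction[2])})
```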
Here we do something similar to the previous route, but first we fetch a random image from Unsplash using their API.
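A sketch of how that route could look, reusing the imports and helpers above; the route name, the UNSPLASH_KEY variable, and the response fields follow the public Unsplash API but are assumptions about this particular repo:

```python
@app.route('/analyze-random', methods=['GET'])  # route name assumed
async def analyze_random(request):
    async with aiohttp.ClientSession() as session:
        # Ask Unsplash for one random photo (client_id is your API key).
        async with session.get('https://api.unsplash.com/photos/random',
                               params={'client_id': UNSPLASH_KEY}) as resp:
            photo = await resp.json()
        # Download the image itself.
        async with session.get(photo['urls']['small']) as resp:
            img_bytes = await resp.read()
    img = open_image(BytesIO(img_bytes))
    return JSONResponse({'image': photo['urls']['small'],
                         'result': sorted_prob(learn.predict(img)[2])})
```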
Finally we tell uvicorn to run the server on the port that Heroku set for us, via the `Port` variable:
```python
if __name__ == '__main__':
    if 'serve' in sys.argv:
        uvicorn.run(app=app, host='0.0.0.0', port=Port, log_level="info")
```
The frontend part
In the body of our `index.html` we have the main app div, the React library, and the component imports.
In our `index.js` we have a Header component and the Prediction component, where all the functionality of the app resides.
Now we are going to see the components of our app:
The Header component is just the title of the app.
Most of the application is in the Prediction component:
First we are going to see what we have in the state of this component.
Now we are going to see some helper functions and what they do.
If you are not familiar with the `() => {}` notation, these are called arrow functions; they are part of ES6, and Babel compiles them down to normal functions.
This function clicks the file input and shows the system's file picker; if the app is already analyzing, it shows a notification instead.
This function shows the selected file and its name. It is called when the file input fires the `onChange` event, and from that event we retrieve the file data.
We use the FileReader API to read the user's file data, and then store that data in the `imgPickedRaw` state variable.
Here we parse the results and take the top 5 predictions (they are already sorted): we loop over the first 5 results, look up the corresponding class, and give the percentage the right format.
Here is one of the important functions: the one that sends the file to be analyzed, using the `XMLHttpRequest` API. First we make sure a file is selected; otherwise we notify the user. Then we check whether we are still analyzing, and if we are, we notify as well. If both checks pass, we make a POST request to `/analyze`, appending the selected file. One important part is the `.onload` handler, in which we call `parseResults` and update the state to show the results.
This one does something similar, but here we make a GET request and the server does everything for us. We use `fetch`, chaining `.then` since fetch returns a promise; we also check whether we are already fetching a random image and display a notification if we are.
Now we are going to look at the `render` method. Here we have the input that holds the selected file (its `onChange` event triggers the `showPicked` function), the button to select an image (which fires the `showPicker` function), the buttons to analyze and to get random predictions, and then two Image components that show the results when they arrive.
At last we are going to see the Image component:
The Image component receives `imgPicked`, a bool that shows/hides the component; `imgPickedRaw`, the URL or base64-encoded bytes of the image; and `labelresult`, an array of tuples that we map over to generate the list of results.
That's all! I hope you find it useful. If you have any questions or find an error, please let me know; you can tweet me @ramgendeploy.
Finally, if you want to improve on this and practice, here is a suggestion:
Make the app able to request multiple random images, up to a limit of 10 for example; React will render multiple Image components when the results arrive.
To do this you could (see the sketch after this list):
- Send a number, capped at 10, as a URL variable, e.g. `quantity`
- Get that variable in Starlette with `request.path_params['quantity']`
- Request multiple images from Unsplash and classify them
- Put the results in a list and send it back with a JSONResponse
- Then, in React, map the array and render an Image component for each result, similar to how the Image component renders multiple li elements
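A rough sketch of the server side of this exercise, reusing the pieces above (the route path and response shape are my own choices):

```python
@app.route('/analyze-random/{quantity:int}', methods=['GET'])
async def analyze_random_batch(request):
    # Cap the number of images at 10 so one request can't hog the server.
    quantity = min(request.path_params['quantity'], 10)
    results = []
    async with aiohttp.ClientSession() as session:
        for _ in range(quantity):
            async with session.get('https://api.unsplash.com/photos/random',
                                   params={'client_id': UNSPLASH_KEY}) as resp:
                photo = await resp.json()
            async with session.get(photo['urls']['small']) as resp:
                img_bytes = await resp.read()
            img = open_image(BytesIO(img_bytes))
            results.append({'image': photo['urls']['small'],
                            'result': sorted_prob(learn.predict(img)[2])})
    return JSONResponse({'results': results})
```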
I hope you have fun!
This guide is partially based on https://course.fast.ai/deployment_render.html, which covers deploying to Render.