
Identify Objects And Faces With Machine Learning AI And Javascript

June 3, 2021

Artificial Intelligence (AI) and Machine Learning (ML) usage is exploding worldwide across all kinds of industries, and it is really easy to bring AI functionality such as object and facial recognition to Sencha Ext JS applications. We are going to discuss how quickly TensorFlow.js can be deployed in a Javascript web framework and show you how you can do it yourself. In this example we are going to implement a really awesome application that can identify objects and faces through your webcam. The application lets you teach the machine to remember an object or face and then tell you what or who it sees on camera.

We will be working based on this Google lab post using TensorFlow.js and, of course, applying Ext JS best practices as usual.

How can I get started with Sencha CMD?

If you still don’t have Sencha CMD, you can download it for free here.

Once you have it installed, you can make sure it is properly installed and configured by running this command in your terminal/shell:

$ sencha

If it prints the Sencha CMD version, you are good to go. Here are more details on how to install, configure and use Sencha CMD, but this article will cover all the important details.

How can I create the Sencha application?

The first thing you want to do is create your project structure. Sencha CMD can do this for you easily: all you need to do is run the command below. If you have any questions, take a look at the bullet points that follow; they explain what each part of the command does and what you will need to change to personalize your application.

sencha -sdk /Users/fabio/sencha-sdks/ext-7.4.0/ generate app modern TeachableMachine ./teachable-machine-extjs
  • /Users/fabio/sencha-sdks/ext-7.4.0/ is where your Ext JS folder is.
  • TeachableMachine is the name of our application and the namespace for our classes.
  • ./teachable-machine-extjs is the path for our project structure and the necessary files.
  • modern is the toolkit for our application.

Make sure the command runs without any errors in the output. If everything runs correctly, you have successfully created your project structure. To confirm, let's run the application with its initial structure. To do this, first navigate to your project folder:

$ cd teachable-machine-extjs/

Then, run the command that starts the development server:

$ sencha app watch

The output of this command will show the URL where your app is available. In this case, it is running on http://localhost:1841/. When you open it in your browser, you will see a screen like this:
TensorFlow.js teachable machine

How can I clean up the Sencha project?

Once you have your basic project running, you can clean it up by removing the files and components that you don’t need.

Use the command shown below to delete the unwanted files. While deleting, keep sencha app watch running in another terminal, because it will update the application automatically:

$ rm app/model/* app/store/* app/view/main/List.*

With that done, let’s clean up our classes in app/view/main. Make sure your three classes look like this:

Main.js:

/**
 * This class is the main view for the application. It is specified in app.js as the
 * "mainView" property. That setting causes an instance of this class to be created and
 * added to the Viewport container.
 */
Ext.define('TeachableMachine.view.main.Main', {
    extend: 'Ext.Panel',
    xtype: 'app-main',
    controller: 'main',
    viewModel: 'main'
});

MainController.js:

/**
 * This class is the controller for the main view for the application. It is specified as
 * the "controller" of the Main view class.
 */
Ext.define('TeachableMachine.view.main.MainController', {
    extend: 'Ext.app.ViewController',
    alias: 'controller.main'
});

MainModel.js:

/**
 * This class is the view model for the Main view of the application.
 */
Ext.define('TeachableMachine.view.main.MainModel', {
    extend: 'Ext.app.ViewModel',
    alias: 'viewmodel.main',
    data: {}
});

After that, test the app again in your browser and check the console to make sure it is running without errors. For now, it should show an empty panel.

How can I add Javascript dependencies?

In the js section of the app.json file, after the app.js entry, add the library dependencies so that they are loaded along with the application files:

"js": [
    ...
    {
        "path": "https://cdn.jsdelivr.net/npm/@tensorflow/tfjs",
        "remote": true
    },
    {
        "path": "https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet",
        "remote": true
    },
    {
        "path": "https://cdn.jsdelivr.net/npm/@tensorflow-models/knn-classifier",
        "remote": true
    }
]
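These remote entries load the browser builds of TensorFlow.js, MobileNet and the KNN classifier, which expose the tf, mobilenet and knnClassifier globals used later in the ViewController. Once the app reloads, a quick way to confirm they are available is to run this in the browser console:

// Each global comes from one of the CDN scripts declared above;
// none of them should be "undefined" once the page has loaded.
console.log(typeof tf, typeof mobilenet, typeof knnClassifier);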

How can I create the Sencha Main View?

OK, let's add some content to the panel. You need a panel for the video component and a toolbar with the actions for the video. Here are all the visual components we will add:

  • a video component to show the webcam;
  • a text field to enter the name of the object/person;
  • a Save button to save the name and start teaching the machine;
  • a progress bar to show the progress while teaching the machine;
  • an Add new item button to teach the machine another object;
  • a Done button to indicate that teaching is finished;
  • and, finally, a progress bar to show the confidence percentage for the object/person on camera.

It's very important to bind your components to the ViewModel data that you will define in the next steps. You also need to create events/handlers on some components to call methods that you will create in your ViewController. This keeps the project structure clean, with the view, ViewModel, and ViewController each playing its own role.
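For example, the text field in the view below stays in sync with the ViewModel and hides itself automatically while teaching is in progress, simply by declaring a bind config (this is an excerpt from the full view that follows):

{
    xtype: 'textfield',
    label: 'Type the object/person name',
    bind: {
        value: '{objectName}',           // two-way bound to the ViewModel data
        hidden: '{teachingProgress > 0}' // auto-hides while teaching is in progress
    }
}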

Here is the full Main View code:

/**
 * This class is the main view for the application. It is specified in app.js as the
 * "mainView" property. That setting causes an instance of this class to be created and
 * added to the Viewport container.
 */
Ext.define('TeachableMachine.view.main.Main', {
    extend: 'Ext.Panel',
    xtype: 'app-main',
    controller: 'main',
    viewModel: 'main',
    titleAlign: 'center',
    title: 'Show an object or someone\'s face on the webcam, type the name for it and click on "SAVE" to start teaching to the machine.',
    bodyPadding: 20,
    layout: {
        type: 'vbox',
        align: 'center'
    },
    items: [{
        xtype: 'panel',
        width: 600,
        height: 400,
        items: {
            xtype: 'video',
            listeners: {
                painted: 'onWebcamPainted'
            }
        },
        bbar: [{
            xtype: 'textfield',
            label: 'Type the object/person name',
            flex: 1,
            bind: {
                value: '{objectName}',
                // hide if teaching is in progress
                hidden: '{teachingProgress > 0}'
            }
        },{
            iconCls: 'x-fa fa-check',
            text: 'Save',
            handler: 'onSaveHandler',
            bind: {
                // hide if teaching is in progress
                hidden: '{teachingProgress > 0}',
                // name is required
                disabled: '{!objectName}'
            }
        },{
            xtype: 'progress',
            flex: 1,
            hidden: true,
            bind: {
                text: 'Teaching object/person "{objectName}" to the machine... {teachingProgress * 100}%',
                value: '{teachingProgress}',
                // show the progress bar only while teaching is in progress
                hidden: '{!(teachingProgress > 0 && teachingProgress < 1)}'
            }
        },{
            iconCls: 'x-fa fa-plus',
            text: 'Add new item',
            hidden: true,
            bind: {
                // hide if teaching is in progress
                hidden: '{teachingProgress < 1}'
            },
            handler: 'addNewItemHandler'
        },{
            xtype: 'spacer',
            hidden: true,
            bind: {
                // hide while teaching is in progress or once we are in identify mode
                hidden: '{teachingProgress < 1 || isDone}'
            }
        },{
            iconCls: 'x-fa fa-check',
            text: 'Done',
            hidden: true,
            bind: {
                // hide while teaching is in progress or once we are in identify mode
                hidden: '{teachingProgress < 1 || isDone}'
            },
            handler: 'onDoneHandler'
        },{
            xtype: 'progress',
            shadow: true,
            flex: 1,
            hidden: true,
            bind: {
                text: 'This is {result.name}! I am {result.confidence * 100}% sure!',
                value: '{result.confidence}',
                // show only once the user decides not to add more objects
                hidden: '{!isDone}'
            }
        }]
    }]
});

How do I define my Data in ViewModel?

Now let's define some data for your view. You will need objectName to hold the current object the machine is learning, teachingProgress to show the progress during learning, an isDone flag to indicate which mode the machine is in (learning or showing results), and result to store the final result shown on the confidence progress bar:

/**
 * This class is the view model for the Main view of the application.
 */
Ext.define('TeachableMachine.view.main.MainModel', {
    extend: 'Ext.app.ViewModel',
    alias: 'viewmodel.main',
    data: {
        objectName: null,
        teachingProgress: 0,
        isDone: false,
        result: {}
    }
});
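The controller reads and writes this data through its ViewModel, and every bound config in the view reacts automatically. For example, inside any MainController method:

const vm = this.getViewModel();

vm.set('teachingProgress', 0.5);   // the bound progress bar now shows 50%
vm.set('isDone', true);            // reveals the confidence progress bar
console.log(vm.get('objectName')); // whatever was typed into the bound text field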

How do I understand the Logic for the Methods?

The most important code, where you create the actions, is in the MainController.

As soon as the video element is ready, we create instances of our objects and call the initialization methods of the third-party libraries, including showing the webcam on the video element.

After that, the library keeps monitoring the webcam, comparing what it sees against the saved examples and pushing the results to the view through the ViewModel.

Initially there is nothing to compare against, so there is also a method that saves the object/person examples as soon as the user clicks Save.

For more details, please read the comments in the code.

How do I implement the Logic on a ViewController?

With that done, you can get started on the ViewController logic. First, you need to define the onWebcamPainted method, which kicks off the external library setup and webcam monitoring in the initLibs method.

async initLibs(videoCmp) {
    const
        me = this,
        mainView = me.getView(),
        {
            media,
            ghost
        } = videoCmp;

    // force to remove video controls
    media.dom.removeAttribute('controls');  

    mainView.mask('Please wait... Make sure the browser is not blocking the webcam');

    me.knnClassifier = knnClassifier.create();
    // Load the model
    me.mobilenet = await mobilenet.load();
        
    // Create an object from Tensorflow.js data API which could capture image from the web camera as Tensor.
    me.webcam = await tf.data.webcam(media.dom);
    
    // Force to start the webcam video without initial ghost covering the video
    media.show();
    ghost.hide();
    videoCmp.play();

    // all ready, we can remove loading mask
    mainView.unmask();

    // monitor the webcam
    while (true) {
        // if there are results
        if (me.knnClassifier.getNumClasses() > 0) {
            const
                img = await me.webcam.capture(),
                // Get the activation from mobilenet from the webcam.
                activation = me.mobilenet.infer(img, 'conv_preds'),
                // Get the most likely class and confidence from the classifier module.
                { label, confidences } = await me.knnClassifier.predictClass(activation);

            // save the result to the viewModel to update the view
            me.getViewModel().set('result', {
                name: label,
                confidence: Ext.Number.roundToPrecision(confidences[label], 2)
            });

            // Dispose the tensor to release the memory.
            img.dispose();
        }

        // Wait for the next animation frame before checking again.
        await tf.nextFrame();
    }
}

The addObjectExample method stores an example of the object/person in the classifier; these examples are what the library later compares against to compute the confidence when the webcam shows that object:

async addObjectExample(id) {
    // Capture an image from the web camera.
    const
        me = this,
        img = await me.webcam.capture(),
        // Get the intermediate activation of MobileNet 'conv_preds' and pass that to the KNN classifier.
        activation = me.mobilenet.infer(img, 'conv_preds');
    
    // Pass the intermediate activation to the classifier.
    me.knnClassifier.addExample(activation, id);
    
    // Dispose the tensor to release the memory.
    img.dispose();
}
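Disposing the captured image matters because every webcam.capture() call allocates a new tensor. If you want to check that nothing leaks while teaching, TensorFlow.js provides tf.memory(), which reports the number of live tensors; for example, in the browser console:

// numTensors should stay roughly constant between teaching rounds
// if every captured image is disposed.
console.log(tf.memory().numTensors);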

The onSaveHandler calls that method repeatedly to create the object/person examples and teach the machine; it runs in an interval with a small delay so the progress bar animates:

onSaveHandler() {
    const
        me = this,
        vm = me.getViewModel();

    let progress;

    me._interval = Ext.interval(async () => {
        // if for any reason the view was destroyed, let's stop the progress interval
        if (me.getView().isDestroyed) {
            me.resetProgressInterval();
        }
        else {
            // increment progress to show on the bar
            progress = vm.get('teachingProgress');
            progress += 0.1;

            // if progress is 100%, stop interval and leave from the function
            if (progress > 1) {
                me.resetProgressInterval();
                return;
            }

            // show the new progress on the bar
            vm.set('teachingProgress', Ext.Number.roundToPrecision(progress, 2));

            // add object sample and save by its name
            await me.addObjectExample(vm.get('objectName'));
        }
    }, 100); // 100 ms to show some effect
}
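The handler above also calls resetProgressInterval, which does not appear in the listings of this article. A minimal sketch of what it could look like (an assumed helper, not taken from the original source) simply clears the interval created in onSaveHandler:

// Assumed helper: stop the teaching progress interval.
resetProgressInterval() {
    if (this._interval) {
        // Ext.interval returns a native interval id, so clearInterval works here
        clearInterval(this._interval);
        this._interval = null;
    }
}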

Here is the final MainController code:

/**
 * This class is the controller for the main view for the application. It is specified as
 * the "controller" of the Main view class.
 */
Ext.define('TeachableMachine.view.main.MainController', {
    extend: 'Ext.app.ViewController',
    alias: 'controller.main',

    onWebcamPainted(videoCmp) {
        // init things when video is totally rendered
        this.initLibs(videoCmp);
    },

    async initLibs(videoCmp) {
        const
            me = this,
            mainView = me.getView(),
            {
                media,
                ghost
            } = videoCmp;

        // force to remove video controls
        media.dom.removeAttribute('controls');  

        mainView.mask('Please wait... Make sure the browser is not blocking the webcam');

        me.knnClassifier = knnClassifier.create();
        // Load the model
        me.mobilenet = await mobilenet.load();
            
        // Create an object from Tensorflow.js data API which could capture image from the web camera as Tensor.
        me.webcam = await tf.data.webcam(media.dom);
        
        // Force to start the webcam video without initial ghost covering the video
        media.show();
        ghost.hide();
        videoCmp.play();

        // all ready, we can remove loading mask
        mainView.unmask();

        // monitor the webcam
        while (true) {
            // if there are results
            if (me.knnClassifier.getNumClasses() > 0) {
                const
                    img = await me.webcam.capture(),
                    // Get the activation from mobilenet from the webcam.
                    activation = me.mobilenet.infer(img, 'conv_preds'),
                    // Get the most likely class and confidence from the classifier module.
                    { label, confidences } = await me.knnClassifier.predictClass(activation);

                // save the result to the viewModel to update the view
                me.getViewModel().set('result', {
                    name: label,
                    confidence: Ext.Number.roundToPrecision(confidences[label], 2)
                });

                // Dispose the tensor to release the memory.
                img.dispose();
            }

            // Wait for the next animation frame before checking again.
            await tf.nextFrame();
        }
    },

    // ...plus the addObjectExample and onSaveHandler methods shown above and the
    // remaining handlers referenced by the view.
});
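The Main view also references addNewItemHandler and onDoneHandler, which fall outside the listing above. Minimal sketches, assuming they only need to drive the ViewModel flags that the view binds to, could look like this:

// Assumed implementations based on the view bindings.

// Clear the current name and progress so a new object/person can be taught.
addNewItemHandler() {
    this.getViewModel().set({
        objectName: null,
        teachingProgress: 0
    });
},

// Switch to "identify" mode: the confidence progress bar is bound to isDone.
onDoneHandler() {
    this.getViewModel().set('isDone', true);
}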

How can I run the Sencha AI Application?

Once you have finished your code and saved all changes, access the app on http://localhost:1841/ to test. You can also test it live here.

In the center of the screen you can see your webcam (make sure your browser is not blocking it). In the field below the camera, type the name of the person/object that you will show to the camera. Keep your face/the object in the camera's field of vision and click Save:

Naming the water bottle object

Teaching the machine a water bottle

Add a new object to the camera, give it a name, and click Save:

Naming the smartphone object

Teaching the machine a smartphone

How do I test the AI Application?

Place each object in front of the camera to see the result:

Water Bottle identified by the machine!

Smartphone identified by the machine!

Where can I get the source code for identifying objects and faces in Javascript?

Now you can extend the UI to show multiple results on the same screen, with a differently colored progress bar for each object. Running the demo application, you can also smile at the camera and save it as "John is happy"; the machine will learn your on-camera expressions and tell you whether you are happy or not. Pretty cool!
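As a starting point for showing multiple results, the confidences object returned by knnClassifier.predictClass() already contains one entry per saved label, so a rough sketch (the resultsPanel container here is hypothetical, not part of this project) could render one progress bar per entry:

// Hypothetical sketch: one progress bar per saved object/person.
// 'confidences' comes from knnClassifier.predictClass(); 'resultsPanel' is an assumed container.
Object.entries(confidences).forEach(([name, confidence]) => {
    resultsPanel.add({
        xtype: 'progress',
        text: name + ': ' + Math.round(confidence * 100) + '%',
        value: confidence
    });
});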

Download the full source code for the Javascript object and faces identification project.