Column: 机器学习研究会 (Machine Learning Research Society)
The Machine Learning Research Society is a student organization under the Big Data and Machine Learning Innovation Center at Peking University, built as a platform for machine learning practitioners to exchange ideas. Besides sharing timely news from the field, the society also hosts talks by industry and academic leaders, salon-style sharing sessions with senior researchers, real-data innovation competitions, and other activities.

[Recommended] CNN Gesture Recognition with Keras/Theano/OpenCV

机器学习研究会 · WeChat Official Account · AI · 2017-07-11 23:35

Main text




Reposted from: 爱可可-爱生活

CNNGestureRecognizer ver 2.0

Gesture recognition via a convolutional neural network (CNN), implemented in Keras + Theano + OpenCV

Key requirements: Python 2.7.13, OpenCV 2.4.8, Keras 2.0.2, Theano 0.9.0

Suggestion: It is better to download Anaconda, since it takes care of most of the other packages and makes it easier to set up a virtual workspace when working with multiple versions of key packages such as Python and OpenCV.


Repo contents

  • trackgesture.py : The main script launcher. It contains all the UI-option code and the OpenCV code for capturing camera frames, and it internally calls the interfaces in gestureCNN.py (a rough capture-loop sketch follows this list).

  • gestureCNN.py : Holds all the CNN-specific code: creating the CNN model, loading the weight file (if the model is pretrained), training the model on the image samples in ./imgfolder_b, and visualizing the feature maps at different NN layers (of the pretrained model) for a given input image in the ./imgs folder (a model sketch follows this list).

  • imgfolder_b : Contains all 4,015 gesture images I took in order to train the model.

  • imgs : An optional folder with a few sample images that can be used to visualize the feature maps at different layers. These are just a few samples taken from imgfolder_b.

  • ori_4015imgs_acc.png : A plot of training accuracy vs. validation accuracy after I trained the model.

  • ori_4015imgs_loss.png : A plot of training loss vs. validation loss after training.
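
For orientation, here is a minimal sketch of what a Keras 2 model for these five gestures could look like. The layer sizes, input resolution, and optimizer are assumptions for illustration; the actual architecture is defined in gestureCNN.py.

# Minimal sketch of a 5-class gesture CNN in Keras 2 (illustrative, not the repo's model).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def build_model(img_rows=200, img_cols=200, num_classes=5):
    # input_shape assumes 'channels_last'; with Theano's 'channels_first'
    # ordering it would be (1, img_rows, img_cols) instead.
    model = Sequential()
    # Two convolution/pooling stages to extract hand-shape features
    model.add(Conv2D(32, (3, 3), activation='relu',
                     input_shape=(img_rows, img_cols, 1)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # Classifier head: one hidden layer, then a softmax over the 5 gestures
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta', metrics=['accuracy'])
    return model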

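And a rough sketch of the kind of OpenCV capture loop trackgesture.py implements: read webcam frames, mark a region of interest (ROI), and convert it to a grayscale patch for the CNN. The window names, ROI position, and key handling here are illustrative assumptions, not the repo's own code.

# Webcam capture loop with a fixed ROI (illustrative sketch).
import cv2

def capture_loop(roi=(300, 50, 200, 200)):
    x, y, w, h = roi
    cap = cv2.VideoCapture(0)          # default camera
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # Draw the ROI box and extract the grayscale hand patch
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
        hand = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        cv2.imshow('CNNGestureRecognizer', frame)
        cv2.imshow('ROI', hand)
        if cv2.waitKey(10) & 0xFF == 27:   # ESC quits
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    capture_loop()
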

Usage

$ KERAS_BACKEND=theano python trackgesture.py 

Setting KERAS_BACKEND switches the backend to Theano, so if you have already configured that in keras.json there is no need to do it here. If TensorFlow is your default backend, however, this step is required.
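
If you prefer to do this from code rather than the shell, the backend can also be selected by setting the environment variable before Keras is first imported (an alternative added here for illustration, not something the repo requires):

# Set the backend in Python; must happen before the first `import keras`.
import os
os.environ['KERAS_BACKEND'] = 'theano'
import keras  # should now report "Using Theano backend."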


Features

This application comes with a CNN model that recognizes up to 5 pretrained gestures:

  • OK

  • PEACE

  • STOP

  • PUNCH

  • NOTHING (i.e., when none of the above gestures is shown)


This application provides the following functionality:

  • Prediction : Lets the app guess the user's gesture against the pretrained gestures. The app can dump the prediction data to the console or directly to a JSON file, which can be used to plot a real-time prediction bar chart (you can use my other script: https://github.com/asingh33/LivePlot); a dump sketch follows this list.

  • New Training : Lets the user retrain the NN model. The user can change the model architecture or add/remove gestures. The app has built-in options that let the user create new image samples of user-defined gestures if required.

  • Visualization : Lets the user see the feature maps of different NN layers for a given input gesture image. It is interesting to see how the NN works and what it learns (a visualization sketch follows this list).
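
As a rough illustration of the prediction dump, the sketch below maps the model's softmax output to the five gesture labels and writes it to a JSON file that a plotting script such as LivePlot could poll. The function name, file name, and preprocessing are hypothetical, not taken from the repo.

# Dump one prediction as {label: probability} to a JSON file (illustrative sketch).
import json
import numpy as np

GESTURES = ['OK', 'PEACE', 'STOP', 'PUNCH', 'NOTHING']

def dump_prediction(model, roi_gray, path='prediction.json'):
    # Model expects a batch of normalized grayscale patches
    # (assumes 'channels_last' data format).
    x = roi_gray.astype('float32')[np.newaxis, ..., np.newaxis] / 255.0
    probs = model.predict(x)[0]
    record = {label: float(p) for label, p in zip(GESTURES, probs)}
    with open(path, 'w') as f:
        json.dump(record, f)
    return max(record, key=record.get)   # best-guess gesture name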

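And a sketch of the feature-map visualization idea: build a helper Model that stops at an intermediate layer and run one preprocessed image through it. The layer index, number of maps, and plotting details are assumptions for illustration.

# Show feature maps from one intermediate layer (illustrative sketch).
from keras.models import Model
import matplotlib.pyplot as plt

def show_feature_maps(model, img_batch, layer_index=1, n_maps=8):
    # Helper model whose output is the chosen intermediate layer
    intermediate = Model(inputs=model.input,
                         outputs=model.layers[layer_index].output)
    maps = intermediate.predict(img_batch)[0]   # (H, W, channels), channels_last
    for i in range(min(n_maps, maps.shape[-1])):
        plt.subplot(2, (n_maps + 1) // 2, i + 1)
        plt.imshow(maps[..., i], cmap='gray')
        plt.axis('off')
    plt.show()
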

Demo

YouTube link - https://www.youtube.com/watch?v=CMs5cn65YK8


Link:

https://github.com/asingh33/CNNGestureRecognizer



Original post:

https://m.weibo.cn/1402400261/4128216208041805
