International Journal of Performability Engineering, 2019, 15(1): 261-269 doi: 10.23940/ijpe.19.01.p26.261269

Smart Home based on Kinect Gesture Recognition Technology

Yanfei Peng a, Jianjun Peng a,*, Jiping Li b, Chunlong Yao a, and Xiuying Shi a

a School of Information Science and Engineering, Dalian Polytechnic University, Dalian, 116034, China

b College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510642, China

* Corresponding author. E-mail address: pengjj@dlpu.edu.cn

Accepted: 2018-12-26   Online: 2019-01-01

About authors

Yanfei Peng is a master's student in the School of Information Science and Engineering, Dalian Polytechnic University. His research interests include 3D simulation and virtual reality.

Jianjun Peng is a lecturer of Computer Science and Technology and the director of the Geometric & Visual Computing Lab at Dalian Polytechnic University in China. She received her B.E. and Ph.D. in Computer Science and Technology from the Shenyang Institute of Computing Technology, Chinese Academy of Sciences (China), in 2004 and 2012, respectively. Her current research interests include virtual reality, augmented reality, motion capture, 3D reconstruction, and embedded systems. E-mail: pengjj@dlpu.edu.cn.

Jiping Li has been a professor of Software Engineering at South China Agricultural University since October 1, 2016. Prior to that, he worked for 11 years at Dalian Polytechnic University as a professor, 2 years at Tianjin University as a postdoctoral researcher, and 3 years at the Tokyo Institute of Technology as a researcher of JST (Japan Science and Technology Agency). He received his B.E. and Ph.D. degrees from Harbin Institute of Technology in 1995 and 1999, respectively. He has research and development experience in virtual manufacturing, industrial and medical robots, CAD/CAE, 3D reconstruction, serious games, and VR & AR. His current research interest is visual and cognitive computing, which has promising applications in visual perception, intelligent cognition, robots, and automatic instruments.

Chunlong Yao was born in Heilongjiang province, China, in 1971. He received his B.E. degree in computer and its application from Northeast Heavy Machinery Institute, Qiqihar, China, in 1994; his M.E. degree in computer application technology from Northeast Heavy Machinery Institute, Qiqihar, China, in 1997; and his Ph.D. degree in computer software and theory from Harbin Institute of Technology, Harbin, China, in 2005. He is currently a professor and supervisor of postgraduate students at Dalian Polytechnic University, Dalian, China. His current research interests include databases and data mining, and intelligent information systems.

Xiuying Shi is a master's student in the School of Information Science and Engineering, Dalian Polytechnic University. Her research interests include 3D simulation and virtual reality.

Abstract

To satisfy people's needs for an intelligent home environment, this paper proposes a smart home control system based on gesture recognition technology. Kinect is used to obtain depth data, skeleton data, and 3D point clouds, from which human gestures are recognized. An Arduino microprocessor processes the received data to realize intelligent control of home appliances. The body mass index (BMI) is generated from the acquired biological characteristics to assess the user's physical condition. The experimental results show that the system can effectively control household appliances and accurately measure human biological characteristics by receiving and recognizing human body postures, which demonstrates that the system is both innovative and practical.

Keywords: Kinect; Arduino microprocessor; depth data; skeletal tracking; biometric identification



1. Introduction

With the development of science and technology and the progress of society, people's demands on quality of life keep rising, especially for the smart home. People want more convenient, more relaxed, and more intelligent control. In recent years, with the rapid development of sensor technology, image acquisition equipment, and computer technology, gesture recognition technology based on depth sensors has gradually entered people's field of vision. In November 2010, Microsoft's Kinect depth sensor provided hardware support for the design and development of smart homes based on gesture recognition technology [1]. Kinect provides development platforms such as the Kinect SDK and the OpenNI SDK. OpenNI is a multilingual, cross-platform framework that defines the APIs for developing and implementing natural interaction programs and builds a bridge between sensors and middleware. NITE is middleware developed by PrimeSense on top of the OpenNI framework, sitting between the system software and the application software; it includes development kits for analyzing depth image data, such as Simple viewer and Closest point viewer [2]. Therefore, while the system is running, OpenNI establishes a communication standard between the different physical components, and NITE provides a development kit for analyzing and applying depth image data, giving Kinect the potential to be applied to gesture recognition technology.

A smart home is a highly efficient, comfortable, safe, convenient, and environmentally friendly living environment. Taking the house as a platform, it combines building, network communication, information appliances, and equipment automation. The smart home originated in intelligent buildings; its development can be divided into three stages. 1) In the original stage, intelligent household electronic appliances were applied in buildings, which improved the practicality and convenience of home appliances. 2) With the development of communication technology and automation, the home system realized network connection, remote control, and automation of household equipment. Literature [3] designed and implemented a smart home control system based on embedded Linux; the system integrates an independent home server on an embedded system board for centralized monitoring and control of household equipment. Literature [4] connects all kinds of smart devices in the home with wireless Bluetooth technology, and users can directly control the smart devices in real time with a mobile phone client. 3) With the development of the Internet of Things and the wide application of wireless sensor networks and information equipment, the smart home formed a network system with a large number of sensor networks and computing units, realizing monitoring, access control, information services, and intelligent control of the household system [5]. Literature [6] designed a sensor network platform to accurately detect the activity of people in indoor spaces and then control the equipment according to the occupancy situation in the building [7].

To achieve a better somatosensory experience in the smart home, this paper studies a smart home control system based on Kinect and Arduino. The Arduino is adapted into the remote control, and the NITE middleware performs gesture tracking and recognition. The collected data is then sent to the Arduino through wireless communication to realize action control of the remote control and the lighting. NITE middleware is also used to perform skeleton tracking and obtain the user's limb measurements to identify the user, and the user's height obtained from the 3D point cloud data together with the weight obtained from the gravity sensor is used to measure the user's BMI. The system is designed to overcome the stiff human-machine interaction and single control object of conventional equipment, achieve more natural human-computer interaction, and increase the "smart" component of the smart home.

2. System Structure and Working Principle

Kinect is used to acquire the human body data and convert it into control commands. The computer sends the commands to the Arduino microprocessor through the XBee wireless communication module, and the Arduino processes the received data to control the remote controller, the lights, and the gravity sensor. The system consists of a host and a slave machine, and the overall structure of the system is shown in Figure 1. The host collects human motion information, after which image processing is carried out to identify the human body features and movements; the human motion information is then wirelessly transmitted to the slave machine.


Figure 1.   General structure diagram of the system


The host system is composed of a PC, a Kinect sensor, and an XBee wireless communication module. The slave control system is composed of an Arduino microprocessor, an XBee wireless communication module, a TV remote controller, LED lamps (red, green, blue, and white), a gravity sensor, and so on. The slave machine processes the information in real time and controls each module to complete the corresponding instruction.

3. System Hardware Design

3.1. Kinect Sensor

The body sensing device used in this system is the motion-sensing Kinect produced by Microsoft. The Kinect is composed of color cameras, infrared devices, a microphone array, logic circuits, and motors [8]. Kinect has three cameras: the middle one is an RGB color camera, and the left and right sides form a 3D depth sensor made up of an infrared transmitter and a CMOS infrared camera. They can acquire color and depth images at the same time. The microphone array consists of four built-in microphones. Kinect was the first system in the world to combine RGB images and depth images at a reasonable price. Unlike 2D cameras, the low-cost Kinect can capture human skeleton information and scene depth images, generate point cloud data, and perform voice recognition. The Kinect point cloud image uses color spectra to detect the human body and can effectively protect privacy. The Kinect sensor is equipped with an active light source that is independent of external lighting conditions; because Kinect uses infrared light, it can extract depth images in dark rooms [9], as shown in Figure 2.


Figure 2.   From left to right: Kinect infrared transmitter, RGB color camera, LED and infrared projector


The Kinect's core technology is the ability to obtain depth data of the target and to perform skeletal tracking. At present, it can track 20 skeletal points in the human body and can locate the skeletal positions of up to six people. Using this technology, many researchers have carried out studies of human body recognition [10-11], gesture recognition [12-14], and face recognition [15].

3.2. Arduino Microprocessor

The acquisition and control module of the system adopts the Arduino as its core and realizes the functions of the design by carrying different sensors and controllers. Arduino mainly consists of two parts: the hardware part is the Arduino circuit board, and the software part is the Arduino IDE, a program development environment that runs on a computer. Arduino is an open-source hardware development platform with an 8-bit ATmega328 microprocessor as its core, providing 14 digital input/output pins and 6 analog input pins, as shown in Figure 3 (a minimal pin-usage sketch is given after the figure). It supports USB data transmission, and users can connect different electronic devices to the I/O ports [16-18]. The Arduino collects weight data by carrying a gravity sensor, and it feeds back to and influences the environment by controlling the lights and the remote control.


Figure 3.   Arduino digital pins (top) and analog pins (bottom)
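As a minimal illustration of how devices are attached to these pins, the following Arduino sketch reads one analog input and drives one digital output while reporting the reading over the USB serial link. It is only a hedged example: the pin numbers, the sensor on A0, and the threshold are placeholders, not the wiring used in the paper.

```cpp
// Minimal Arduino (C++) sketch: one digital output pin and one analog input pin.
const int LED_PIN = 9;        // example digital output pin driving one LED
const int SENSOR_PIN = A0;    // example analog input pin

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);         // USB serial link to the PC
}

void loop() {
  int raw = analogRead(SENSOR_PIN);               // 0..1023 reading from the analog pin
  digitalWrite(LED_PIN, raw > 512 ? HIGH : LOW);  // simple threshold demonstration
  Serial.println(raw);                            // report the reading over USB
  delay(100);
}
```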


3.3. XBee Wireless Communication Module

When human movement is detected, the computer generates instructions and transfers them to the controller by wireless communication. Wireless communication is the most important part of the smart home system. The XBee adopted by this system is a wireless communication module produced by the Digi company. It is based on IEEE 802.15.4, and its working voltage is 3 V. Only when the XBee modules are connected to the XBee Explorer USB (transmitter) and the XBee Explorer (receiver) can the device be started [19]. We have also created a wireless connection that communicates with the Arduino through the module's built-in serial communication protocol, as shown in Figure 4; a minimal sketch of the slave-side command handling is given after the figure.


Figure 4.   Connecting the XBee Explorer to the Arduino
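The sketch below shows one way the slave could dispatch commands received over this link, assuming the XBee pair runs in transparent mode so that bytes written by the PC simply arrive on the Arduino's hardware serial port. The single-character command codes and the pin numbers are hypothetical, not taken from the paper.

```cpp
// Slave-side command dispatch over the XBee serial link (Arduino C++ sketch).
const int CH_UP_PIN = 2;     // assumed wiring to the remote control's "channel +" pad
const int CH_DOWN_PIN = 3;   // assumed wiring to the remote control's "channel -" pad

void pressKey(int pin) {
  digitalWrite(pin, HIGH);   // simulate pressing the remote-control button
  delay(200);
  digitalWrite(pin, LOW);
}

void setup() {
  pinMode(CH_UP_PIN, OUTPUT);
  pinMode(CH_DOWN_PIN, OUTPUT);
  Serial.begin(9600);        // XBee default baud rate
}

void loop() {
  if (Serial.available() > 0) {
    char cmd = Serial.read();                 // one-byte command from the PC
    if (cmd == 'R') pressKey(CH_UP_PIN);      // hypothetical code: wave right -> next channel
    if (cmd == 'L') pressKey(CH_DOWN_PIN);    // hypothetical code: wave left -> previous channel
  }
}
```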


4. System Design and Experimental Results

4.1. Kinect Controlling TV

4.1.1. Working Principle

The system assigns the motion of a quick wave to the left or to the right to channel switching, and continuous volume control is achieved by drawing a circle with the hand. The premise of this function is to connect the channel +, channel -, volume +, and volume - contacts of the TV remote control to the Arduino ports to complete a modified remote control. NITE's gesture recognition function and its built-in circle detector are used to recognize the wave motion and to adjust the volume smoothly.

4.1.2. Experimental Design

First, the creation and deletion of the hand point and its updates are triggered by NITE. After the hand point is created, a flag is set to true so that hand tracking is carried out. The position of the hand is obtained from the parameters of the callback functions invoked by NITE; the point list is then cleared and the first point is added.

Second, realize the recognition of the simple wave gesture. The channel-switching intent can be detected simply by analyzing the current horizontal movement speed of the hand. Extract the current and previous hand positions from the parameters, measure their difference along the X-axis, and set a threshold to distinguish normal hand movements from wave instructions (a small sketch of this check is given after the third step).

Third, when the main loop enters the volume control mode, it draws the latest circle detected by NITE using its radius, midpoint, and angle variables.
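The following plain C++ sketch illustrates the wave-detection check from the second step: it compares the current and previous hand positions along the X-axis against a threshold. The data structure and the threshold value are illustrative assumptions, since the actual implementation works inside the NITE callbacks.

```cpp
// Host-side wave-detection logic, written as plain C++ so it can be tested
// without the NITE API. Positions are in millimetres, as Kinect reports them.
#include <cmath>
#include <iostream>

struct HandPoint { float x, y, z; };

// Returns +1 for a rightward wave, -1 for a leftward wave, 0 otherwise.
int detectWave(const HandPoint& previous, const HandPoint& current,
               float threshold = 80.0f) {        // mm per frame, assumed value
  float dx = current.x - previous.x;              // horizontal displacement
  if (std::abs(dx) < threshold) return 0;         // normal hand motion
  return dx > 0 ? 1 : -1;                         // fast motion -> wave gesture
}

int main() {
  HandPoint prev{100.0f, 0.0f, 2000.0f};
  HandPoint curr{250.0f, 5.0f, 2000.0f};
  std::cout << detectWave(prev, curr) << std::endl;   // prints 1 (wave to the right)
}
```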

4.1.3. Experimental Results

Wave your hands in front of the Kinect and a red dot will appear. This means that NITE has recognized the hand and started tracking it. Users can also see a long track of hand movements, as shown in Figure 5.


Figure 5.   Hand tracking


Waving in the horizontal direction sends the channel-switch command to the Arduino. As soon as the channel-switch action is identified, the program draws an arrow with text, as shown in Figure 6.


Figure 6.   Gestures from one channel to the next


As long as you draw a circle with your hand, you will see the volume control knob on the screen. The program then checks whether the circle's accumulated angle is larger than the previous value (clockwise) or smaller (counterclockwise). The former sends the "increase volume" signal to the Arduino; otherwise, the "lower volume" signal is sent, as shown in Figure 7 (a sketch of this decision follows the figure). If users want to stop the volume control, they just move their hand out of the circle being drawn in the air.


Figure 7.   Raise and lower the volume with the volume control knob
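The decision step for the volume knob can be sketched as follows. The accumulated circle angle is assumed to be supplied by NITE's circle detector; only the comparison logic described above is modeled here.

```cpp
// Volume decision from the circle gesture: growing angle means clockwise
// (volume up), shrinking angle means counterclockwise (volume down).
enum class VolumeCommand { None, Up, Down };

VolumeCommand volumeFromCircle(float previousAngle, float currentAngle) {
  if (currentAngle > previousAngle) return VolumeCommand::Up;    // clockwise motion
  if (currentAngle < previousAngle) return VolumeCommand::Down;  // counterclockwise motion
  return VolumeCommand::None;                                    // no change detected
}
```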


4.2. Controlling the Lights by Kinect

The system contains a lighting function that regulates the status of the lamps according to body posture. It also perceives the user's location in the room and accordingly increases the brightness of the lamp nearest to the user.

4.2.1. Attitude Tracking

Kinect tracks the user's skeleton to determine the user's location. Simple-OpenNI is a multilingual, cross-platform framework that defines the APIs for developing and implementing natural interaction programs and builds a bridge between the sensor and the middleware. If a human body appears in Kinect's field of vision, its center of mass is obtained. By standing in front of the Kinect and posing for a few seconds, the user's skeleton will appear on the screen and follow every action of the user until the user disappears from the screen, as shown in Figure 8.


Figure 8.   User skeleton


4.2.2. Create Lamps

After attitude tracking is implemented, the locations of the lamps are determined by body gesture recognition. The user stands at the position of the lamp and holds both hands more than 200 mm above the head to tell the system that there is a lamp at this position; a new lamp is not allowed to be created within 200 mm of an existing one.

To complete this gesture recognition, the positions of both hands and of the head are extracted from the parameters and their differences along the Y-axis are measured. When the difference between the position of each hand and the head position on the Y-axis is greater than the set threshold, the lamp-creation instruction is triggered, as shown in Figure 9 (a small sketch of this check follows the figure).


Figure 9.   Creating lamps by body posture
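A minimal sketch of the lamp-creation check is given below, assuming skeleton joint positions in millimetres and the 200 mm threshold mentioned above; the struct layout is illustrative.

```cpp
// Check for the "create lamp" posture: both hands held at least 200 mm above the head.
struct Joint { float x, y, z; };   // millimetres, Kinect real-world coordinates

bool isCreateLampPose(const Joint& head, const Joint& leftHand,
                      const Joint& rightHand, float threshold = 200.0f) {
  return (leftHand.y  - head.y) > threshold &&    // left hand well above the head
         (rightHand.y - head.y) > threshold;      // right hand well above the head
}
```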


4.2.3. Lighting Control

When the user points his/her right arm at a lamp, the color and the brightness of that lamp can be adjusted through the distance between the left hand and the head.

To identify whether the arm is stretched, the vectors of the right upper arm and the right forearm need to be defined. These two vectors are obtained by subtracting the vector of the elbow position from the vector of the shoulder position, and by subtracting the vector of the hand position from the vector of the elbow position.

Through these vectors, it is possible to detect whether the angle between the two vectors is less than a preset threshold. If the right arm is detected to be in the straightened state, the angle between the right-arm vector and the direction from the right hand to the lamp is then checked; if this angle is less than the preset threshold, the lamp is selected. The user's spatial position is also obtained to check whether the distance between the user and a lamp is less than a specific threshold, in which case the white LED is adjusted to improve the luminance of the nearest lamp, as shown in Figure 10 (a small vector-angle sketch follows the figure).


Figure 10.   User control of RGB values of lamps and lanterns
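The vector operations described above can be sketched in C++ as follows. The angle between two vectors is obtained from the dot product; the 20-degree straightening threshold is a hypothetical value, since the actual thresholds used by the system are not stated in the paper.

```cpp
// Angle test used for lamp selection and arm-straightening detection.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float norm(const Vec3& a)               { return std::sqrt(dot(a, a)); }

// Angle between two vectors, in degrees.
float angleBetween(const Vec3& a, const Vec3& b) {
  return std::acos(dot(a, b) / (norm(a) * norm(b))) * 180.0f / 3.14159265f;
}

// Upper arm = shoulder - elbow, forearm = elbow - hand, as defined in the text.
bool isArmStretched(const Vec3& shoulder, const Vec3& elbow, const Vec3& hand,
                    float thresholdDeg = 20.0f) {          // assumed threshold
  Vec3 upperArm = sub(shoulder, elbow);
  Vec3 forearm  = sub(elbow, hand);
  return angleBetween(upperArm, forearm) < thresholdDeg;   // small angle -> straight arm
}
```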


4.3. Biometric Identification of Kinect

BMI reflects the relationship between height and weight and is closely related to body fat, so it is an important indicator of the body's condition. Internationally, it is usually taken as a standard measure of the degree of human body fatness [20].

The Kinect skeleton tracking and point cloud shading functions are used to acquire the user's identity and height, and the weight value transmitted by the Arduino is received; the human body mass index (BMI) is then generated (a small computation sketch is given below). The basic function of the program is as follows: when the user enters the Kinect field of vision, the system starts to track the user. From the user's point cloud, it finds the highest and lowest points and thus the height of the body; at the same time, the limb proportions are obtained from the user's skeleton. These data are saved as a "fingerprint" together with the user's physical characteristics. If the limb proportions of the present user are similar to one of the fingerprints, the system identifies the user and retrieves the data from that fingerprint. Once all the skeleton and height data are obtained, the user can stand on the scale to get his/her BMI.
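For reference, the BMI computation itself follows the standard definition BMI = weight / height^2, with weight in kilograms and height in metres. The category cut-offs in the sketch below follow the commonly used WHO ranges and are included purely as an illustration.

```cpp
// Body mass index from the measured height and weight.
#include <string>

double bodyMassIndex(double heightMeters, double weightKg) {
  return weightKg / (heightMeters * heightMeters);   // BMI = kg / m^2
}

std::string bmiCategory(double bmi) {
  if (bmi < 18.5) return "underweight";   // WHO cut-offs, for illustration only
  if (bmi < 25.0) return "normal";
  if (bmi < 30.0) return "overweight";
  return "obese";
}
```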

4.3.1. User Recognition

Through skeleton tracking, the real-world coordinates of all of the user's skeletal joints are stored and used to calculate the lengths of the arm, forearm, thigh, and calf and the width of the shoulders. The similarity between two users' skeleton data is measured by multidimensional vector subtraction in a Cartesian coordinate system with the basis vectors [21] shown in Equation (1):

${{\vec{e}}_{1}}=(1, 0, 0)$, ${{\vec{e}}_{2}}=(0, 1, 0)$, ${{\vec{e}}_{3}}=(0, 0, 1)$

The subtraction of two three-dimensional vectors is shown in Equation (2):

$\vec{a}-\vec{b}=({{a}_{1}}-{{b}_{1}}){{\vec{e}}_{1}}+({{a}_{2}}-{{b}_{2}}){{\vec{e}}_{2}}+({{a}_{3}}-{{b}_{3}}){{\vec{e}}_{3}}$

The subtraction of two n-dimensional vectors is shown in Equation (3):

$\vec{a}-\vec{b}=({{a}_{1}}-{{b}_{1}}){{\vec{e}}_{1}}+({{a}_{2}}-{{b}_{2}}){{\vec{e}}_{2}}+\cdots +({{a}_{n}}-{{b}_{n}}){{\vec{e}}_{n}}$

By treating the array of user data as a multidimensional vector, the norm of the difference of the two vectors gives the overall difference between the limb lengths of two users, as shown in Equation (4):

$\vec{a}-\vec{b}=\vec{c}$, $\left\| {\vec{c}} \right\|=\sqrt{c_{1}^{2}+c_{2}^{2}+c_{3}^{2}+\cdots +c_{n}^{2}}$
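As a concrete illustration of Equations (2)-(4), the following C++ sketch computes the Euclidean norm of the difference between two users' limb-length vectors; a smaller value means the two skeletons are more similar.

```cpp
// Similarity between two users' limb measurements, as in Equation (4):
// subtract the two n-dimensional vectors and take the Euclidean norm.
#include <cmath>
#include <vector>

double limbDistance(const std::vector<double>& a, const std::vector<double>& b) {
  double sum = 0.0;
  for (size_t i = 0; i < a.size() && i < b.size(); ++i) {
    double d = a[i] - b[i];     // component-wise subtraction (a_i - b_i)
    sum += d * d;
  }
  return std::sqrt(sum);        // ||a - b||
}
```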

Once all the length differences are obtained, the stored user closest to the current user is found, the user data is retrieved from the selected file, and the user name and date of birth are updated at the same time. In this way, the user's identity is established, as shown in Figure 11.


Figure 11.   Comparison of users


4.3.2. Height Measurement

When the user's weight is collected, the system obtains the scene image and transforms it into a three-dimensional point cloud. The human point cloud is segmented and processed, and the user's height is measured from the user's point cloud. The scene analysis provided by NITE separates the background from the foreground user, and the highest and lowest points are found in the user's point cloud; subtracting their Y coordinates gives the actual height of the user, as shown in Figure 12 (a small sketch follows the figure).


Figure 12.   The user height from the highest point and lowest point
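A minimal sketch of this height estimate is given below, assuming that the user's point cloud has already been segmented from the background and that coordinates are in millimetres with Y pointing upward.

```cpp
// Height estimate from the user's point cloud: difference between the highest
// and lowest Y coordinates among the user's points.
#include <algorithm>
#include <vector>

struct Point3 { float x, y, z; };

float estimateHeightMm(const std::vector<Point3>& userCloud) {
  if (userCloud.empty()) return 0.0f;
  auto cmpY = [](const Point3& a, const Point3& b) { return a.y < b.y; };
  auto lowest  = std::min_element(userCloud.begin(), userCloud.end(), cmpY);
  auto highest = std::max_element(userCloud.begin(), userCloud.end(), cmpY);
  return highest->y - lowest->y;   // subtract the Y coordinates
}
```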


4.3.3. Weight Measurement

Weight measurement is carried out by using a gravity sensor connected to the Arduino to collect and process the data; the wireless communication module then transmits the result to the PC.

This design uses a resistance-strain gravity sensor with the HX711 AD conversion module; the principle is shown in Figure 13. A resistance strain sensor is made up of an elastic sensing element, resistance strain gauges, a compensation resistor, and a shell, and it can be designed in many structural forms according to the specific measurement requirements. The principle of the resistance strain weighing sensor is as follows: the elastomer (elastic element, or sensitive beam) produces an elastic deformation under the action of an external force; the resistance strain gauge pasted on its surface (the conversion element) deforms with it, and its resistance changes (increases or decreases); the measuring circuit then converts this resistance change into an electrical signal (voltage or current), thus completing the process of transforming the applied force into an electrical signal [22]. The stress analysis is shown in Figure 14, after which a minimal acquisition sketch is given.


Figure 13.   Schematic diagram of pressure sensor AD conversion module



Figure 14.   Force analysis
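A minimal acquisition sketch for the slave side is given below. It assumes the widely used open-source HX711 Arduino library; the wiring pins and the calibration factor are placeholders that must be determined for the actual scale.

```cpp
// Slave-side weight acquisition (Arduino C++ sketch), assuming the open-source
// HX711 Arduino library and its calibration API.
#include "HX711.h"

const int HX711_DOUT = 4;     // data pin of the HX711 module (assumed wiring)
const int HX711_SCK  = 5;     // clock pin of the HX711 module (assumed wiring)
HX711 scale;

void setup() {
  Serial.begin(9600);               // XBee serial link to the PC
  scale.begin(HX711_DOUT, HX711_SCK);
  scale.set_scale(2280.0f);         // calibration factor, must be measured for the scale
  scale.tare();                     // zero the scale with nothing on it
}

void loop() {
  float weightKg = scale.get_units(10);   // average of 10 readings
  Serial.println(weightKg);               // send the weight to the PC
  delay(500);
}
```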


5. Conclusion

This project is based on the Microsoft Kinect motion-sensing device. It integrates advanced Internet technology, tries a new way of smart home control, realizes 3D skeleton tracking, and performs custom gesture recognition. This paper can be regarded as a general introduction to the construction of intelligent residential appliances, and the approach can also be applied to any household electrical appliance. In addition, using Kinect's biometric identification technology, a system can be built that displays charts of height and weight information in a timely manner, evaluates the physical health of different users according to their body measurements, and categorizes them. Another example is a home security system based on user identification.

Acknowledgements

The authors wish to express their gratitude for the financial support from the Science and Technology Department of Liaoning Province (No. 20170052), the Education Department of Liaoning Province (No. 2017J047), and the China Science and Technology Department (No. 2017YFC0821003).

References

[1] C. Qu, J. Sun, and J. Z. Wang, "Automatic Detection of Fall in Old People based on Kinect Sensor," Journal of Sensing Technology, Vol. 29, No. 3, pp. 378-383, 2016
[2] M. Kepski and B. Kwolek, "Fall Detection on Embedded Platform using Kinect and Wireless Accelerometer," in Computers Helping People with Special Needs, Springer Berlin Heidelberg, pp. 407-414, 2012 (doi: 10.1007/978-3-642-31534-3_60)
[3] C. X. Lu, "Research and Implementation of Intelligent Home Lighting Energy Saving Control System based on Embedded Linux," Microelectronics & Computer, Vol. 33, No. 10, pp. 139-142, 2016
[4] Y. Y. Hou, D. T. Yang, and Y. Liu, "Intelligent Home Life and Security System based on Wireless Bluetooth Technology," Journal of Jiaying University, Vol. 34, No. 5, pp. 36-40, 2016
[5] K. Gill, S. H. Yang, and F. Yao, "A Zigbee-based Home Automation System," IEEE Transactions on Consumer Electronics, Vol. 55, No. 2, pp. 422-430, 2009 (doi: 10.1109/TCE.2009.5174403)
[6] Y. Agarwal, B. Balaji, and R. Gupta, "Occupancy-Driven Energy Management for Smart Building Automation," in Proceedings of the 2nd ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Buildings, pp. 1-6, 2010 (doi: 10.1145/1878431.1878433)
[7] S. Y. Chen, T. Liu, and C. Shen, "Smart Home Energy Optimization based on Wearable Device Perception," Computer Research and Development, Vol. 53, No. 3, pp. 704-715, 2016
[8] J. Smisek, M. Jancosek, and T. Pajdla, "3D with Kinect," Advances in Computer Vision & Pattern Recognition, Vol. 21, No. 5, pp. 1154-1160, 2013
[9] M. Kepski and B. Kwolek, "Human Fall Detection using Kinect Sensor," in Proceedings of the 8th International Conference on Computer Recognition Systems, pp. 743-752, 2013 (doi: 10.1007/978-3-319-00969-8_73)
[10] J. Lee, L. Jin, and D. Park, "Automatic Recognition of Aggressive Behavior in Pigs using a Kinect Depth Sensor," Sensors, Vol. 16, No. 5, pp. 631, 2016 (doi: 10.3390/s16050631)
[11] M. Zhang, "Body Motion Tracking and Recognition of Fracture Patients after Operation based on Kinect," in Proceedings of the International Conference on Electronics, Electrical Engineering and Information Science, pp. 580-588, 2016
[12] R. Ibañez, A. Soria, and A. R. Teyseyre, "A Comparative Study of Machine Learning Techniques for Gesture Recognition using Kinect," in Handbook of Research on Human-Computer Interfaces, Developments, and Applications, 2016
[13] F. L. Liu, B. X. Du, and Q. H. Wang, "Hand Gesture Recognition using Kinect via Deterministic Learning," in Proceedings of the Control and Decision Conference, pp. 196-199, IEEE, 2017 (doi: 10.1109/CCDC.2017.7978867)
[14] D. D. Nguyen and H. S. Le, "Kinect Gesture Recognition: SVM vs. RVM," in Proceedings of the Seventh International Conference on Knowledge and Systems Engineering, pp. 395-400, IEEE, 2016 (doi: 10.1109/KSE.2015.35)
[15] B. Y. L. Li, A. S. Mian, and W. Liu, "Using Kinect for Face Recognition under Varying Poses, Expressions, Illumination and Disguise," in Proceedings of the IEEE Workshop on Applications of Computer Vision, pp. 186-192, IEEE Computer Society, 2013 (doi: 10.1109/WACV.2013.6475017)
[16] G. Barbon, M. Margolis, and F. Palumbo, "Taking Arduino to the Internet of Things: The ASIP Programming Model," Computer Communications, Vol. 89-90, pp. 128-140, 2016 (doi: 10.1016/j.comcom.2016.03.016)
[17] C. Klemenjak, D. Egarter, and W. Elmenreich, "YoMo: the Arduino-based Smart Metering Board," Computer Science - Research and Development, Vol. 31, No. 1-2, pp. 97-103, 2016 (doi: 10.1007/s00450-014-0290-8)
[18] R. Krauss, "Combining Raspberry Pi and Arduino to form a Low-Cost, Real-Time Autonomous Vehicle Platform," in Proceedings of the American Control Conference, pp. 6628-6633, IEEE, 2016 (doi: 10.1109/ACC.2016.7526714)
[19] H. D. Wang, N. Liu, Z. H. Cui, W. Yang, A. Huang, G. J. Zhao, et al., "Based on XBee's Wireless Data Acquisition System Design and Implementation," Electronic Technology, Vol. 45, No. 1, pp. 67-70+55, 2016
[20] M. L. Yang, X. M. Lou, Y. L. Peng, R. J. Wang, L. Li, and W. W. Guo, "Correlation of College Students' BMI with Physical Fitness Indicators," Chinese School Health, Vol. 34, No. 9, pp. 1093-1095+1098, 2013
[21] E. R. Melgar, C. C. Diez, and P. Jaworski, "Arduino and Kinect Projects," 2012
[22] M. J. Liu, Q. Zhang, and Y. W. Mu, "Design of High-Precision Electronic Scales based on HX711," Information Communication, No. 1, pp. 142-144, 2017