Human-Computer Interface Based on Facial Gestures Oriented to WhatsApp for Persons with Upper-Limb Motor Impairments

Keywords: Human-computer interface, face detection, computer vision, assistive technology


People with reduced upper-limb mobility depend mainly on facial gestures to communicate with the world; nonetheless, current facial gesture-based interfaces do not take into account the reduction in mobility that most people with motor limitations experience during recovery periods. This study presents an alternative to overcome this limitation: a human-computer interface based on computer vision techniques applied to two types of images: images of the user’s face captured by a webcam and screenshots of a desktop application running in the foreground. The first type is used to detect, track, and estimate facial gesture patterns in order to move the cursor and execute commands with it, while the second is used to ensure that the cursor moves to specific interaction areas of the desktop application. The interface was fully programmed in Python 3.6 using open-source libraries and runs in the background on Windows operating systems. Its performance was evaluated with videos of people using four interaction commands in WhatsApp Desktop. We conclude that the interface can operate under various types of lighting, backgrounds, camera distances, body postures, and movement speeds, and that the location and size of the WhatsApp window do not affect its effectiveness. The interface operates at a speed of 1 Hz and uses 35 % of the capacity of a desktop computer with an Intel Core i5 processor and 1.5 GB of RAM; therefore, this solution can be implemented on ordinary, low-end personal computers.
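The face-tracking half of the pipeline described above can be illustrated with a minimal Python sketch. This is a hypothetical illustration, not the authors' released code: a Haar-cascade face detector (from OpenCV, one of the open-source libraries the paper's toolchain relies on) locates the face in each webcam frame, and the face center is linearly mapped to an absolute screen position for the cursor via PyAutoGUI. The `face_to_screen` helper, its mirroring choice, and all detector parameters are assumptions for illustration only.

```python
def face_to_screen(face_cx, face_cy, frame_w, frame_h, screen_w, screen_h):
    """Linearly map a face-center pixel position in the webcam frame to an
    absolute screen coordinate, mirroring the x axis so the cursor follows
    the user's movement (a webcam image is a mirror view)."""
    x = (1.0 - face_cx / frame_w) * screen_w
    y = (face_cy / frame_h) * screen_h
    # Clamp to valid screen bounds.
    return (min(max(int(x), 0), screen_w - 1),
            min(max(int(y), 0), screen_h - 1))


def run_interface():
    """Live loop (not executed here): needs a webcam plus the open-source
    OpenCV and PyAutoGUI libraries. Hypothetical sketch of the cursor-moving
    step only; gesture classification and click commands are omitted."""
    import time
    import cv2
    import pyautogui

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)
    screen_w, screen_h = pyautogui.size()

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces[:1]:  # first detected face only
            px, py = face_to_screen(x + w // 2, y + h // 2,
                                    frame.shape[1], frame.shape[0],
                                    screen_w, screen_h)
            pyautogui.moveTo(px, py)
        time.sleep(1.0)  # the paper reports ~1 Hz operation
```

The mapping keeps the cursor inside the screen even when the detected face center sits at a frame edge, which matters because Haar-cascade boxes can partially leave the frame as the user moves.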

Author Biographies

Carlos Ferrín-Bolaños*, Universitaria Católica Lumen Gentium, Colombia
José Mosquera-DeLaCruz, Universitaria Católica Lumen Gentium, Colombia
John Pino-Murcia, Universitaria Católica Lumen Gentium, Colombia
Luis Moctezuma-Ruiz, Universitaria Católica Lumen Gentium, Colombia
Jonathan Burgos-Martínez, Universitaria Católica Lumen Gentium, Colombia
Luis Aragón-Valencia, Universidad del Cauca, Colombia
Humberto Loaiza-Correa, Universidad del Valle, Colombia



How to Cite
C. Ferrín-Bolaños, “Human-Computer Interface Based on Facial Gestures Oriented to WhatsApp for Persons with Upper-Limb Motor Impairments”, TecnoL., vol. 24, no. 50, p. e1722, Jan. 2021.


Research Papers
