
CHEN CHEN
Signal and Image Processing (SIP) Lab, Department of Electrical Engineering
University of Texas at Dallas, Richardson, TX
Phone: (513) 306-1788 | E-mail: [email protected]
Webpage: http://www.utdallas.edu/~cxc123730/

Education

• University of Texas at Dallas, Richardson, TX Aug. 2012 – May 2015 (expected)
Ph.D. in Electrical Engineering
Dissertation advisor: Prof. Nasser Kehtarnavaz
Current GPA: 4.00/4.00
Project: Human Action Recognition Using Depth and Inertial Sensors Fusion

• Mississippi State University, Starkville, MS Aug. 2009 – May 2012
Master of Science in Electrical Engineering
Thesis advisor: Prof. James E. Fowler
GPA: 3.72/4.00
Thesis: Multihypothesis Prediction for Compressed Sensing and Super-Resolution of Images

• Beijing Forestry University, Beijing, China Sep. 2005 – July 2009
Bachelor of Engineering, Automation
Thesis advisor: Prof. Ning Han
Thesis: A Fast Inter-Mode Decision Algorithm for H.264/AVC

Research Interests

• Machine learning and computer vision: sparse representation, dictionary learning, human action recognition, multimodal fusion

• Signal, image and video processing: compressive sensing, image compression, image super-resolution

• Remote sensing: hyperspectral image classification, land-use scene classification

Research Experience

Graduate Research Assistant, University of Texas at Dallas Aug. 2012 – present

• Human Action Recognition from Depth Sequences

– Designed a computationally efficient depth motion maps (DMMs)-based human action recognition method using a distance-weighted ℓ2-regularized collaborative representation classifier.

– Proposed a computationally efficient and effective feature descriptor using DMMs and local binary patterns (LBPs) for action recognition from depth sequences. Both feature-level and decision-level fusion approaches were investigated using kernel-based extreme learning machine (KELM) classification.
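For concreteness, the collaborative representation step used in this line of work can be sketched in a few lines (an illustrative sketch only, with hypothetical names and a toy regularization constant; the published classifier additionally applies distance weighting to the regularizer):

```python
import numpy as np

# Sketch of an l2-regularized collaborative representation classifier (CRC).
# Training features are columns of X; a test feature y is coded over all
# training samples jointly, then assigned to the class with the smallest
# class-wise reconstruction residual.
def crc_classify(X, labels, y, lam=0.01):
    n = X.shape[1]
    # Closed-form ridge coding: alpha = (X^T X + lam*I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best_label, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # Reconstruct y using only this class's training samples and weights
        res = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if res < best_res:
            best_label, best_res = c, res
    return best_label
```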

• Human Action/Gesture Recognition Using Depth and Inertial Sensors Fusion

– Created a publicly available multimodal human action dataset, named UTD-MAD, consisting of four temporally synchronized data modalities (RGB videos, depth videos, skeleton positions, and inertial signals) from a Kinect camera and a wearable inertial sensor for a comprehensive set of 27 human actions. [UTD Multimodal Action Dataset (UTD-MAD) Website]

– Developed a fusion approach for improving human action recognition based on two differing-modality sensors: a depth camera and an inertial body sensor. For action recognition, both feature-level fusion and decision-level fusion were examined using a collaborative representation classifier.

– Developed a data fusion approach using inertial and depth sensor data within the framework of a probabilistic hidden Markov model (HMM) for hand gesture recognition. (Video demo)


• Depth and Inertial Sensors Fusion in Healthcare Applications

– Developed a home-based Senior Fitness Test (SFT) measurement system using an inertial sensor and a depth camera in a collaborative way. The depth camera was used to monitor whether a subject maintained the correct pose for a fitness test and to flag any deviation from it, while the inertial sensor was used to count the repetitions of a fitness test action performed within the time duration specified by the fitness protocol. (Video demo)

– Developed a medication adherence monitoring system for pill bottles based on a wearable inertial sensor and a Kinect camera. The Kinect camera was used only in the training phase to automatically create signal templates corresponding to the two actions of twist-cap and hand-to-mouth. The act of pill intake was identified by performing moving-window dynamic time warping (DTW) in real time between the signal templates and the signals acquired by the wearable inertial sensor.
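The template-matching step of the pill-intake monitor relies on dynamic time warping; its core distance can be sketched as follows (illustrative code with hypothetical names; the actual system applies this inside a moving window over streaming inertial data and thresholds the result):

```python
import numpy as np

# Classic DTW: cumulative alignment cost between two 1-D sequences,
# allowing local stretching/compression of the time axis.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative-cost table
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A slightly time-warped copy of a template scores near zero while a dissimilar signal scores high, which is what a thresholded detection step can exploit.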

Graduate Teaching and Research Assistant, Mississippi State University Sep. 2009 – May 2012

• Spectral-Spatial Preprocessing Using Multihypothesis Prediction for Noise-Robust Hyperspectral Image Classification

– Proposed a spectral-spatial preprocessing strategy using multihypothesis (MH) prediction for improving hyperspectral image classification.

– Proposed a spectral band-partitioning strategy based on inter-band correlation coefficients to improve the representational power of the hypothesis set.

• Single-Image Super-Resolution Using Multihypothesis Prediction

– Developed a novel algorithm for single-image super-resolution using multihypothesis predictions drawn from the low-resolution image. Tikhonov regularization was applied to the resulting ill-posed least-squares optimization to appropriately weight the hypothesis predictions. No high-resolution training data was used.

• Compressed-Sensing (CS) Image Recovery Using Multihypothesis Prediction

– Developed a novel CS image recovery algorithm using multihypothesis prediction. Proposed a distance-weighted Tikhonov regularization for the resulting ill-posed least-squares optimization.

– Achieved superior performance on CS image and video reconstruction over alternative strategies such as total variation (TV) minimization.
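The Tikhonov-regularized weighting common to these multihypothesis methods can be written compactly (a worked sketch in illustrative notation; the symbols H, Γ, and λ are not taken verbatim from the papers):

```latex
% Hypotheses are stacked as columns of H; the prediction of the target
% measurement y is H\hat{w}, with weights from regularized least squares:
\hat{w} = \arg\min_{w}\ \lVert y - Hw \rVert_2^2
          + \lambda^2 \lVert \Gamma w \rVert_2^2
        = \left( H^{T}H + \lambda^{2}\,\Gamma^{T}\Gamma \right)^{-1} H^{T} y
```

Here Γ is a diagonal matrix whose entries grow with the distance between each hypothesis and the target, so nearby hypotheses receive larger weights; the regularization makes the otherwise rank-deficient normal equations solvable.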

Teaching Experience

• Graduate Teaching Assistant Summer 2011
Department of Electrical and Computer Engineering, Mississippi State University
Course: ECE 3443 Signals and Systems

• Graduate Teaching Assistant Spring 2011
Department of Electrical and Computer Engineering, Mississippi State University
Courses: ECE 3443 Signals and Systems, ECE 3183 Electrical Engineering Systems

• Graduate Teaching Assistant Fall 2010
Department of Electrical and Computer Engineering & Department of Mathematics and Statistics, Mississippi State University
Courses: ECE 3183 Electrical Engineering Systems, ST 6523 Introduction to Probability

• Graduate Teaching Assistant Summer 2010
Department of Electrical and Computer Engineering, Mississippi State University
Courses: ECE 3443 Signals and Systems, ECE 3413 Introduction to Electronic Circuits

• Graduate Teaching Assistant Fall 2009 – Spring 2010
Department of Electrical and Computer Engineering, Mississippi State University
Course: ECE 3413 Introduction to Electronic Circuits

Peer-reviewed Publications (Organized by Topics)


• Human Action/Gesture Recognition and Application

– [C] C. Chen, R. Jafari, and N. Kehtarnavaz, “UTD-MAD: A Multimodal Dataset for Human Action Recognition Utilizing a Depth Camera and a Wearable Inertial Sensor,” submitted to the IEEE International Conference on Image Processing (ICIP), Quebec City, Canada, September 2015. [UTD Multimodal Action Dataset (UTD-MAD) Website]

– [C] C. Chen, R. Jafari, and N. Kehtarnavaz, “Action Recognition from Depth Sequences Using Depth Motion Maps-based Local Binary Patterns,” in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV 2015), Waikoloa Beach, HI, January 2015, pp. 1092-1099. (Oral and poster presentation)

– [J] C. Chen, R. Jafari, and N. Kehtarnavaz, “Improving Human Action Recognition Using Fusion of Depth Camera and Inertial Sensors,” IEEE Transactions on Human-Machine Systems, vol. 45, no. 1, pp. 51-61, February 2015.

– [C] K. Liu, C. Chen, R. Jafari, and N. Kehtarnavaz, “Multi-HMM Classification for Hand Gesture Recognition Using Two Differing Modality Sensors,” in Proceedings of the 10th IEEE Dallas Circuits and Systems Conference (DCAS’14), Richardson, TX, October 2014, pp. 1-4. (Oral presentation)

– [J] K. Liu, C. Chen, R. Jafari, and N. Kehtarnavaz, “Fusion of Inertial and Depth Sensor Data for Robust Hand Gesture Recognition,” IEEE Sensors Journal, vol. 14, no. 6, pp. 1898-1903, June 2014.

– [C] C. Chen, K. Liu, R. Jafari, and N. Kehtarnavaz, “Home-based Senior Fitness Test Measurement System Using Collaborative Inertial and Depth Sensors,” in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC’14), Chicago, IL, August 2014, pp. 4135-4138.

– [C] C. Chen, N. Kehtarnavaz, and R. Jafari, “A Medication Adherence Monitoring System for Pill Bottles Based on a Wearable Inertial Sensor,” in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC’14), Chicago, IL, August 2014, pp. 4135-4138.

– [J] C. Chen, K. Liu, and N. Kehtarnavaz, “Real-Time Human Action Recognition Based on Depth Motion Maps,” Journal of Real-Time Image Processing, August 2013. (doi: 10.1007/s11554-013-0370-1)

• Remote Sensing Land-Use Scene Classification

– [C] C. Chen, W. Li, H. Su, J. Guo, and F. Guo, “Gabor-Filtering-Based Completed Local Binary Patterns for Land-Use Scene Classification,” to appear in BigMM-HSI: Multimedia Big Data and Hyperspectral Imaging Workshop, in conjunction with the IEEE International Conference on Multimedia Big Data (BigMM), April 20-22, 2015, Beijing, China.

• Hyperspectral Image Classification

– [J] W. Li, C. Chen, H. Su, and Q. Du, “Local Binary Patterns for Spatial-Spectral Classification of Hyperspectral Imagery,” IEEE Transactions on Geoscience and Remote Sensing, to appear, 2015. (doi: 10.1109/TGRS.2014.2381602)

– [J] H. Su, Y. Sheng, P. Du, C. Chen, and K. Liu, “Hyperspectral Image Classification Based on Volumetric Texture and Dimensionality Reduction,” Frontiers of Earth Science, November 2014. (doi: 10.1007/s11707-014-0473-4)

– [J] H. Su, B. Yong, P. Du, H. Liu, C. Chen, and K. Liu, “Dynamic Classifier Selection Using Spectral-Spatial Information for Hyperspectral Image Classification,” Journal of Applied Remote Sensing, vol. 8, no. 1, 085095, August 2014. (doi: 10.1117/1.JRS.8.085095)

– [J] C. Chen, W. Li, H. Su, and K. Liu, “Spectral-Spatial Classification of Hyperspectral Image Based on Kernel Extreme Learning Machine,” Remote Sensing, vol. 6, no. 6, pp. 5795-5814, June 2014.

– [J] C. Chen, W. Li, E. W. Tramel, M. Cui, S. Prasad, and J. E. Fowler, “Spectral-Spatial Preprocessing Using Multihypothesis Prediction for Noise-Robust Hyperspectral Image Classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 4, pp. 1047-1059, April 2014.

• Single Image Super-Resolution

– [J] Z. Zhu, F. Guo, H. Yu, and C. Chen, “Fast Single Image Super-Resolution via Self-Example Learning and Sparse Representation,” IEEE Transactions on Multimedia, vol. 16, no. 8, pp. 2178-2190, December 2014.


– [C] C. Chen and J. E. Fowler, “Single-Image Super-Resolution Using Multihypothesis Prediction,” in Proceedings of the 46th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 2012, pp. 608-612. (Oral presentation)

• Compressed-Sensing of Images and Videos

– [J] C. Chen, W. Li, and J. E. Fowler, “Hyperspectral Imagery Reconstruction Using Multihypothesis Prediction,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 1, pp. 365-374, January 2014.

– [C] C. Chen, E. W. Tramel, and J. E. Fowler, “Compressed-Sensing Recovery of Images and Video Using Multihypothesis Predictions,” in Proceedings of the 45th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 2011, pp. 1193-1198. (Oral presentation)

Patent

• N. Kehtarnavaz, R. Jafari, K. Liu, C. Chen, and J. Wu, “Fusion of inertial and depth sensors for robust body movement measurements and recognition,” the University of Texas at Dallas, 2014 (pending).

Professional Services

• Conference Technical Program Committee Member

– 2013 IEEE International Conference on Image Processing (ICIP 2013)

– 2014 IEEE International Conference on Image Processing (ICIP 2014)

– 2015 International Conference on Intelligent Computing (ICIC 2015)

• Invited Journal Reviewer

– IEEE Transactions on Image Processing (since 2011)

– IEEE Transactions on Multimedia (since 2014)

– IEEE Transactions on Circuits and Systems for Video Technology (since 2014)

– IEEE Transactions on Human-Machine Systems (since 2014)

– IEEE Signal Processing Letters (since 2012)

– IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (since 2013)

– IEEE Geoscience and Remote Sensing Letters (since 2013)

– Signal, Image and Video Processing (since 2011)

– Sensors (since 2014)

– Mathematical Problems in Engineering (since 2014)

– Journal of Electronic Imaging (since 2013)

– Journal of Applied Remote Sensing (since 2014)

– Journal of Real-Time Image Processing (since 2014)

– Journal of Biomedical Optics (since 2014)

– International Journal of Remote Sensing and Remote Sensing Letters (since 2013)

– International Journal of Electronics and Communications (since 2014)

• Invited Conference Reviewer

– 2013 IEEE International Conference on Image Processing (ICIP 2013)

– 2014 IEEE International Conference on Image Processing (ICIP 2014)

– 2015 IEEE International Conference on Image Processing (ICIP 2015)

– 2015 IEEE Winter Conference on Applications of Computer Vision (WACV 2015)

– The First IEEE International Conference on Multimedia Big Data, 2015 (BigMM 2015)

Computer Skills

• Programming Languages: C/C++, MATLAB, HTML, LaTeX

• Software/Tools: Microsoft Visual Studio, EXELIS ENVI, OpenCV

• Operating Systems: Windows, Linux (Ubuntu)