

E-Jacket: Posture Detection with Loose-Fitting Garment using a Novel Strain Sensor

Qi Lin†‡, Shuhua Peng†, Yuezhong Wu†‡, Jun Liu†, Wen Hu†‡, Mahbub Hassan†‡, Aruna Seneviratne†‡, Chun H Wang†

†University of New South Wales  ‡Data61 CSIRO

{qi.lin,shuhua.peng,wen.hu,mahbub.hassan,a.seneviratne,chun.h.wang}@unsw.edu.au, {yuezhong.wu,jun.liu}@student.unsw.edu.au

ABSTRACT
We address the problem of human posture detection with casual loose-fitting smart garments by fabricating a new type of highly sensitive, stretchable, optically transparent and low-cost strain sensor enabled by uniquely designed microcracks within a hybrid conductive thin film. In terms of sensitivity and stretchability, the developed sensor outperforms most of the works reported in recent literature, and has a gauge factor of 103 at the high strain of 58%. By attaching these sensors to an off-the-shelf casual jacket, we implement E-Jacket, a smart loose-fitting sensing garment prototype. To detect postures from sensor data, we implement a conventional deep learning model, CNN-LSTM, capable of overcoming the noise induced by the loose fitting of the sensors to the human skin. To evaluate E-Jacket, we conducted three case studies in experimental environments: recognition of daily activities, recognition of stationary postures with random hand movements, and slouch detection. Our evaluation results demonstrate the feasibility of the proposed E-Jacket smart garment system for different posture recognition applications.

CCS CONCEPTS
• Human-centered computing → Ubiquitous and mobile computing; • Computing methodologies → Machine learning;

KEYWORDS
Smart garment, deep learning, CNN-LSTM, piezo-resistive strain sensor, posture detection

1 INTRODUCTION
Electronics miniaturization and advancements in textile technology have enabled the integration of various types of sensors into textiles and fabrics, ushering in an era of E-Textiles, or so-called smart garments. A recent survey [40] shows that the E-Textile market is growing rapidly, with products such as smart undergarments, socks, and gloves already on offer. With proper data analytics, these smart garments can eventually detect a wide range of body postures and activities, creating new opportunities in ubiquitous health and fitness monitoring.

Indeed, posture detection via smart garments has become a hot topic of research in recent years [8, 10, 12, 18, 31, 39]. A fundamental challenge facing smart-garment-based posture detection is the high level of sensor signal noise caused by the movement of the garment relative to the skin. The problem can be largely addressed by tight fitting of the garment, which is why current smart garment products focus on socks, undergarments, gloves, and tight-fitting cuts for shirts and pants. However, for the smart garment industry to really take off, accurate posture detection solutions must be devised for casual loose-fitting garments as well. Unfortunately, with the state-of-the-art sensors, researchers have found that posture detection accuracy deteriorates rapidly with increasing “looseness” of the garment [11], which highlights the need for innovation in sensor design for the smart garment industry.

Current research on highly stretchable and sensitive sensors generally follows two directions. The first is to stimulate large contact-resistance changes between conductive nanofillers within an elastomeric matrix under mechanical stimuli [38]. The second is to control the formation of micro- or nanocracks within a conductive network [47]. A mechanical stimulus leads to the opening and closing of the cracks, which translates into resistance changes. In this work, we focus on the second method.

In this paper, we report the design, fabrication, and evaluation of a transparent, low-cost and highly sensitive strain sensor enabled by designed microcracks within a thin hybrid film consisting of the conductive polymer poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) and carbon nanofibers (CNFs), produced through a simple solution casting approach. The sensor sensitivity and stretchability can be tuned simply by controlling the duration of plasma treatment of the polydimethylsiloxane (PDMS) before the casting of the conductive films.

Our results show that the proposed strain sensor outperforms the state-of-the-art sensors in terms of sensitivity, stretchability, appearance and financial cost. By attaching these sensors to an off-the-shelf casual jacket, we further implement a loose-fitting smart garment prototype, which we call E-Jacket. We also implement a deep neural network to further assist the data analytics in combating the heightened sensor signal noise caused by the loose fitting of the garment. Laboratory trials with real subjects confirm that, using only a small number of these sensors, E-Jacket can reliably detect typical human postures and activities.

The contribution of this work can be summarized as follows.



• We design, fabricate, and evaluate a new garment-friendly strain sensor, which significantly outperforms existing sensors in terms of sensitivity, stretchability, appearance and financial cost.

• We implement a loose-fitting smart garment prototype, E-Jacket, by attaching four of our new strain sensors to an off-the-shelf casual loose-fitting jacket. Using our prototype, we collect data from 12 subjects for three static postures, i.e., standing, sitting, and lying, as well as two active postures, walking and running.

• For posture classification from the collected sensor data, we implement data analytics comprising signal processing to remove sensor noise and a deep learning architecture to classify the processed data.

• We evaluate the performance of E-Jacket using three different posture detection case studies in experimental environments: recognition of daily activities (dynamic postures), detection of stationary postures while allowing random arm movements, and detection of slouching. Our evaluation shows that E-Jacket can achieve up to 91.7%, 84.4%, and 88.4% posture detection accuracy, respectively, for these three case studies.

The rest of the paper is structured as follows. Related work is reviewed in Section 2, followed by the details of the new sensor in Section 3. The proposed deep learning model and the associated signal processing are presented in Section 4. Prototyping and data collection are explained in Section 5. We evaluate the prototype in Section 6 before presenting our conclusions in Section 7.

2 RELATED WORK

2.1 Smart Garment System
Integrated smart garment systems have been developed mostly for posture detection and related applications. In 2007, Mattmann et al. designed a smart garment system based on piezo-resistive strain sensors, which produced 97% subject-dependent accuracy in recognizing upper body postures [31]. However, this system requires a tight-fitting garment so that the movement of the clothing perfectly matches the movement of the skin, avoiding garment wrinkle errors. Later, Harms et al. developed a loose-fitting smart garment system called SMASH on an everyday garment, i.e., a jumper [12]. SMASH used two IMUs, on the upper and lower arms respectively, and was evaluated in three practical applications. The results show that SMASH can achieve 81% accuracy in distinguishing 21 dynamic postures. However, subjects were asked to perform the different postures from a static standing posture in their experiment, so their experiment is closer to activity detection. With the SMASH prototype, the same authors modelled sensor orientation errors (SOEs) caused by garment wrinkles on loose-fitting garments in [11] and used the SMASH system to validate their results. There, they introduced a metric called body-garment mobility (BGM) to measure the “looseness” of the smart garment and concluded that SOEs increase as BGM increases. Although many aspects of smart garments have been investigated, a loose-fitting smart garment system for stationary posture detection has not been studied before and is barely understood.

Aiming at monitoring postural exercises during rehabilitation, Sardini et al. developed a sensorized T-shirt by stitching a copper wire onto a tight-fitting T-shirt as an inductive sensor [39]. Their system can detect stationary postures but, again, requires a tight-fitting garment. Gioberto et al. designed and developed a loose-fitting smart garment system [8] with piezo-resistive sensors stitched at the knee locations. This system is able to detect bends and folds of the knees but has limited sensing capabilities at other body locations. Recently, Kaighadi et al. designed and developed a smart garment system with triboelectric textiles to detect flexion and extension of the elbows [18]. With triboelectric textiles, this system is also capable of detecting sweat conditions of the users. However, loose-fitting sensing at the joint locations can only detect motion-based activities, which has already been achieved by IMU-based activity recognition systems [5, 37, 50, 53]. Overall, existing smart garment systems are summarized in Table 1, which shows that they cannot detect stationary postures or activities with loose-fitting garments. Our work fills the gap that there is no existing loose-fitting garment for stationary posture detection.

2.2 Human Activity Recognition System
Since posture detection can be broadly considered one branch of human activity recognition, we discuss existing human activity recognition (HAR) systems. Wearable sensor-based HAR systems have been well studied over the past decade and have become more popular than their video-based counterparts [43]. Among them, Inertial Measurement Units (IMUs) or accelerometers are the most popular sensor modalities [5, 37, 50, 53], though there are also other types of wearable sensor modalities, such as kinetic energy harvesters [17]. Anguita et al. developed a pioneering HAR system in 2012 [3, 4], where they asked a subject to wear a waist-mounted smartphone and collected data for three stationary activities: standing, sitting and lying, as well as three dynamic activities: walking, walking upstairs and walking downstairs. They implemented a support vector machine (SVM) to recognize the activities and produced a 96% subject-dependent accuracy; specifically, 97% for sitting, 90% for standing and 100% for lying. More recently, Jiang et al. designed and developed deep Convolutional Neural Networks (CNNs) for HAR [15]. They evaluated the proposed system with the dataset in [4], and their CNN-based HAR improved the accuracy by approximately 1% compared to the SVM in the original work [4]. Overall, existing IMU-based systems produce better HAR performance than the loose-fitting smart garment systems. However, smart garment systems suit applications such as posture monitoring during rehabilitation better, since they can operate without the requirement of tight fitting, which is less user-friendly.

2.3 Strain Sensors
Strain sensors are usually fabricated in one of two ways to achieve high stretchability and high sensitivity. One method is to apply mechanical stimuli, such as lateral tension and normal compression, to conductive nanofillers within an elastomeric matrix [1, 35, 38]. Another method is to control micro/nanocracks within a conductive network. Recently, the second method has drawn more research attention due to its flexibility in tuning the sensitivity of the sensor [2, 16, 20, 30, 47, 51, 56]. However, these highly sensitive sensors exhibited variable Gauge



Table 1: Summary of previous works on smart garments.

| Sensors | Loose/tight-fitting | Number of sensors | On-joint | Posture detection accuracy | Citation |
|---|---|---|---|---|---|
| Piezo-resistive strain sensors | Tight-fitting | 21 | No | 97% on 27 stationary postures | [31] |
| IMU | Loose-fitting | 2 | Yes | 81% on 21 dynamic postures | [12] |
| Inductive sensor | Tight-fitting | a long copper wire | No | Can detect stationary angle offset for rehabilitation | [39] |
| Piezo-resistive strain sensors | Loose-fitting | a long fabric | Yes | Can detect dynamic bends and folds | [8] |
| Triboelectric textiles | Loose-fitting | 1 | Yes | 91.3% on 4 dynamic activities | [18] |
| Piezo-resistive strain sensors (our results) | Loose-fitting | 1 | No | 91.7% on 5 activities (2 dynamic and 3 stationary) | this work |

Factors (GFs) in different strain ranges, and hence an additional computing system is required to convert the nonlinear data to linear data over a large strain range [24].

In a typical recent work of the second category, Wu et al. developed a new type of highly stretchable and sensitive strain sensor consisting of vertical graphene nanosheets (VGNs) with a maze-like microstructure as the conductive network [47]. The sensor performance in terms of stretchability and sensitivity outperformed most graphene thin-film-based sensors and, more importantly, the sensor showed an excellent linear relationship between strain and output resistance changes over the entire detection range of 120% strain. Compared with previously reported strain sensors based on microcracks with bi-linearity and a small working strain range [2, 16, 20, 30, 51, 56], it was found that the unique microstructure of microcracks bridged by the conductive strings of graphene/PDMS was critical to achieving this mono-linearity and high stretchability. However, the cost associated with the preparation of vertical graphene is high, and the resulting sensors are black due to the micrometer thickness of the VGNs, which makes them inappropriate for smart garment systems, where optically transparent sensors are required to reduce the impact on people's daily life.

In this work, we develop a novel PDMS/PEDOT/CNFs nanocomposite sensor. This sensor can work in a sensing range with a maximum strain of 58%. It increases the gauge factor from 32.6, as reported in the state-of-the-art work [47], to 103. Moreover, the sensor is fabricated with low-cost materials, implying that it is more suitable for large-scale production.

3 A NEW PIEZO-RESISTIVE SENSOR
To fit a smart garment system, stretchable and wearable piezo-resistive strain sensors based on flexible and conductive polymer nanocomposites were mounted onto loose-fitting clothes. The quantitative metrics of strain sensors are sensitivity (gauge factor) and sensing strain range, or maximum sensing strain. The gauge factor (GF) is determined by a linear regression analysis of the relative resistance change as a function of strain:

    GF = ((R − R0) / R0) / ϵ,    (1)

where R0 is the resistance of the sensor at rest and R is the resistance when an applied strain of ϵ is exerted on the sensor. Strain (ϵ) is calculated by:

    ϵ = (L − L0) / L0 × 100%,    (2)

where L and L0 are the current length and the initial length of the strain sensor, respectively. This section introduces the new sensor in detail1.
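As a quick worked illustration of Eqs. (1) and (2), the short Python function below computes the gauge factor from raw measurements. It is our own sketch for clarity, not part of the E-Jacket software:

```python
def gauge_factor(r, r0, l, l0):
    """Gauge factor per Eq. (1): relative resistance change divided by strain.

    Strain is the fractional elongation (L - L0)/L0 from Eq. (2).
    """
    strain = (l - l0) / l0
    return ((r - r0) / r0) / strain

# Example: resistance rises from 100 kOhm to 300 kOhm at 50% strain -> GF = 4
print(gauge_factor(r=300.0, r0=100.0, l=15.0, l0=10.0))  # 4.0
```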

3.1 The Craftwork of the Strain Sensor
Based on nanocomposites of the conductive polymer poly(3,4-ethylenedioxythiophene) (PEDOT), CNFs and PDMS, a new highly sensitive and stretchable strain sensor was designed and fabricated through the controlled formation of micro/nanocracks within a conductive network. The conductive hybrid thin film of PEDOT and CNFs was sandwiched and encapsulated between two layers of PDMS, which results in a highly integrated sensor structure capable of withstanding large mechanical deformations.

The microcracks in the PEDOT/CNFs layer lead to the increase of resistance and high strain-sensing performance. Figures 1(a) and (b) show the microcracks when the conductive PEDOT/CNFs layer is unstretched and under 50% strain, respectively. These optical microscope pictures were taken using a Zeiss Axio Zoom V16. As shown in Figure 1(a), discrete CNFs are dispersed on the PEDOT film. When the sensor is under 50% strain, zigzag cracks are observed on the PEDOT/CNFs layer, as shown in Figure 1(b). According to the figure, CNFs were pulled out from the PEDOT and bridged the cracks. These microcracks are similar to those within epoxy nanocomposites containing CNFs and graphene nanoplatelets [22, 44, 46].

Figure 1: Optical microscope images of the strain sensor without stretching (a) and under 50% strain (b).

The fabrication process for this sensor is shown in Figure 2 and listed as follows.

(1) Firstly, we treated PDMS films of about 200 µm thickness using O2 plasma for 10 minutes. As a result, we obtain a thin layer of oxidized hydrophilic PDMS.

1Sensor Spec files: https://github.com/ql1179/Ejacket_sensor



(2) Secondly, we coated a conductive thin film of PEDOT/CNFs onto the surface of the oxidized hydrophilic PDMS by drop casting.

(3) We then baked the coated PEDOT/CNFs at 60 °C for 5 hours to enhance the adhesion of the PEDOT/CNFs layer.

(4) Silver paste, with a pair of copper wires attached, was applied at both ends of the conductive film for electrical measurements.

(5) Lastly, we encapsulated another layer of PDMS onto the conductive thin film.

Figure 2: Fabrication process of the proposed sensor.

Regarding materials, the conductive polymer aqueous dispersion of PEDOT:PSS (Clevios PH1000) with a concentration of 1-1.3 wt% was purchased from Heraeus Deutschland GmbH. CNFs (Pyrograf-III, grade PR-24-XT-HHT) were provided by Applied Sciences Inc. The PDMS elastomer kit with base/curing agents (Sylgard 184) was from Dow Corning.

3.2 The Characteristics of the Strain Sensor
The fabricated prototype sensor has a maximum sensing range ϵmax of 58% and a sensitivity (GF) of 103.8. To examine the stability of its mechanical properties, cyclic tests were conducted by applying a sinusoidal cyclic load to stretch the sensor at a frequency of 0.08 Hz. A tensile testing machine (Instron Model 3369) was employed to characterize the mechanical properties. The electrical resistance was measured using a digital multimeter (34465A, Keysight Technologies). Figure 3(a) shows the resistance changes over 1,000 cycles of loading and unloading at a maximum strain of 20%. The cyclic reliability of our sensor is comparable to sensors reported in other recent research [23, 47], showing that the sensors have good working stability against highly frequent strain applied to them.

The response curve in Figure 3(b) plots the relationship between resistance changes and loaded strain. The slope of the linear fitting line corresponds to the GF, whose value is 103.8. The linear fitting coefficient R² of 0.974 implies that the response of the sensor is almost perfectly linear.

The degree of hysteresis (~3%) shown in Figure 4(a) can be evaluated using the ratio of the width of the hysteresis loop to the full output range during the loading-unloading cycle, as discussed in previous works [48]. The optical transmittance measured by ultraviolet-visible spectrophotometry is higher than 40% in the wavelength range of 350 to 850 nm, as shown in Figure 4(b). The sensor itself can work in different aqueous conditions. However, the electrical parts, including the copper wires attached to the silver paste and the embedded system for signal processing, cannot be laundered.
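The hysteresis metric can be made concrete with a short sketch. The helper below is our own illustration, assuming the loading and unloading resistance branches are sampled at matching strain points; the toy numbers are chosen only to reproduce a 3% loop:

```python
import numpy as np

def hysteresis_degree(r_load, r_unload):
    """Ratio of the maximum width of the hysteresis loop to the full
    output range over a loading-unloading cycle (cf. [48])."""
    r_load = np.asarray(r_load, dtype=float)
    r_unload = np.asarray(r_unload, dtype=float)
    width = np.max(np.abs(r_load - r_unload))
    span = np.max(r_load) - np.min(r_load)
    return width / span

# Toy curves: a 0.6 kOhm loop width over a 20 kOhm output range -> 0.03 (3%)
strain = np.linspace(0.0, 0.20, 21)
r_load = 100.0 * strain        # loading branch (resistance change, kOhm)
r_unload = r_load + 0.6        # unloading branch lags slightly behind
print(hysteresis_degree(r_load, r_unload))  # 0.03
```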

This is not the first time a piezo-resistive strain sensor has been used in a smart garment. A tight-fitting smart garment system that used piezo-resistive sensors in earlier work reported a sensitivity of 2 kΩ/mm [31]. With these sensors, they achieved 97%

Figure 3: Test results under cyclic loading and monotonic tensile strain. (a) Relative resistance change of the strain sensor under cyclic loading and unloading at a maximum strain of 20%. (b) Relative resistance change as a function of strain under monotonic tensile strain.

Figure 4: Hysteresis (a) and transparency (b) tests of the sensors.

accuracy in recognizing upper body postures with a skin-attached tight-fitting smart garment. Considering their work as the baseline, the proposed novel piezo-resistive sensor is 150 times more sensitive in detecting strain. For our prototype, the initial length of the sensor is approximately 38 mm and the initial resistance is approximately 137 kΩ. Substituting L0 with 38 mm, R0 with 137 kΩ and GF with 103.8 in Eqs. (1) and (2), the sensitivity between the stretched length and the corresponding resistance change can be computed to be 306 kΩ/mm, which is approximately 150 times more sensitive than the baseline sensor in [31].

Table 2 summarizes the GF and ϵmax of recently reported graphene-based strain sensors, and our sensor compares favorably to them in terms of transparency, low cost, flexibility, and high stretchability. Our ultra-sensitive and stretchable sensor makes it possible to collect sufficient information in a loose-fitting garment with a high signal-to-noise ratio (SNR). Moreover, state-of-the-art deep learning tools are capable of extracting suitable features given sufficient information. Therefore, we expect E-Jacket to overcome the interference from sensor orientation errors and achieve comparable posture detection results with loose-fitting smart garments.

4 E-JACKET DATA ANALYTICS
Since a loose-fitting garment does not perfectly match the user's skin, textile movement is not identical to skin or limb movement. Sensor orientation errors from garment wrinkles can result in a 25% decrease in posture recognition rate going from a skin-attached garment to a loose-fitting garment [11]. To this end, we present E-Jacket, a loose-fitting smart garment prototype based on the novel piezo-resistive sensor modality introduced in Section 3 and a robust deep learning model to address this challenge. Our deep learning model is a combination of a Convolutional Neural Network (CNN) and



Table 2: The sensing performance of recently reported graphene-based strain sensors.

| Graphene-based strain sensors | Maximum sensing range | Gauge factor | Reference |
|---|---|---|---|
| VGNs/PDMS | 120% | 32.6 | [47] |
| Fish scale-like rGO/tape film | 82% | 16.2 | [28] |
| Hybrid AgNWs/AuNWs | 5% | 236 | [13] |
| Graphene woven fabrics on PDMS | 8% | 500 at 2% | [52] |
| Ultrathin graphene films/PDMS | 3.4% | 1037 | [26] |
| Graphene-nanocellulose paper/PDMS | 100% | 7.1 | [49] |
| Fragmentized graphene foam/PDMS | 70% | 15 | [14] |
| Graphene ribbon mesh on stretchable tape | 7.5% | 20 | [29] |
| Graphene aerogel/PDMS | 19% | 61.3 | [45] |
| Compressed graphene foam/PDMS | 120% | 7.2 | [54] |
| Nanographene films | 1.6% | 500 | [55] |
| Monolayer graphene on PDMS | 5% | 151 | [7] |
| Graphene nanoplatelets/stretchable yarns | 150% | <2.5 | [36] |
| Graphene/butadiene styrene rubber/natural rubber composites | 100% | 82.5 | [27] |
| (Our sensor) PEDOT/CNFs on PDMS | 51% | 103.8 | this work |

a Long Short-Term Memory (LSTM) recurrent network [6]. The intuition for choosing CNN-LSTM is three-fold: there exist many works using CNN-LSTM for HAR [33]; a study compared Logistic Regression, SVM with an RBF kernel, CNN, LSTM, Bi-directional LSTM and CNN-LSTM for HAR, and the results showed that CNN-LSTM provides the highest accuracy [32]; and on-body biometrics are usually noisy, so we desire higher accuracy.

4.1 Signal Pre-Processing
Before the CNN-LSTM model, we design three signal pre-processing algorithms for E-Jacket: synchronization, noise filtering, and segmentation. All sensing units attached to the smart garment are synchronized using a time-slotted channel hopping (TSCH) [9] time synchronization mechanism2. After the raw signal is captured, we first apply a Butterworth band-pass filter with cut-off frequencies from 0.5 Hz to 10 Hz to remove irrelevant energy, as useful human motion usually lies below 10 Hz [25]. Due to heavy noise from biometric measurement and loose-fitting wrinkle errors, the similarity between raw signals of the same activity can be very low; the average Pearson correlation among raw signals for the same activity is around 0.35. Besides deep learning tools, a band-pass filter can increase the similarity by removing the noise in other frequency channels. By applying the band-pass filter, the correlation for the same activity increases to 0.7. Since the amplitude of the strain signals contains useful information representing specific activities, we keep the voltage readings unnormalized. We then apply sliding windows with 50% overlap and a size of 4 seconds (equivalent to 512 samples at 128 Hz; see Section 5 for details) to segment the strain signals, and reshape the one-dimensional time series data into a 16 × 32 = 512 sample two-dimensional matrix as the input of the CNN-LSTM model introduced next.

2Other time synchronization methods such as Cheepsync [41] can also be used.
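The filtering and segmentation steps described in Section 4.1 can be sketched in a few lines of SciPy/NumPy. The filter order and the 16-second example signal are illustrative choices of ours; only the 0.5-10 Hz band, 128 Hz sampling rate, 4-second windows with 50% overlap, and 16 × 32 reshaping come from the text:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # sampling rate in Hz (see Section 5)

def bandpass(x, low=0.5, high=10.0, fs=FS, order=4):
    """Butterworth band-pass filter (0.5-10 Hz), applied forward-backward."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def segment(x, win=512, overlap=0.5):
    """Sliding windows of 4 s (512 samples) with 50% overlap,
    each reshaped to a 16 x 32 matrix for the CNN-LSTM input."""
    step = int(win * (1 - overlap))
    windows = [x[i:i + win] for i in range(0, len(x) - win + 1, step)]
    return np.stack(windows).reshape(-1, 16, 32)

x = np.random.default_rng(0).standard_normal(16 * FS)  # 16 s of toy signal
batches = segment(bandpass(x))
print(batches.shape)  # (7, 16, 32)
```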

4.2 CNN-LSTM Model
CNN-LSTM, the combination of CNN and LSTM, was developed for visual time series prediction problems and applications using a sequence of images, such as activity recognition [6, 33]. CNN-LSTM models add CNN layers before the LSTM layers for a feature transformation ϕV(xt) with weights V. Therefore, CNN-LSTM models support sequence prediction similarly to LSTM and perform better on video representation, with the CNN layers performing feature extraction on the input data. Recent studies also show that CNN-LSTM models perform better than their LSTM counterparts in activity recognition applications [33]. Therefore, we selected CNN-LSTM as the machine learning model for E-Jacket.

The structure of the CNN-LSTM model used in E-Jacket is depicted in Figure 5. As discussed, before feeding the data to the model, we reshape the input from a 1D sequence to a 2D matrix to satisfy the input-dimension requirement of the Conv1D layer: one dimension is the time steps, the other is the features at each time step. First, we employ two 1D-convolutional layers on the input data to extract robust features. The 1D-convolutional layer is a variant of CNN particularly designed for sequence and time series data [21]. In 1D-convolutional layers, the convolution filters only move along the time direction across the data. As such, 1D-convolutional layers are capable of deriving features from fixed-length segments. A max pooling layer is applied to extract the most important features from the output feature map of the convolutional layers. When applied to HAR, CNNs have two advantages over other models: local dependency and scale invariance. Local dependency means that nearby signals are likely to be correlated, and scale invariance means the model is scale-invariant for different paces or frequencies [43]. The combination of convolution and pooling is very common in CNNs. We apply max pooling after convolution to extract the numerically highest features from every sub-region in the feature map output by the Conv1D layer. By max pooling, we both select the most important features and reduce the number of features to speed up the training process. The flatten layer reshapes the feature map from the max pooling layer into a 1D vector, which can be considered 1D time series data. The flatten layer can thus be considered a bridge between the CNN and the LSTM, as the input of the LSTM is 1D time series data: the output shape of our Conv1D layer is two-dimensional, which is not consistent with the one-dimensional input shape required by our LSTM, and the flatten layer transforms the two-dimensional data into one dimension to feed the LSTM layer without losing any information.
The last two layers are fully connected. We use the Rectified Linear Unit (ReLU) activation in the second-to-last fully connected layer, and Softmax activation in the last fully connected layer, which outputs the class labels. ReLU is commonly used as an activation function in deep learning models, including CNNs, and has two advantages: fast convergence in training and mitigation of vanishing gradients. First, its gradient does not saturate, which greatly accelerates the convergence of gradient descent compared to other activation functions. Second, it counters vanishing gradients by having a gradient of either 0 or 1. Softmax is another activation function commonly used in classification problems. It is typically placed in the last layer of a model and outputs a probability for each class label; the probabilities of all class labels sum to 1, and the class label with the highest probability becomes the model's final prediction. Neural networks trained on a relatively small dataset can overfit the training data, because the model learns the statistical noise in the training data, which degrades performance when the trained model is evaluated on test or new data. It has been demonstrated that dropout layers help a model learn robust features, prevent overfitting, and reduce the generalization error [42]. We therefore insert two dropout layers, after the second convolutional layer and after the LSTM layer, respectively. The dropout rate is empirically set to 0.5, meaning that 50% of randomly chosen input units of the dropout layers are set to zero.
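The two activation functions and the dropout behavior can be sketched as follows (a minimal NumPy illustration; the logits below are made-up values, not E-Jacket outputs):

```python
import numpy as np

def relu(z):
    # gradient is 1 for z > 0 and 0 otherwise, so it does not saturate
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def dropout(z, rate=0.5, rng=np.random.default_rng(0)):
    """Inverted dropout: zero a `rate` fraction of units and rescale the rest."""
    mask = rng.random(z.shape) >= rate
    return z * mask / (1.0 - rate)

logits = np.array([2.0, 0.5, -1.0, 0.1, 1.2])   # 5 hypothetical activity classes
p = softmax(relu(logits))
print(p.sum().round(6), int(p.argmax()))  # probabilities sum to 1; class 0 wins
```

At inference time dropout is disabled; the rescaling by `1/(1 - rate)` during training keeps the expected activation magnitude unchanged.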

Figure 5: The CNN-LSTM model structure of E-Jacket.

5 E-JACKET PROTOTYPING AND DATA COLLECTION

5.1 Sensing Unit

With the new strain sensor discussed in Section 3, we build our prototype attachable sensing unit as shown in Figure 6. We use the SensorTag manufactured by Texas Instruments3 to capture the strain measurements from the sensor. The SensorTag data logger features a Cortex-M4 microcontroller and a 2.4 GHz low-power radio transceiver that supports both Bluetooth Low Energy and IEEE 802.15.4. Furthermore, in order to log the resistance changes in the SensorTag, we designed and implemented an amplification circuit that converts the resistance changes into voltage changes. After tuning the amplification circuit, the initial voltage is approximately 1.2 V and the ratio between the sampled voltage and the stretch level is approximately 0.05 V/mm. As such, the stretch levels can be measured by a 12-bit on-board Analog-to-Digital Converter (ADC) of the SensorTag within its dynamic range. The sampling rate of the ADC is 128 Hz. Finally, the ADC voltage readings are stored in the on-board flash memory of the SensorTag for off-line analysis.
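The circuit parameters above suggest a simple reading-to-stretch conversion. The sketch below assumes a 3.3 V ADC full-scale reference, which the text does not state; only the 1.2 V offset and the 0.05 V/mm sensitivity come from the tuning described above:

```python
V_REF = 3.3          # assumed ADC full-scale voltage (not stated in the text)
ADC_BITS = 12        # 12-bit on-board ADC of the SensorTag
V0 = 1.2             # initial (unstretched) output of the amplification circuit, V
SENSITIVITY = 0.05   # V per mm of stretch, from tuning the circuit

def adc_to_stretch_mm(raw):
    """Convert a 12-bit ADC code to strain-sensor stretch in millimetres."""
    voltage = raw * V_REF / (2 ** ADC_BITS - 1)
    return (voltage - V0) / SENSITIVITY

# a reading of 1.45 V corresponds to (1.45 - 1.2) / 0.05 = 5 mm of stretch
raw = round(1.45 * (2 ** 12 - 1) / 3.3)
print(round(adc_to_stretch_mm(raw), 1))  # ≈ 5.0
```

The 12-bit quantization step at a 3.3 V reference is about 0.8 mV, i.e., about 0.016 mm of stretch, well below the signal amplitudes in Figure 8.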

Figure 6: A sensing unit comprising a strain sensor with an amplification circuit and a SensorTag as the data logger.

The synchronization module discussed in Section 4 was written into the firmware of the SensorTag, along with the sampling application, using Contiki OS4.

5.2 Data Collection

Since the strain sensor can be sewn directly onto the garment, we stitch our sensing unit along the direction of the garment fabric, which is compliant with common practice. To evaluate the impact of different body locations, we selected two joint locations, shoulder and elbow, as well as two non-joint locations, waist and abdomen, and attached our sensing units to a size-L men's jacket. The left side of Figure 7 shows our E-Jacket prototype.

We recruited 12 subjects (10 males and 2 females) for data collection5. Their ages range from 20 to 34, heights from 159 to 184 cm, and weights from 55 to 94 kg. After a subject put on the jacket, she/he was asked to stay still for 5 minutes in each of 3 stationary postures (sitting, standing, and lying), walk for 3 minutes, and then run for 20 seconds, as shown in the right side of Figure 7.

6 EVALUATION

In this section, we evaluate the feasibility of a loose-fitting E-Jacket via three case studies: daily activity recognition, stationary posture recognition with random hand movements, and slouch detection. Since these case studies can be cast as classification problems, we use accuracy (recognition rate) as the evaluation metric. As a classification problem, there are three different ways to design a posture detector with E-Jacket, as follows.

3 SensorTag: http://www.ti.com/ww/en/wireless_connectivity/sensortag2015/index.html
4 Contiki OS: http://www.contiki-os.org
5 Ethical approval has been granted by the corresponding organization (Approval Number HC190407).



Figure 7: E-Jacket prototype with 4 sensor locations (left). Five different postures and activities to be recognized by E-Jacket (right).

• Subject-dependent: a separate posture detector is trained on data from each subject. This implies that a user must train the E-Jacket smart garment system before using it. With this per-user assistance, this strategy is expected to have the highest accuracy. 10-fold cross-validation is applied for this experiment. The size of the training set is 57600 samples.

• General-model: a general posture detector is trained on all available data, and this general detector is used for the classification tasks of different individuals. Similarly, we apply 10-fold cross-validation here. The size of the training set is 691200 samples.

• Unseen-subject: a population posture detector is trained on all the data except for those from the subject to be tested. This strategy is more user-friendly than the other two, since a user can use the smart garment directly with a population model trained by the smart garment manufacturer. However, as we will show later, it is expected to have the lowest accuracy among the three strategies. The size of the training set is 704000 samples.
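The three strategies above can be sketched as index-split logic (a hypothetical layout with equal-sized per-subject index sets; the real pipeline's segmentation and fold sizes differ):

```python
import numpy as np

def splits(samples_by_subject, strategy, test_subject=None):
    """Sketch of the three evaluation strategies. `samples_by_subject` maps a
    subject id to the indices of that subject's samples (hypothetical layout)."""
    if strategy == "subject-dependent":
        # train and test on one subject's own data (10-fold CV in practice)
        idx = samples_by_subject[test_subject]
        return idx, idx
    if strategy == "general-model":
        # pool every subject's data; 10-fold CV over the pooled set
        all_idx = np.concatenate(list(samples_by_subject.values()))
        return all_idx, all_idx
    if strategy == "unseen-subject":
        # leave-one-subject-out: the test subject never appears in training
        train = np.concatenate([v for s, v in samples_by_subject.items()
                                if s != test_subject])
        return train, samples_by_subject[test_subject]

data = {s: np.arange(s * 100, s * 100 + 100) for s in range(12)}
train, test = splits(data, "unseen-subject", test_subject=3)
print(len(train), len(test), np.intersect1d(train, test).size)  # 1100 100 0
```

The key invariant of the unseen-subject strategy is the empty intersection between train and test indices for the held-out subject.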

To benchmark the performance of the proposed CNN-LSTM model (see Section 4) against conventional machine learning algorithms, we have also implemented SVM as the baseline for all case studies.

6.1 Posture and Activity Recognition

We study the strain signals for the recognition of five typical daily activities, including the 3 most common daily static postures: sitting, standing, and lying. Two common activities, walking and running, were also included in the test to further evaluate the system. In this case study, we aim to investigate the feasibility of using a loose-fitting smart garment for activity recognition, and to evaluate the impact of different parameters of the smart garment system, such as the sensor locations on the jacket, the sampling rates of the sensors, and the Body Mass Index (BMI) of the subjects.

6.1.1 Activity recognition results. Figure 8 shows examples of pre-processed strain voltage signals in four different channels/body locations. We can see that different activities produce different patterns of strain signals, and such patterns also vary slightly across body locations. After we segment the strain signals and feed them into the proposed CNN-LSTM models, we can train a subject-dependent classifier with 90.9% accuracy, a general model with 81.3% accuracy, and a population model tested on unseen subjects with 73.5% accuracy. The confusion matrix for a subject-dependent classifier is shown in Table 3. Because of the similarity of walking and running in the strain signals, E-Jacket has the lowest accuracy in recognizing running, misclassifying it as walking with 13.2% probability.

Table 3: Confusion matrix of 5 activities including 3 static postures without hand movements.

        Stand   Sit     Lie     Walk    Run     Recognition rate
Stand   87.6%   1.4%    8.1%    1.9%    0.9%    87.6%
Sit     0%      98.4%   1.4%    0.2%    0%      98.4%
Lie     4.6%    2.7%    91.2%   0.9%    0.5%    91.2%
Walk    3.3%    1.2%    0.2%    93.4%   1.9%    93.4%
Run     1.6%    1.9%    1.5%    13.2%   81.7%   81.7%

Overall 90.9%
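Per-class recognition rates such as those in Table 3 are simply the diagonal of a row-normalized confusion matrix. A sketch with hypothetical counts (chosen only to roughly echo Table 3's percentages, not the actual experimental counts):

```python
import numpy as np

# hypothetical raw counts; row = true class, column = predicted class
classes = ["Stand", "Sit", "Lie", "Walk", "Run"]
counts = np.array([[876,  14,  81,  19,   9],
                   [  0, 984,  14,   2,   0],
                   [ 46,  27, 912,   9,   5],
                   [ 33,  12,   2, 934,  19],
                   [ 16,  19,  15, 132, 817]])

rates = counts.diagonal() / counts.sum(axis=1)     # per-class recognition rate
overall = counts.diagonal().sum() / counts.sum()   # overall accuracy
print({c: round(float(r), 3) for c, r in zip(classes, rates)}, round(float(overall), 3))
```

Note that the overall accuracy weights each class by its sample count, which is why it need not equal the mean of the per-class rates.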

6.1.2 Impact of sensor placements. Figure 8 also shows that the measurements from the strain sensors at different body locations have a significant impact on recognizing specific postures. If there is only a very weak signal at certain body locations, it is difficult to capture posture information there. Among the three stationary postures (Figures 8(a) to 8(c)), the signals at the elbow have the strongest energy when the user is standing; the signals from the abdomen become dominant when the user is sitting; and the signals at all body locations are relatively weak when the user is lying.

On the other hand, when the user is mobile (i.e., walking and running, Figures 8(d) and 8(e)), the piezo-resistive sensors at nearly all body locations capture strong signals, except that the sensor on the elbow captures relatively weak signals when the user is running. Furthermore, in the two mobile activities, we can observe repeated patterns, i.e., peaks in the strain signals that correspond to the user's steps. Intuitively, these show that the step frequency is higher in running than in walking, because there are more signal peaks per unit of time.

Table 4 shows the impact of different body locations on activity recognition, where E stands for elbow, S for shoulder, W for waist, and A for abdomen. A common perception is that machine learning models benefit from more input strain signals from different body locations, as they contain useful information. Counter-intuitively, the results in Table 4 show that if we remove the signals from the shoulder or waist locations, the recognition rates actually improve, though slightly. Our hypothesis is that the strain signals at the shoulder and waist locations do not contain information relevant to distinguishing daily activities. Since the results in Table 4 show that the best sensor combination includes those at the elbow,



[Figure 8, panels (a)–(e): strain voltage (0.8–1.6 V) versus time (0–4 s) for sitting, standing, lying, walking, and running, each with traces for the elbow, shoulder, abdomen, and waist sensors.]

Figure 8: Strain voltage signal curves grouped by different activities.

Table 4: The activity recognition accuracy with strain sensors at different body locations. (E – elbow, S – shoulder, W – waist, A – abdomen in the first row.)

Locations   EWA    ESA    ESWA   ESW    EA     SWA    WA     EW     ES     SW
CNN-LSTM    91.7%  91.4%  90.9%  84.3%  83.4%  81.5%  80.4%  74.1%  73.9%  66.3%

SVM         90.7%  90.1%  89.6%  84.0%  83.5%  80.4%  77.7%  69.9%  66.4%  60.5%

waist, and abdomen (i.e., EWA), we will use this combination in the following evaluation of E-Jacket.

6.1.3 Subject details. Since the E-Jacket prototype we made (see Figure 7) best fits male subjects with a height of about 175 cm, the 'looseness' varies across subjects due to their different BMIs. Specifically, the E-Jacket prototype fits more loosely on lighter and shorter subjects than on their heavier and taller counterparts. We expect the activity recognition rates to be lower when the E-Jacket fits more loosely, because the correlation between the strain voltage signals and the skin movement being sensed decreases. Table 5 shows that the activity recognition rates of Subjects #1, #3, #7 and #8, all of whose BMIs are above 24, exceed 93% with the subject-dependent CNN-LSTM model. Unsurprisingly, the activity recognition rates of the subjects with smaller BMIs (i.e., #2, #9, #10) are lower than those of the subjects with larger BMIs discussed above. Nevertheless, the activity recognition rates are not strongly correlated with the body sizes of the subjects. For example, Subjects #6 and #11 have BMIs smaller than 22.5, yet they still have high recognition rates (more than 94%). The standard deviation of the activity recognition accuracy across the 12 subjects ranges from 3.7% to 4.7%.

Table 5 also shows that it is challenging to train a population activity recognition model for E-Jacket at manufacturing time. Specifically, the recognition rates for unseen subjects decrease by more than 15% compared to the subject-dependent model.


Figure 9: Recognition accuracy vs. sampling rates.

6.1.4 Impact of sampling rates. For resource-constrained smart garment systems, (high-frequency) data sampling is one of the dominant consumers of system power. It is therefore desirable



Table 5: Individual results for all subjects with different CNN-LSTM models and SVM.

Subject Details      Subject-dependent     General-model         Unseen-subject
                     Accuracy (%)          Accuracy (%)          Accuracy (%)
No.  Gender  BMI     CNN-LSTM   SVM        CNN-LSTM   SVM        CNN-LSTM   SVM
1    M       28.1    95.5       95.7       85.7       84.2       76.0       66.2
2    M       18.2    88.1       85.3       76.9       73.7       72.8       65.6
3    M       30.7    93.2       90.4       82.4       78.2       73.0       65.8
4    M       22.6    89.2       87.6       79.0       76.1       68.6       61.3
5    M       24.0    87.9       85.3       78.0       72.6       66.9       59.0
6    M       22.4    95.9       96.8       85.6       83.8       75.7       67.1
7    M       26.3    93.6       94.2       83.1       81.7       73.9       67.8
8    M       24.2    95.4       94.9       85.7       83.2       77.3       67.9
9    F       22.5    84.2       82.1       73.5       69.9       68.2       60.0
10   F       21.9    91.2       91.7       81.3       80.5       73.8       64.3
11   M       20.7    94.4       93.0       84.1       80.1       78.6       71.2
12   M       24.0    91.7       91.2       81.0       78.4       76.6       68.8

Overall 91.7 ± 3.7 90.7 ± 4.7 81.3 ± 3.9 78.5 ± 4.6 73.5 ± 3.8 65.4 ± 3.7

to investigate an optimal sampling rate that minimizes system energy consumption while maintaining an acceptable activity recognition rate. Figure 9 shows that E-Jacket achieves a recognition rate of 90.2% at 40 Hz with CNN-LSTM, which increases only slightly to 91.7% at 128 Hz. A 1.5% improvement in recognition accuracy thus requires a more than three times higher sampling rate, which implies that, with the proposed ultra-sensitive piezo-resistive sensor, E-Jacket does not require a high sampling rate to achieve a high recognition rate.
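Evaluating lower sampling rates amounts to resampling the 128 Hz recordings. A minimal sketch using linear interpolation (our assumption for illustration; the text does not describe the downsampling method actually used):

```python
import numpy as np

def resample(signal, fs_in=128, fs_out=40):
    """Linear-interpolation resampling of one strain channel (a simple sketch;
    a production system would low-pass filter before decimating)."""
    t_in = np.arange(len(signal)) / fs_in
    t_out = np.arange(0, t_in[-1], 1 / fs_out)
    return np.interp(t_out, t_in, signal)

x = np.sin(2 * np.pi * 1.5 * np.arange(4 * 128) / 128)  # 4 s of a 1.5 Hz "step" pattern
y = resample(x)
print(len(x), len(y))  # 512 samples at 128 Hz -> 160 samples at 40 Hz
```

A 40 Hz rate still leaves ample margin over the few-hertz step frequencies visible in Figure 8, which is consistent with the small accuracy loss reported above.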

6.1.5 Comparison of CNN-LSTM models with SVM. To demonstrate the advantages of the CNN-LSTM model, we compare the performance of E-Jacket using the proposed CNN-LSTM model against a conventional machine learning algorithm, i.e., SVM. Table 4 shows that CNN-LSTM has higher recognition rates than SVM in all cases but one: when we use strain signals from two locations, elbow and abdomen (Column EA in the table), SVM achieves 83.5% while CNN-LSTM achieves 83.4%. We note that the 0.1% difference is very small and probably negligible. A further comparison of CNN-LSTM and SVM in Table 5 and Figure 9 shows that the activity recognition rates of E-Jacket with CNN-LSTM are on average 2% higher than those with SVM. Specifically, SVM achieves performance close to that of CNN-LSTM when the recognition rates are high, such as the subject-dependent accuracy of Subjects #1, #6, #7, and #8 in Table 5. However, the performance gap between CNN-LSTM and SVM becomes significant when the recognition rates are low. For example, for unseen subjects, the gap is more than 8% in favor of CNN-LSTM.

6.1.6 Comparison of CNN-LSTM models with LSTM. To evaluate the effectiveness of the CNN layers in the LSTM model, we compare the results of our CNN-LSTM using Conv1D layers against a conventional LSTM. With the conventional LSTM, the first 5 layers described in Section 4 are removed, and we do not need to reshape the input from 1D to 2D. The results in Table 6 show that with the extra CNN layers (the first 5 layers), the overall accuracy improves slightly over the conventional 4-layer LSTM. These results match the existing literature on CNN-LSTM for HAR tasks [33].

Table 6: Overall results of CNN-LSTM and conventional LSTM.

                                  CNN-LSTM     LSTM
Subject-dependent Accuracy (%)    91.7 ± 3.7   90.2 ± 3.1
General-model Accuracy (%)        81.3 ± 3.9   80.5 ± 2.6
Unseen-subject Accuracy (%)       73.5 ± 3.8   71.4 ± 2.2

6.1.7 Parameter Evaluation of CNN-LSTM. Epochs and batch size are two important tuning parameters of neural networks. The number of epochs denotes how many times the entire dataset is passed forward and backward through the neural network during training. However, the entire dataset is usually too large to process at once. As introduced at the beginning of this section, the training set contains 57600 samples for subject-dependent, 691200 samples for general-model, and 704000 for unseen-subject. Therefore, the dataset is usually divided into batches, and the batch size is another tuning parameter. Since the sampling rate of the data is 128 Hz, we evaluate batch sizes in multiples of 64 samples.

The evaluation of epochs and batch size is shown in Figure 10. By training deep learning models long enough, the model can fully reflect the characteristics of the training set; however, there is a limit beyond which longer training no longer improves performance. It is therefore expected that performance improves as the number of epochs increases until it reaches this limit. Figure 10(a) shows that the limit is approximately 350 epochs. Regarding the batch size, experience shows that too large a batch size leads to poor performance and that there is an optimal batch size. Figure 10(b) shows that the optimal batch size is 192 samples.
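With the reported training-set sizes and the tuned batch size, the number of gradient updates per epoch follows directly:

```python
import math

def batches_per_epoch(n_samples, batch_size):
    # one epoch = one full forward/backward pass over the training set,
    # processed in ceil(n/batch_size) mini-batches
    return math.ceil(n_samples / batch_size)

# reported training-set sizes with the optimal batch size of 192
for n in (57600, 691200, 704000):
    print(n, batches_per_epoch(n, 192))
```

For example, the subject-dependent training set of 57600 samples yields exactly 300 updates per epoch at batch size 192.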

6.2 Stationary Posture Recognition with Random Hand Movements

One of the applications for smart garments is posture monitoring during rehabilitation. Such a system will need to monitor a user's



(a) Epochs (b) Batch size

Figure 10: Parameter Tuning in CNN-LSTM.

Figure 11: The strain sensor voltage curves at four body locations when the subject is standing with random hand motions.

stationary postures, such as sitting, standing, and lying, whenever the user wears the smart garment. In daily life, the user will not be totally static in these stationary postures, since she may use her hands for tasks such as typing, drinking, eating, and using a smartphone. Therefore, the deep learning models of E-Jacket must account for hand movements as well. If we regard random hand movement as a new noise source, the E-Jacket smart garment system should be robust to such random noise. Ideally, the (stationary) posture recognition results with such random noise should be the same as those without.

Earlier, in Section 5, we collected strain signals for three stationary postures, i.e., sitting, standing, and lying. Here, we collected additional data for the three stationary postures from subjects who were asked to type, drink, and use smartphones while sitting, standing, or lying. There are five minutes of strain signal data for each posture.

Figure 11 shows the strain sensor voltage curves at different body locations. The voltage curves at the joint body locations (i.e., elbow and shoulder), especially the shoulder, change significantly when a subject performs activities using her hands. For the non-joint body locations (i.e., abdomen and waist), the voltage curves are significantly smoother than those at joint body locations, though some random artifacts appear at non-joint locations at times.

We further train the CNN-LSTM and SVM models on this dataset and evaluate the performance of stationary posture recognition with random hand movements using 10-fold cross-validation. The results are listed in Table 7. In the earlier case study, the activity recognition rates for five daily activities reached up to 91.7% (see

Table 7: The activity recognition accuracy with strain sensors at different body locations with random hand movements. (E – elbow, S – shoulder, W – waist, A – abdomen in the first row.)

Locations   ESWA   EWA    WA     SWA    ESA    ESW
CNN-LSTM    84.4%  83.7%  83.4%  79.5%  72.3%  67.9%

SVM         77.6%  76.2%  79.9%  76.4%  64%    54.5%

Table 4). With random hand movements as an additional noise source, the recognition accuracy decreases to 84.4% with strain sensors at all four locations as the input of the CNN-LSTM model. We note that the conventional machine learning algorithm SVM performs significantly worse than the proposed CNN-LSTM in the presence of strong noise such as random hand movements. This further justifies our design choice of CNN-LSTM for E-Jacket.

In this application, strain sensors at non-joint body locations (i.e., abdomen and waist), especially the abdomen, produce more informative signals, as shown earlier in Figure 11. Without the measurements from the abdomen sensor, the stationary posture recognition rate drops to only 67.9% (a decrease of more than 15% from 84.4%). Although the shoulder sensor has little impact on posture recognition, as shown in both Tables 4 and 7 (e.g., Columns EWA and ESWA in both tables), the signals it observes change significantly when the user uses her hands. Therefore, we may take the signals observed by the shoulder sensor as an indicator of random hand motions, and the smart garment system may discard the data when random hand movement is detected, to increase the posture recognition rates.

6.3 Slouch Detection

Figure 12: Sitting postures with 90°, 120°, and 150° angles between spine and hips, respectively. The middle and right sitting postures are considered slouch.

Another important application of a smart garment system is to detect and alert on slouching, i.e., poor sitting postures. Poor sitting habits and bad sitting postures are a common cause of musculoskeletal disorders such as back pain [19, 34]. A proper sitting posture requires the spine of a person to be fully supported by the chair. Similar to [12], we use the angle between the spine and hips as the indicator to detect slouch, as shown in Figure 12. Postures with an angle of 90° are considered proper sitting postures, while sitting with angles of 120° or 150° is slouch. To this end, we collected data for three sitting postures with strain sensors at four different



Table 8: The recognition rate of two slouch detection tasks

Tasks       Slouch Recognition   (Three) Angle Recognition
CNN-LSTM    88.4%                86.3%
SVM         77.6%                73.9%

body locations. Figure 13 shows the strain sensor voltage curves for the three different sitting postures at different body locations. The signal measured by the abdomen sensor is the most informative, with significant differences in energy across the three types of sitting postures.
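The angle-based slouch definition can be sketched as a simple threshold rule (the tolerance band is our assumption for illustration; E-Jacket actually infers the class from strain signals with CNN-LSTM rather than from a measured angle):

```python
def classify_sitting(angle_deg, tolerance=15.0):
    """Label a hip-spine angle: 90° is proper sitting; larger deviations
    (the 120° and 150° postures of Figure 12) are slouch.
    The tolerance band is a hypothetical parameter, not from the study."""
    if abs(angle_deg - 90.0) <= tolerance:
        return "proper"
    return "slouch"

print([classify_sitting(a) for a in (90, 120, 150)])  # ['proper', 'slouch', 'slouch']
```

This makes explicit the ground-truth labeling behind the two recognition tasks defined next: Slouch Recognition is the binary rule above, while Angle Recognition keeps the three angles as separate classes.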

Figure 13: The strain sensor voltage curves at four body locations when the subject is sitting with different angles between spine and hips.

Our first recognition task in this case study is to investigate whether E-Jacket can distinguish a proper sitting posture from the two types of slouch shown in Figure 12, which we call Slouch Recognition. Our second task is to study whether E-Jacket can classify the three sitting postures, which involves finer-grained hip–spine angle information than the first task. We call the second task (Three) Angle Recognition. Table 8 shows that E-Jacket can recognize slouches with 88.4% accuracy (CNN-LSTM). For the more challenging task of Angle Recognition, the performance of E-Jacket (CNN-LSTM) decreases slightly to 86.3%. Similar to stationary posture recognition with random hand movements discussed earlier, CNN-LSTM improves the recognition rates by more than 10% compared to SVM. We further study the confusion matrix for Angle Recognition, shown in Table 9. E-Jacket has an accuracy of 93.2% when a user sits with a 150° hip–spine angle, which is considered "highly slouching", and an accuracy of 83.4% for the less slouching angle of 120°.

Table 9: The confusion matrix of Angle Recognition.

       90°     120°    150°    Recognition rate
90°    86.6%   9.9%    3.5%    86.6%
120°   11.1%   83.4%   5.5%    83.4%
150°   3%      3.7%    93.2%   93.2%

6.4 Energy Consumption

In the energy consumption measurement experiment, we connect the prototype device (the SensorTag) to a GDS-800 digital oscilloscope to measure the average energy consumed by each piezo-resistive sensor sampling process. The measurement setup is shown in Figure 14. We duty-cycle the microcontroller in the SensorTag to reduce energy consumption. Our measurement shows that a sampling rate of 128 Hz results in a power consumption of approximately 492 µW for sampling a strain sensor.
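The reported figures imply the per-sample energy cost directly (a back-of-the-envelope check; the 492 µW and 128 Hz values are from the measurement above):

```python
POWER_W = 492e-6   # measured average power while sampling a strain sensor (492 µW)
FS_HZ = 128        # ADC sampling rate

energy_per_sample_j = POWER_W / FS_HZ      # energy drawn per ADC sample
energy_per_day_j = POWER_W * 24 * 3600     # continuous sampling for 24 h
print(round(energy_per_sample_j * 1e6, 2), round(energy_per_day_j, 1))  # 3.84 42.5
```

That is, each sample costs roughly 3.84 µJ, and a full day of continuous 128 Hz sampling of one sensor draws about 42.5 J, a small fraction of a typical coin-cell's capacity.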

Figure 14: Energy consumption setup.

7 CONCLUSION

We have designed, fabricated, and evaluated a new garment-friendly piezo-resistive strain sensor, which has more than 150 times higher sensitivity than previously reported sensors. Using our novel sensors, we have implemented a loose-fitting smart garment prototype and designed a deep learning model to address the sensor noise arising from the loose fit. We evaluated the performance of the prototype in experimental environments with 12 subjects for both static and dynamic postures, which confirms the effectiveness of the new sensor in detecting human postures with loose-fitting garments.

REFERENCES
[1] M. Amjadi, A. Pichitpajongkit, S. Lee, S. Ryu, and I. Park, "Highly stretchable and sensitive strain sensor based on silver nanowire–elastomer nanocomposite," ACS Nano, vol. 8, no. 5, pp. 5154–5163, 2014.
[2] M. Amjadi, M. Turan, C. P. Clementson, and M. Sitti, "Parallel microcracks-based ultrasensitive and highly stretchable strain sensors," ACS Applied Materials & Interfaces, vol. 8, no. 8, pp. 5618–5626, 2016.
[3] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, "Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine," in International Workshop on Ambient Assisted Living. Springer, 2012, pp. 216–223.
[4] ——, "A public domain dataset for human activity recognition using smartphones," in ESANN, 2013.
[5] Y. Chen and Y. Xue, "A deep learning approach to human activity recognition based on single accelerometer," in 2015 IEEE International Conference on Systems, Man, and Cybernetics. IEEE, 2015, pp. 1488–1492.
[6] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, "Long-term recurrent convolutional networks for visual recognition and description," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2625–2634.
[7] X.-W. Fu, Z.-M. Liao, J.-X. Zhou, Y.-B. Zhou, H.-C. Wu, R. Zhang, G. Jing, J. Xu, X. Wu, W. Guo et al., "Strain dependent resistance in chemical vapor deposition grown graphene," Applied Physics Letters, vol. 99, no. 21, p. 213107, 2011.
[8] G. Gioberto, J. Coughlin, K. Bibeau, and L. E. Dunne, "Detecting bends and fabric folds using stitched sensors," in Proceedings of the 2013 International Symposium on Wearable Computers. ACM, 2013, pp. 53–56.
[9] IEEE 802.15 Working Group, "IEEE standard for local and metropolitan area networks, Part 15.4: Low-rate wireless personal area networks (LR-WPANs)," IEEE Std 802.15.4-2011, 2011.
[10] A. Hanuska, B. Chandramohan, L. Bellamy, P. Burke, R. Ramanathan, and V. Balakrishnan, "Smart clothing market analysis," Technical Report, 2016.
[11] H. Harms, O. Amft et al., "Estimating posture-recognition performance in sensing garments using geometric wrinkle modeling," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 6, pp. 1436–1445, 2010.



[12] H. Harms, O. Amft, D. Roggen, and G. Tröster, “Rapid prototyping of smartgarments for activity-aware applications,” Journal of Ambient Intelligence andSmart Environments, vol. 1, no. 2, pp. 87–101, 2009.

[13] M. D. Ho, Y. Ling, L. W. Yap, Y. Wang, D. Dong, Y. Zhao, and W. Cheng, “Percolat-ing network of ultrathin gold nanowires and silver nanowires toward “invisible”wearable sensors for detecting emotional expression and apexcardiogram,” Ad-vanced Functional Materials, vol. 27, no. 25, p. 1700845, 2017.

[14] Y. R. Jeong, H. Park, S. W. Jin, S. Y. Hong, S.-S. Lee, and J. S. Ha, “Highly stretch-able and sensitive strain sensors using fragmentized graphene foam,” AdvancedFunctional Materials, vol. 25, no. 27, pp. 4228–4236, 2015.

[15] W. Jiang and Z. Yin, “Human activity recognition using wearable sensors by deepconvolutional neural networks,” in Proceedings of the 23rd ACM internationalconference on Multimedia. Acm, 2015, pp. 1307–1310.

[16] D. Kang, P. V. Pikhitsa, Y. W. Choi, C. Lee, S. S. Shin, L. Piao, B. Park, K.-Y. Suh,T.-i. Kim, and M. Choi, “Ultrasensitive mechanical crack-based sensor inspiredby the spider sensory system,” Nature, vol. 516, no. 7530, p. 222, 2014.

[17] S. Khalifa, G. Lan, M. Hassan, A. Seneviratne, and S. K. Das, “Harke: Humanactivity recognition from kinetic energy harvesting data in wearable devices,”IEEE Transactions on Mobile Computing, vol. 17, no. 6, pp. 1353–1368, 2017.

[18] A. Kiaghadi, M. Baima, J. Gummeson, T. Andrew, and D. Ganesan, “Fabric asa sensor: Towards unobtrusive sensing of human behavior with triboelectrictextiles,” in Proceedings of the 16th ACMConference on Embedded Networked SensorSystems. ACM, 2018, pp. 199–210.

[19] J. B. Kim, J.-K. Yoo, and S. Yu, “Neck–tongue syndrome precipitated by prolongedpoor sitting posture,” Neurological Sciences, vol. 35, no. 1, pp. 121–122, 2014.

[20] S. J. Kim, W. Song, Y. Yi, B. K. Min, S. Mondal, K.-S. An, and C.-G. Choi, “Highdurability and waterproofing rgo/swcnt-fabric-based multifunctional sensors forhuman-motion detection,” ACS applied materials & interfaces, vol. 10, no. 4, pp.3921–3928, 2018.

[21] Y. Kim, “Convolutional neural networks for sentence classification,” arXiv preprintarXiv:1408.5882, 2014.

[22] R. B. Ladani, S. Wu, A. J. Kinloch, K. Ghorbani, J. Zhang, A. P. Mouritz, andC. H. Wang, “Improving the toughness and electrical conductivity of epoxynanocomposites by using aligned carbon nanofibres,” Composites Science andTechnology, vol. 117, pp. 146–158, 2015.

[23] J. Lee, S. Shin, S. Lee, J. Song, S. Kang, H. Han, S. Kim, S. Kim, J. Seo, D. Kim et al.,“Highly sensitive multifilament fiber strain sensors with ultrabroad sensing rangefor textile electronics,” ACS nano, vol. 12, no. 5, pp. 4259–4268, 2018.

[24] M. Lee, J. U. Kim, J. S. Lee, B. I. Lee, J. Shin, and C. B. Park, “Mussel-inspiredplasmonic nanohybrids for light harvesting,” Advanced Materials, vol. 26, no. 26,pp. 4463–4468, 2014.

[25] J. Lester, B. Hannaford, and G. Borriello, “?are you with me??–using accelerome-ters to determine if two devices are carried by the same person,” in InternationalConference on Pervasive Computing. Springer, 2004, pp. 33–50.

[26] X. Li, T. Yang, Y. Yang, J. Zhu, L. Li, F. E. Alam, X. Li, K. Wang, H. Cheng, C.-T.Lin et al., “Large-area ultrathin graphene films by single-step marangoni self-assembly for highly sensitive strain sensing application,” Advanced FunctionalMaterials, vol. 26, no. 9, pp. 1322–1329, 2016.

[27] Y. Lin, S. Liu, S. Chen, Y. Wei, X. Dong, and L. Liu, “A highly stretchable andsensitive strain sensor based on graphene–elastomer composites with a noveldouble-interconnected network,” Journal of Materials Chemistry C, vol. 4, no. 26,pp. 6345–6352, 2016.

[28] Q. Liu, J. Chen, Y. Li, and G. Shi, “High-performance strain sensors with fish-scale-like graphene-sensing layers for full-range detection of human motions,” ACS Nano, vol. 10, no. 8, pp. 7901–7906, 2016.

[29] Q. Liu, M. Zhang, L. Huang, Y. Li, J. Chen, C. Li, and G. Shi, “High-quality graphene ribbons prepared from graphene oxide hydrogels and their application for strain sensors,” ACS Nano, vol. 9, no. 12, pp. 12320–12326, 2015.

[30] Z. Liu, D. Qi, G. Hu, H. Wang, Y. Jiang, G. Chen, Y. Luo, X. J. Loh, B. Liedberg, and X. Chen, “Surface strain redistribution on structured microfibers to enhance sensitivity of fiber-shaped stretchable strain sensors,” Advanced Materials, vol. 30, no. 5, p. 1704229, 2018.

[31] C. Mattmann, O. Amft, H. Harms, G. Tröster, and F. Clemens, “Recognizing upper body postures using textile strain sensors,” in 2007 11th IEEE International Symposium on Wearable Computers. IEEE, 2007, pp. 29–36.

[32] B. Moradi, M. Aghapour, and A. Shirbandi, “Compare of machine learning and deep learning approaches for human activity recognition,” EasyChair, Tech. Rep., 2019.

[33] J. C. Nunez, R. Cabido, J. J. Pantrigo, A. S. Montemayor, and J. F. Velez, “Convolutional neural networks and long short-term memory for skeleton-based human activity and hand gesture recognition,” Pattern Recognition, vol. 76, pp. 80–94, 2018.

[34] C. Obermair, W. Reitberger, A. Meschtscherjakov, M. Lankes, and M. Tscheligi, “perFrames: Persuasive picture frames for proper posture,” in International Conference on Persuasive Technology. Springer, 2008, pp. 128–139.

[35] C. Pang, G.-Y. Lee, T.-i. Kim, S. M. Kim, H. N. Kim, S.-H. Ahn, and K.-Y. Suh, “A flexible and highly sensitive strain-gauge sensor using reversible interlocking of nanofibres,” Nature Materials, vol. 11, no. 9, p. 795, 2012.

[36] J. J. Park, W. J. Hyun, S. C. Mun, Y. T. Park, and O. O. Park, “Highly stretchable and wearable graphene strain sensors with controllable sensitivity for human motion monitoring,” ACS Applied Materials & Interfaces, vol. 7, no. 11, pp. 6317–6324, 2015.

[37] T. Plötz, N. Y. Hammerla, and P. L. Olivier, “Feature learning for activity recognition in ubiquitous computing,” in Twenty-Second International Joint Conference on Artificial Intelligence, 2011.

[38] S. Ryu, P. Lee, J. B. Chou, R. Xu, R. Zhao, A. J. Hart, and S.-G. Kim, “Extremely elastic wearable carbon nanotube fiber strain sensor for monitoring of human motion,” ACS Nano, vol. 9, no. 6, pp. 5929–5936, 2015.

[39] E. Sardini, M. Serpelloni, and V. Pasqui, “Wireless wearable t-shirt for posture monitoring during rehabilitation exercises,” IEEE Transactions on Instrumentation and Measurement, vol. 64, no. 2, pp. 439–448, 2014.

[40] S. Seneviratne, Y. Hu, T. Nguyen, G. Lan, S. Khalifa, K. Thilakarathna, M. Hassan, and A. Seneviratne, “A survey of wearable devices and challenges,” IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2573–2620, 2017.

[41] S. Sridhar, P. Misra, G. S. Gill, and J. Warrior, “CheepSync: a time synchronization service for resource constrained Bluetooth LE advertisers,” IEEE Communications Magazine, vol. 54, no. 1, pp. 136–143, 2016.

[42] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[43] J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu, “Deep learning for sensor-based activity recognition: A survey,” Pattern Recognition Letters, vol. 119, pp. 3–11, 2019.

[44] S. Wu, R. B. Ladani, J. Zhang, E. Bafekrpour, K. Ghorbani, A. P. Mouritz, A. J. Kinloch, and C. H. Wang, “Aligning multilayer graphene flakes with an external electric field to improve multifunctional properties of epoxy nanocomposites,” Carbon, vol. 94, pp. 607–618, 2015.

[45] S. Wu, R. B. Ladani, J. Zhang, K. Ghorbani, X. Zhang, A. P. Mouritz, A. J. Kinloch, and C. H. Wang, “Strain sensors with adjustable sensitivity by tailoring the microstructure of graphene aerogel/PDMS nanocomposites,” ACS Applied Materials & Interfaces, vol. 8, no. 37, pp. 24853–24861, 2016.

[46] S. Wu, R. B. Ladani, J. Zhang, A. J. Kinloch, Z. Zhao, J. Ma, X. Zhang, A. P. Mouritz, K. Ghorbani, and C. H. Wang, “Epoxy nanocomposites containing magnetite-carbon nanofibers aligned using a weak magnetic field,” Polymer, vol. 68, pp. 25–34, 2015.

[47] S. Wu, S. Peng, Z. J. Han, H. Zhu, and C. H. Wang, “Ultrasensitive and stretchable strain sensors based on mazelike vertical graphene network,” ACS Applied Materials & Interfaces, vol. 10, no. 42, pp. 36312–36322, 2018.

[48] S. Wu, S. Peng, and C. H. Wang, “Stretchable strain sensors based on PDMS composites with cellulose sponges containing one- and two-dimensional nanocarbons,” Sensors and Actuators A: Physical, vol. 279, pp. 90–100, 2018.

[49] C. Yan, J. Wang, W. Kang, M. Cui, X. Wang, C. Y. Foo, K. J. Chee, and P. S. Lee, “Graphene: Highly stretchable piezoresistive graphene–nanocellulose nanopaper for strain sensors (Adv. Mater. 13/2014),” Advanced Materials, vol. 26, no. 13, pp. 1950–1950, 2014.

[50] J. Yang, M. N. Nguyen, P. P. San, X. L. Li, and S. Krishnaswamy, “Deep convolutional neural networks on multichannel time series for human activity recognition,” in Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.

[51] T. Yang, X. Li, X. Jiang, S. Lin, J. Lao, J. Shi, Z. Zhen, Z. Li, and H. Zhu, “Structural engineering of gold thin films with channel cracks for ultrasensitive strain sensing,” Materials Horizons, vol. 3, no. 3, pp. 248–255, 2016.

[52] T. Yang, W. Wang, H. Zhang, X. Li, J. Shi, Y. He, Q.-s. Zheng, Z. Li, and H. Zhu, “Tactile sensing system based on arrays of graphene woven microfabrics: electromechanical behavior and electronic skin application,” ACS Nano, vol. 9, no. 11, pp. 10867–10875, 2015.

[53] M. Zeng, L. T. Nguyen, B. Yu, O. J. Mengshoel, J. Zhu, P. Wu, and J. Zhang, “Convolutional neural networks for human activity recognition using mobile sensors,” in 6th International Conference on Mobile Computing, Applications and Services. IEEE, 2014, pp. 197–205.

[54] Z. Zeng, S. I. S. Shahabadi, B. Che, Y. Zhang, C. Zhao, and X. Lu, “Highly stretchable, sensitive strain sensors with a wide linear sensing region based on compressed anisotropic graphene foam/polymer nanocomposites,” Nanoscale, vol. 9, no. 44, pp. 17396–17404, 2017.

[55] J. Zhao, C. He, R. Yang, Z. Shi, M. Cheng, W. Yang, G. Xie, D. Wang, D. Shi, and G. Zhang, “Ultra-sensitive strain sensors based on piezoresistive nanographene films,” Applied Physics Letters, vol. 101, no. 6, p. 063112, 2012.

[56] J. Zhou, X. Xu, Y. Xin, and G. Lubineau, “Coaxial thermoplastic elastomer-wrapped carbon nanotube fibers for deformable and wearable strain sensors,” Advanced Functional Materials, vol. 28, no. 16, p. 1705591, 2018.