
Augmented and Virtual Reality techniques for footwear


Computers in Industry 64 (2013) 1371–1382


Antonio Jimeno-Morenilla a,*, José Luis Sánchez-Romero a, Faustino Salas-Pérez b

a Computer Technology and Computation Department, University of Alicante, Spain
b CAD/CAM Department, Spanish Footwear Research Institute (INESCOP), Spain

A R T I C L E I N F O

Article history:

Received 17 July 2012

Received in revised form 19 February 2013

Accepted 5 June 2013

Available online 10 July 2013

Keywords:

Augmented reality in footwear sector

Stereoscopic vision for design and display of footwear

3D gloves for footwear design

A B S T R A C T

3D imaging techniques were adopted early in the footwear industry. In particular, 3D imaging can be used to aid commerce and improve the quality and sales of shoes. Footwear customisation is an added value aimed not only at improving product quality, but also consumer comfort. Moreover, customisation implies a new business model that avoids the competition of mass production coming from new manufacturers settled mainly in Asian countries.

However, footwear customisation implies a significant effort at different levels. In manufacturing, rapid and virtual prototyping is required; indeed, the prototype is intended to become the final product. The whole design procedure must be validated using exclusively virtual techniques to ensure the feasibility of this process, since physical prototypes should be avoided.

With regard to commerce, it would be desirable for the consumer to choose any model of shoes from a large 3D database and be able to try them on looking at a magic mirror. This would probably reduce costs and increase sales, since shops would not need to store every shoe model, and the process of trying on several models would be easier and faster for the consumer.

In this paper, new advances in 3D techniques coming from experience in cinema, TV and games are successfully applied to footwear. Firstly, the characteristics of a high-quality stereoscopic vision system for footwear are presented. Secondly, a system for the interaction with virtual footwear models based on 3D gloves is detailed. Finally, an augmented reality system (magic mirror) is presented, implemented with low-cost computational elements, which allows a hypothetical customer to check in real time the suitability of a given virtual footwear model from an aesthetic point of view.

© 2013 Elsevier B.V. All rights reserved.


1. Introduction

1.1. New technologies for achieving competitive advantages

The search for competitive advantages has been a constant feature of the footwear industry, and it has become more intensive in recent years in view of the emergence of new competitors from different parts of the world. Reducing manufacturing costs is no longer the key to this search; instead, the objective currently pursued is to increase the quality of the final product and to offer customisation or small-scale ad-hoc manufacturing. Customisation can be achieved either by obtaining a shoe adapted to the anatomical features of a user's foot [1–4,30], or by adapting it aesthetically to the customer's liking [5,31].

New technologies have become the essential tools to achieve these objectives. Realistic 2D vision is a well-consolidated standard in the industry, and for some years now research has been focusing on the use of stereoscopic vision at different production stages, including displaying the finished product to the consumer.

* Corresponding author. Tel.: +34 59034002453.
E-mail address: [email protected] (A. Jimeno-Morenilla).
0166-3615/$ – see front matter © 2013 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.compind.2013.06.008

1.2. Virtual and augmented reality in designing and manufacturing processes

The term Virtual Reality is defined as a set of technologies that allow a user to interact with a computer-simulated environment, whether the environment is a simulation of the real world or an imaginary world [6]. Normally, it is possible to experiment with this virtual environment using a conventional monitor or other devices compatible with stereoscopic vision. Furthermore, [7] described augmented reality as a phase between absolute reality and absolute virtuality, which has the following properties: (a) it combines real and virtual or augmented objects; (b) it runs interactively and in real time; and (c) it aligns real and augmented objects with each other.

The three-dimensional representation of the product provides a higher degree of realism. This factor has been implemented at different stages of production processes in order to improve productivity and product quality. For instance, at the product creation stage, the concurrent design review by different engineers can be more efficient if Virtual Reality and augmented reality tools are used, as stated by [8]. That paper also set out an augmented reality environment shared by several designers, with tangible user interfaces (TUIs) integrating digital information with physical objects and environments. Users wearing head-mounted displays (HMDs) can capture the real physical world, and the system can augment it using virtual objects. Since virtual objects are coupled to physical objects, the latter must be tracked using a set of markers attached to them, which are tracked and interpreted by the ARToolKit library. Virtual objects can be rotated, translated, assembled or disassembled by means of voice commands. The visualisation of the virtual world does not necessarily require an HMD or a similar device. In many cases, users do not carry the vision system with them. Instead, a monitor-based configuration displaying stereo images is used, for which the user wears specific glasses for stereoscopic vision. This is the case of the ARGOS (Augmented Reality through Graphic Overlays on Stereo-video) system, developed at the University of Toronto to work in environments with poor visibility [9,10].

Khan [6] proposed developing a virtual manufacturing system as an application of augmented reality to improve several process stages, including quality control, human–machine interfaces, information flows, the practice of e-commerce, and the possibility of implementing different production philosophies. Oh et al. [11] presented the Direct Modelling approach, which allows the user to intuitively select geometric entities in real time regardless of the modifications made. This was initially developed for conceptual design and architectural planning [12]. Fiorentino et al. [13] showed the advantages derived from using a semi-immersive environment, combining stereoscopic vision with 3D inputs, addressed to conceptual and aesthetic design.

Lee et al. [14] proposed embedding augmented reality into the rapid prototype of a digital product, with special attention given to those aspects that are of interest to the consumer: tangibility, functionality and aesthetics. A rapid prototype of the product is created with which it is possible to interact using a simple and cheap tangible pointer instead of more expensive hardware interfaces, all visualised in a three-dimensional collaborative environment. Augmented reality approaches have also been proposed for automotive services [14], for the design of flexible manufacturing systems [15], and for architectural design and urban planning [16].

1.3. Interfaces for virtual worlds

With regard to the interface that allows interaction with the objects in the virtual world, the use of gloves stands out, both for the manipulation of objects and for issuing commands. Lee et al. [17] used hand gestures and vibrotactile feedback. Using a sensorised glove, the user can grab and point at virtual objects; one of the gloves is configured as the dominant one to manipulate objects, while the other is used to choose options from the different system menus. The gloves feature microprocessors, LEDs, a wireless system for communicating with the main computer, and some conductive patches mounted on the fingertips and the palm. The possible actions are indicated by different finger positions with respect to the palm, which are detected through the connections of the conductive patches.

Buchmann et al. [18] used an interaction system based on markers mounted on hands and fingers to track the user's gestures, as well as haptic feedback devices. The main contribution of this work is that the approach allows users to interact with virtual objects using natural hand gestures. This system, FingARTips, employs visual tracking to detect a set of simple gestures based on the artificial vision library ARToolKit. The gloves feature markers on the thumb, index finger and the base knuckles of both fingers, with which the location and orientation of the hand in the 3D environment is accurately tracked. Gesture recognition is based on finger position relative to the scene.

Yi et al. [19] detailed the characteristics of an augmented reality system for the design of architectural models. The generation of different objects is based on the detection of hand movements and gestures. The left hand is used to make semantic signs for each building unit within a hierarchy, while the right hand is used to sketch the geometry using a pen-marker. The movement of both hands is detected through a motion capture system made up of MotionAnalysis® and EVaRT R4.6 in order to subsequently analyse and detect the corresponding gesture.

Thomas and Piekarski [20] and Hoang et al. [21] analysed an interaction system in an outdoor augmented reality environment, called Tinmith-Hand. Users work in a collaborative environment for which they have an HMD and a pair of pinch gloves. The pinch gloves are used to select options from the system menus: each option has a finger assigned to select it by making a pinch gesture with the thumb and the corresponding finger. In addition, in order to capture the spatial coordinates of the user's hands, the thumbs are provided with small markers, in such a way that the virtual objects on the scene are selected according to the hand position. Hosoya et al. [22] proposed an interaction system in which the user can touch virtual objects rendered on a monitor using a marker mounted on his/her finger.

1.4. Virtual and augmented reality applied to footwear industry

It is difficult to find well-documented research papers on virtual or augmented reality applied to the footwear industry. There is, for instance, the work of Rupérez et al. [23]; however, their contribution mainly focused on analysing the user's foot morphology and modelling a suitable shoe, taking the shoe upper and the pressure withstood in different areas of the foot as the basic criteria. Greci et al. [24] showed the characteristics of a haptic device simulating the internal volume of a shoe, called FootGlove. It is a sort of insole that adapts to the user's foot through a mechatronic mechanism. This allows four essential dimensions of the last to be obtained, from which the customised shoe is designed by the customer. However, this device is not directly integrated with an augmented reality system.

Mottura et al. [25] and Redaelli et al. [26] detailed the characteristics of an augmented reality system applied to footwear. It is a magic mirror for in-shop footwear customisation. First of all, the user's foot is scanned, taking five basic parameters. Starting from an initial configuration, the user can co-design his/her shoes, choosing aspects such as the type of leather, the colour, the sole, etc. The system features an LCD display acting as a mirror, a camera that captures shots of the user's feet in real time, and a tracking system to compute in real time the position and orientation of the feet. One of the drawbacks of this system is that it requires wearing special socks in order for the foot position and orientation to be detected, as well as using a special carpet on which the user must move.

Eisert et al. [27] proposed a virtual mirror system for the real-time visualisation of customised sports shoes. The client can change the design and colours of a shoe model at a special terminal and add individual embroideries and decorations. There is no need for markers on the shoes. The system mainly consists of a display and a camera mounted close to the display, looking down to capture the client's feet. The camera captures and transfers the images with a resolution of 1024 × 768 pixels. Each shoe model consists of different sub-objects composed of triangle meshes, which can be replaced to create different geometries. One of the drawbacks of this system is the lack of robustness under different lighting conditions, which requires the floor in front of the camera to be painted in a special colour to allow the use of the chroma-keying technique for segmentation. The maximum rate achieved by the visualisation system is 35 frames per second; however, there is no reference to the complexity of the rendered models (number of polygons), which makes it difficult to assess its efficiency.

The purpose of this paper is to present some of the advances made in Augmented and Virtual Reality techniques used in the footwear sector. All the advances presented seek high performance using low-cost equipment.

This paper is structured in the following sections: Section 2 describes a stereoscopic vision system specialising in the visualisation of complex footwear models; Section 3 presents a Virtual Reality system for footwear design using data gloves; Section 4 describes an Augmented Reality system (magic mirror) aimed at complex footwear models. Finally, Section 5 presents the main conclusions drawn, followed by the list of references used.

2. High-quality stereoscopic vision system for footwear

Stereoscopy is a technique for creating or enhancing the illusion of depth using two images of the same scene captured from different angles, just as human eyes see the world. For this reason, it is necessary to work with two images simultaneously, so the whole process that is normally carried out for a monoscopic image is doubled in the case of a stereoscopic image.

It is then necessary to calculate, for each of the two viewpoints (left and right), the offset in relation to the point where the camera is nominally located when capturing the scene. This offset also implies a change in the angle at which the line of sight meets the scene, so this calculation is not elementary, although it is well known in the field of computer vision [28].

2.1. Parameters for the calculation of projections

For the calculation of projections, there are certain parameters to be taken into consideration (see Fig. 1). These can be summarised as follows:

Fig. 1. Diagram showing the operation of stereoscopy and its parameters (far and near planes, screen viewport, intraocular distance IOD, centre and right eye viewpoints, and frustum shift).

• Intraocular distance: the distance between the two points from which the image is captured. These points are normally determined using as the centre point the one from which the image would be captured in monoscopic mode. It should be possible to modify this value according to the person viewing the image, since everyone has a different distance between the eyes. This distance is usually 65 mm, but it can range from 45 mm to 75 mm.
• Projection plane: the plane on which the elements making up the scene are located and will be captured.
• Distance to the projection centre: the perpendicular distance from the centre point located between the two capture points to the projection plane.
• Parallax: the distance between the two projections of a point, which depends on the above factors. Changing the parallax determines the depth at which objects appear in relation to the projection plane. There is no parallax for a point located on the projection plane itself; parallax is positive for a point located behind the projection plane, and negative for a point located in front of it.
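The parallax relationship above can be illustrated numerically. The following sketch (ours, not from the paper; the function name and example distances are assumptions) derives the on-screen parallax of a point by similar triangles from the intraocular distance and the distance to the projection plane:

```python
def parallax(iod: float, screen_dist: float, point_dist: float) -> float:
    """On-screen parallax (same units as iod) of a point at point_dist
    from the viewer, with the projection plane at screen_dist.
    Derived by similar triangles: positive behind the plane,
    zero on it, negative in front of it."""
    return iod * (point_dist - screen_dist) / point_dist

# Assumed setup: 65 mm intraocular distance, projection plane at 600 mm.
IOD = 65.0
D = 600.0
print(parallax(IOD, D, 600.0))  # point on the plane -> 0 (no parallax)
print(parallax(IOD, D, 900.0))  # behind the plane   -> positive
print(parallax(IOD, D, 400.0))  # in front           -> negative
```

Note how the three cases reproduce the zero/positive/negative parallax behaviour described above.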

2.2. Basic outline for drawing with quad buffering

In a computer-aided monoscopic vision system, a drawing system based on double buffering (back and front buffers) is used to enhance vision performance. Such a system follows the operation outline shown in the left part of Table 1. Although there are other techniques to produce stereoscopic images, one of the most efficient is undoubtedly the one that replaces double buffering with quad buffering (double buffering for each eye).

Quad buffering requires graphic hardware able to support this system in order to achieve optimum performance. Stereoscopic vision systems therefore use a drawing outline similar to that of monoscopic systems, but the process is repeated twice and the buffers are swapped once as a final step, as shown in the right part of Table 1.
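The stereoscopic outline of Table 1 can be sketched as a sequence of operations. This is a minimal mock (all names are hypothetical stand-ins, not the actual 3D+1 code); in a real OpenGL quad-buffer context the two render targets would be the GL_BACK_LEFT and GL_BACK_RIGHT draw buffers, and the final step would be the platform buffer swap:

```python
# Mock of the quad-buffered stereo drawing loop outlined in Table 1.
# Buffers and draw calls are stand-ins that only record their order.
log = []

def clear(buffer):
    log.append(f"clear {buffer}")

def draw_scene(eye, buffer):
    # In a real system this would compute the eye viewpoint and render.
    log.append(f"draw {eye} view -> {buffer}")

def render_stereo_frame():
    clear("back-left")
    clear("back-right")
    draw_scene("left", "back-left")    # left viewpoint -> left back buffer
    draw_scene("right", "back-right")  # right viewpoint -> right back buffer
    log.append("swap back and front buffers")

render_stereo_frame()
print(log)
```

Each stereoscopic frame thus performs the monoscopic pipeline twice before a single buffer swap, which is why one stereo frame costs two monoscopic renderings.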

2.3. Experiments with CAD software for footwear

A stereoscopic vision system has been developed based on the footwear design and manufacturing software 3D+1, which is widespread on the market. This software allows the realistic visualisation of models and their modification to produce all types of footwear. For this reason, the footwear models used feature a high definition, sometimes exceeding 0.5 million polygons. Such high definition provides images with great realism but in turn poses a serious performance handicap: there is the risk that the image cannot be viewed in real time if the required hardware is not available.

In view of the high accuracy, precision and performance requirements, we opted for quad buffering with hardware support, using the OpenGL® graphic library. The devices used are described below.

Table 1. Tasks involved in monoscopic and stereoscopic vision.

Monoscopic drawing process with double buffering | Stereoscopic drawing process with quad buffering
Clearing the back buffer. | Clearing the left back buffer. Clearing the right back buffer.
Drawing the scene to the back buffer. | Calculating the left viewpoint and drawing the scene to the left back buffer. Calculating the right viewpoint and drawing the scene to the right back buffer.
Swapping back and front buffers. | Swapping back and front buffers.

Fig. 2. Hardware used: nVidia® Quadro 600 graphic card and nVidia 3DVision® pack.

2.3.1. Hardware

2.3.1.1. PC equipped with an nVidia® Quadro 600 graphic card. We used a desktop computer with an Intel i5 processor, 4 GB of RAM and an nVidia® Quadro 600 graphic card (see Fig. 2) equipped with 96 graphic processors (cores) with quad-buffering support. Nevertheless, any card provided with quad-buffer technology may be valid for the stereoscopic vision system. This technology is provided by nVidia's Quadro® range; in this case, we opted for one of the cheapest models of that range.

With regard to the software, we used the 64-bit Windows 7 Professional operating system and the CodeGear C++ RAD Studio 2007 development platform.

2.3.1.2. 3D monitor. In order to view good-quality stereoscopic images, it is necessary to have a monitor able to display two images simultaneously. There are two technologies available on the market able to produce this effect: double frequency monitors and double display monitors. The former feature better image quality but in turn require glasses synchronised with the monitor that alternately cover the eyes (active glasses). The latter produce lower-definition images but only require polarised glasses, which are simpler and more comfortable for the user.

For this prototype, we used a double frequency LED 3D monitor developed by BENQ® with selectable refresh rates (60 Hz, 100 Hz, 110 Hz and 120 Hz), so it can operate either in monoscopic or in stereoscopic mode and adapt the refresh rate to ambient light conditions. Given that the lighting inside buildings often operates at 50 Hz, it is advisable to select 100 Hz (a multiple of 50) to avoid excessive eye strain, while in daylight conditions 120 Hz is the most adequate option. Of course, any 3D monitor can be used for stereoscopic vision. In addition, 3D projectors or TVs can be used, as long as they have connectors compatible with the output connectors of the graphic card used.

Fig. 3. Some of the footwear models used for the experiments.

2.3.1.3. nVidia® 3DVision pack. The nVidia 3DVision pack (see Fig. 2) comprises a pair of active glasses and an infrared (IR) emitter. By means of a driver, the graphic card synchronises the monitor refresh rate with the IR emissions, in such a way that the IR emitter sends signals to the glasses so that they can activate the shutters. The reach of the IR signal is limited, and the emitter must be visible to the glasses. There are other, more expensive alternatives operating in the same way, but they replace IR with radiofrequency for communication, which provides greater reach and allows several pairs of glasses to be synchronised simultaneously, even if they are not directly located in front of the emitter.

2.3.2. Performance experiments

A battery of experiments was carried out to check the performance of the stereoscopic vision system. For this, footwear models with different definitions were used. Fig. 3 shows some of the rendered models.

As shown in Fig. 3, the models featured high precision, reaching up to 600,000 polygons in the case of the ladies' model. The visualisation performance for the same models is shown on the right side of the graph (in frames per second). Using the quad buffering technique, it was possible to reach real-time values even for the heaviest model (24 frames/s).

It should be noted that in stereoscopic vision one frame equals two monoscopic renderings, so the performance achieved was significantly good. In the case of the ladies' model, even though it featured twice as many polygons, the performance was not reduced by half, thanks to the action of the multicore card, whose throughput increases with large amounts of information (Fig. 4).
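The arithmetic behind this comparison can be made explicit. A small check (figures taken from the text above) of the equivalent monoscopic rendering rate and the resulting polygon throughput for the heaviest model:

```python
# Stereo performance figures for the heaviest (ladies') model,
# taken from the text above.
polygons = 600_000   # polygons in the model
stereo_fps = 24      # stereoscopic frames per second achieved

# One stereoscopic frame = two monoscopic renderings (left + right eye).
monoscopic_renders_per_s = 2 * stereo_fps
polygon_throughput = monoscopic_renders_per_s * polygons

print(monoscopic_renders_per_s)              # -> 48
print(f"{polygon_throughput:,} polygons/s")  # -> 28,800,000 polygons/s
```

That is, 24 stereo frames/s correspond to 48 monoscopic renderings per second, i.e. roughly 28.8 million polygons processed per second for this model.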

The whole system proved to provide a high-quality image and agility in the interaction with models, thanks to the high fps rate achieved. Fig. 5 shows some shots of the whole system in operation. The pictures show the "double image" effect caused by the superimposition of the images for the left and right eyes. This effect is avoided with the use of active glasses; in fact, the right photo shows how the left eye was being covered, letting the right eye see the image, when the photo was taken.

Fig. 4. Complexity of the models and performance of the vision system.

3. Virtual manipulation system using 3D gloves

Virtual Reality is a technology based on the use of computers and other devices that aims to produce the appearance of reality, making users feel they are present in it. Some equipment is complemented with suits and gloves fitted with sensors specifically designed to simulate the perception of different stimuli, which intensify the feeling of reality. Despite being initially focused on videogames, its application has extended to many other fields, such as medicine and flight simulators.

The virtuality concept establishes a new way of relating to space and time coordinates, overcoming spatial–temporal barriers and configuring an environment in which information and communication are accessible from hitherto unknown perspectives, at least in terms of volume and possibilities. Virtual Reality allows the generation of interactive environments that remove the need to share the same space and time, thus facilitating new contexts for the exchange of information and communication.

Currently, the use of Virtual Reality control peripherals such as gloves is increasing exponentially, thanks to the videogame industry and to the technological evolution that nowadays allows new control techniques to be developed.

Fig. 5. Example of a three-dimensional model displayed in stereoscopic mode; left: detailed view of the quad buffer stereo image; right: whole system using active glasses.

This section presents a development that incorporates into CAD/CAM footwear design systems a Virtual Reality control functionality using a sensorised glove.

3.1. Resources used

3.1.1. Desktop computer

The same PC as in the previous section was used: an Intel i5 processor with 4 GB of RAM, an nVidia Quadro 600 graphic card, and a 64-bit Windows 7 operating system.

3.2. 5DT Virtual Reality glove

The Virtual Reality glove developed by the company 5DT was also used; it can be seen in Fig. 6.

The glove used has 5 flexion sensors (one per finger) as well as 2 rotation sensors (pitch and roll) that allow its inclination to be known. The glove offers individualised information for each of its sensors. Fig. 7 shows the sensor monitoring interface, which displays the standard flexion level of each finger as well as the inclination of the hand in both senses (pitch and roll). This information is processed by the software to produce the virtual effects shown below.

3.3. Control functionality using the glove

From the point of view of the CAD/CAM software, the glove was used for two different purposes. The first consisted in simulating a movement of the design object. Currently, industrial CAD systems carry out these movements using the mouse complemented with a key to produce different types of movements. The movements most used are the trackball, which performs three-dimensional rotations of the object around its centre, and the scroll, which moves the object in three-dimensional space.

Fig. 6. 7-sensor virtual reality glove developed by the company 5DT.

The second purpose consisted in recognising certain gestures made by the hand to perform common operations in the software: orthogonal positioning, material selection, etc.

Both functionalities are described in more detail below.

3.4. Virtual tracking

As previously stated, in this mode of operation the glove makes movements of the object that were previously done using different combinations of keys and the mouse. Given that the glove is a peripheral that continuously informs the system of the position of its sensors, it is necessary to specify intuitively to the system when it should activate the tracking and when it should stop. Here, the activation/deactivation of the tracking was carried out through the recognition of two different gestures; a third gesture on the thumb was added to differentiate the trackball movement from the scroll movement.
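The activation logic described above can be sketched as a simple classifier over the flexion values reported by the glove. This is an illustrative sketch only: the threshold, the assumed normalised value range [0, 1] and all names are our assumptions, not the 5DT SDK's actual API:

```python
# Sketch of the gesture-based tracking activation described above.
# Sensor values are assumed to be normalised flexion readings in
# [0, 1], ordered (thumb, index, middle, ring, little); the real
# 5DT driver interface differs -- all names here are hypothetical.

FLEXED = 0.7  # flexion threshold above which a finger counts as folded

def classify_gesture(flexion):
    thumb, *fingers = flexion
    if thumb > FLEXED and all(f > FLEXED for f in fingers):
        return "fist"        # activate tracking: trackball rotation
    if thumb < 1 - FLEXED and all(f > FLEXED for f in fingers):
        return "thumb-out"   # tracking active: switch to scroll mode
    return "none"            # no tracking gesture recognised

print(classify_gesture([0.9, 0.8, 0.85, 0.9, 0.8]))  # -> fist
print(classify_gesture([0.1, 0.8, 0.85, 0.9, 0.8]))  # -> thumb-out
```

A threshold-based scheme like this is the simplest option; a production system would typically add hysteresis or per-user calibration so that the tracking does not toggle on sensor noise.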

$$M_R = \begin{bmatrix} \cos r + v_x^2(1-\cos r) & v_x v_y(1-\cos r) - v_z\sin r & v_x v_z(1-\cos r) + v_y\sin r & 0 \\ v_x v_y(1-\cos r) + v_z\sin r & \cos r + v_y^2(1-\cos r) & v_y v_z(1-\cos r) - v_x\sin r & 0 \\ v_x v_z(1-\cos r) - v_y\sin r & v_y v_z(1-\cos r) + v_x\sin r & \cos r + v_z^2(1-\cos r) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{1}$$

$$M_P = \begin{bmatrix} \cos p + w_x^2(1-\cos p) & w_x w_y(1-\cos p) - w_z\sin p & w_x w_z(1-\cos p) + w_y\sin p & 0 \\ w_x w_y(1-\cos p) + w_z\sin p & \cos p + w_y^2(1-\cos p) & w_y w_z(1-\cos p) - w_x\sin p & 0 \\ w_x w_z(1-\cos p) - w_y\sin p & w_y w_z(1-\cos p) + w_x\sin p & \cos p + w_z^2(1-\cos p) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2}$$

$$\vec v = \hat y\,P^{-1} \qquad \vec w = \hat x\,P^{-1} \tag{3}$$

$$M_T = T_c\,M_R\,M_P\,T_c^{-1} \tag{4}$$

where $r$ and $p$ are the angular values of roll and pitch respectively, $P^{-1}$ is the inverse of the perspective transformation matrix, $c$ is the centre of the object to be transformed, and $T_c$ is the translation matrix to $c$.
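The matrices above can be assembled directly from the glove's roll and pitch readings. The following is a minimal sketch, assuming NumPy, a column-vector convention, and that the rotation axes $\vec v$ and $\vec w$ (the inverse-perspective projections of the screen axes, expression (3)) have already been computed; the function names are illustrative, not taken from the authors' software.

```python
import numpy as np

def axis_angle_matrix(v, angle):
    """4x4 homogeneous rotation of `angle` radians around unit vector v
    (Rodrigues' formula, as in expressions (1) and (2))."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    x, y, z = v
    c, s = np.cos(angle), np.sin(angle)
    t = 1.0 - c
    m = np.eye(4)
    m[:3, :3] = [
        [c + x * x * t, x * y * t - z * s, x * z * t + y * s],
        [x * y * t + z * s, c + y * y * t, y * z * t - x * s],
        [x * z * t - y * s, y * z * t + x * s, c + z * z * t],
    ]
    return m

def translation_matrix(t):
    """4x4 homogeneous translation by vector t."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

def tracking_transform(v, roll, w, pitch, centre):
    """M_T = T_c M_R M_P T_c^{-1} (expression (4)): rotate the model
    around its own centre by the glove's roll and pitch angles."""
    Tc = translation_matrix(centre)
    MR = axis_angle_matrix(v, roll)
    MP = axis_angle_matrix(w, pitch)
    return Tc @ MR @ MP @ np.linalg.inv(Tc)
```

Conjugating the two rotations by $T_c$ keeps the object's centre fixed, which is exactly the trackball behaviour described above.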

The gestures and their operation are the following:

1. Fist Gesture (all five fingers folded, see Fig. 8a). This gesture, which simulates the action of picking up an object, is used to activate sensor tracking and perform the trackball movement. Once the fist is closed, the displayed model can be rotated in the active window using the inclination values (pitch and roll) provided by the glove. These values determine two rotation angles of the object in 3D around its centre. For this, two rotation matrices are built: MR, obtained from the roll angle, rotates around the 3D axis onto which the vertical axis of the screen projects (see expressions (1) and (3)), and MP, obtained from the pitch angle, rotates around the 3D axis resulting from the projection of the horizontal axis of the screen (see expressions (2) and (3)). Finally, a global transformation matrix MT is built (see expression (4)) and used to modify the transformation stack of the visualisation system.

2. Thumb Gesture (thumb stretched out, the rest folded, see Fig. 8b). With the tracking function active, this gesture activates the scrolling mode. Once this gesture is made, the displayed model can be moved in the active window using the inclination values (pitch and roll) provided by the glove.

3. Palm Gesture (all five fingers stretched out, see Fig. 8c). This gesture deactivates the tracking mode of the glove. Once the hand is completely open, the user is no longer interacting with the model loaded in the application. It is therefore the base gesture to use when the inclination information from the glove should not affect the transformations applied to the displayed model in the active window.

Fig. 7. Right: interface to monitor the information provided by the glove; left: glove and sensor location.

Fig. 8. Implemented virtual tracking modes.

3.5. Virtual positioning

The gestures shown in Fig. 9 have been configured to produce different orthogonal views:

1. Index Gesture (index finger stretched out, the rest folded). This gesture (associated with the value 1) establishes the first of the orthogonal views in the window, showing the model on the XY plane (see Fig. 9a).

2. Index and Middle Gesture (index and middle fingers stretched out, the rest folded). This gesture (associated with the value 2) establishes the second of the orthogonal views in the window, showing the model on the YZ plane (see Fig. 9b).

3. Index, Middle and Ring Gesture (index, middle and ring fingers stretched out, the rest folded). This gesture (associated with the value 3) establishes the third of the orthogonal views in the window, showing the model on the XZ plane (see Fig. 9c).
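The mapping above amounts to a small dispatch table keyed on the number of stretched-out fingers. A sketch, where the names and return values are illustrative rather than taken from the authors' software:

```python
# Illustrative mapping from the number of stretched-out fingers
# (counting from the index) to the orthogonal view it selects.
VIEW_GESTURES = {
    1: "XY",  # index only: azimuthal view (Fig. 9a)
    2: "YZ",  # index + middle: front view (Fig. 9b)
    3: "XZ",  # index + middle + ring: lateral view (Fig. 9c)
}

def select_view(stretched_fingers):
    """Return the plane of the orthogonal view for a recognised gesture,
    or None when the gesture does not select a view."""
    return VIEW_GESTURES.get(stretched_fingers)
```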

3.6. Calibration

Glove calibration is important because each person opens and closes their fingers differently when making gestures; therefore, the glove must be recalibrated each time a new person uses it. To calibrate, the flexion thresholds of the digital sensors must be determined, and the gestures recognised by the glove must be redefined.

The calibration is completed visually through an application (see Fig. 10a) in which bars show the range of each sensor value as the fingers move from an open position to a closed one. Once the flexion range has been defined for each finger, the opening and closing thresholds must be selected to allow the recognition of the different gestures by the glove (see Fig. 10b). The application includes an option to save and load calibrations, so that different calibrations are available according to the user. Once the finger flexion thresholds have been defined, the gestures are configured by indicating the flexion position that each finger should reach.
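The per-user calibration step can be sketched as follows: raw sensor readings are normalised by the recorded flexion range and then classified against the chosen thresholds. The numeric thresholds and function names here are illustrative assumptions, not values from the paper.

```python
def normalise(raw, raw_min, raw_max):
    """Map a raw flexion sensor reading onto [0, 1] using the per-user
    range recorded during calibration (Fig. 10a)."""
    span = max(raw_max - raw_min, 1e-9)
    return min(max((raw - raw_min) / span, 0.0), 1.0)

def finger_state(raw, raw_min, raw_max, open_thr=0.3, closed_thr=0.7):
    """Classify a finger as 'open', 'closed' or 'between' using the
    thresholds chosen in the calibration interface (Fig. 10b)."""
    x = normalise(raw, raw_min, raw_max)
    if x <= open_thr:
        return "open"
    if x >= closed_thr:
        return "closed"
    return "between"
```

A gesture such as the fist would then be recognised when all five fingers report 'closed'.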

The whole system in operation is shown in Fig. 11. This system can be complemented with the stereo vision technique described in the previous section to increase the user's feeling of immersion.

4. High quality augmented reality system for footwear (magic mirror)

An Augmented Reality Magic Mirror consists of capturing and displaying the user's image with virtual elements added, so that users see themselves in their own environment together with elements that do not physically exist. This utility is currently beginning to be used in sectors such as accessories and cosmetics. As discussed in the introduction, there are systems developed for footwear, but they basically have two drawbacks: lack of flexibility and slowness when handling high resolution images.

This section details the construction of an experimental prototype of a Magic Mirror specific to footwear, whose main novelty lies in dissociating the detection of the user's foot from the visual capture of the user's referential environment.

4.1. Structure of the system

The prototype of the magic mirror for footwear (patent pending) is shown in Fig. 12. The main difference with other similar systems lies in the use of an IR camera to detect the object concerned, in this case the user's foot. The advantage this camera brings is that the segmentation of the image is much simpler and faster; therefore, the software can detect the location of the foot in real time, regardless of the resolution of the camera that is filming the real scene.

Fig. 9. Positioning gestures: (a) azimuthal positioning, activated using the index finger; (b) front positioning, activated using index and middle fingers; and (c) lateral positioning, activated using index, middle and ring fingers.

To locate the position of the user's foot, IR markers (LED type) of 0.5 W were used. Contrary to other augmented reality systems, IR markers are much more discreet: because of their size they are almost invisible to the user, and they can easily be fitted to the foot. Also, since they emit their own light, they are less sensitive to lighting or shadow changes in uncontrolled environments such as a footwear store.

With regard to the camera that captures the scene, the only limitations are those of the associated hardware, since its quality does not affect the system's performance. In most augmented reality systems only one camera is used; therefore, if the quality of the image is very high, the time taken by the system to segment the image and locate the foot increases considerably. Furthermore, the quality of the virtual scene (virtualised model) cannot be too high because it would also compromise the system's response time.

Fig. 10. Calibration of the glove: (a) sensor range; (b) sensor threshold for the fist gesture recognition.

Fig. 11. Virtual glove system for footwear design.

Fig. 12. Structure of the magic mirror prototype: (a) high resolution camera, (b) IR camera, (c) computer with footwear design software, (d) high resolution monitor, and (e) IR LED diodes.

The components of the developed prototype are detailed below:

Desktop computer

The same desktop computer described in the previous sections was used: an Intel i5 processor, 4 GB of RAM and an nVidia Quadro 600 graphics card running the 64-bit Windows 7 operating system.

Scene capturing camera

This camera was used exclusively to film the real scene; therefore, it did not condition the system's performance. An LG Webpro2 LIC-3001 webcam with a resolution of 1024 × 768 pixels was used in the experiments.

IR camera

To locate the position of the user's foot, the IR camera incorporated in the Wiimote remote controller of the Nintendo Wii game console was used (see Fig. 12). It is a low-cost device (approx. €40) with software libraries that facilitate the automatic segmentation of the image, giving information on the position of IR light sources. Contrary to what happens when playing with the game console, here the Wiimote is fixed and the IR emitter moves, so that the remote acts as a camera detecting the position and orientation of the emitter (Fig. 13).

Emitting device

Fig. 13. Wiimote device and a view of the IR signal emitting prototype.

For this first version of the prototype, the emitting device comprised 4 LED diodes of 0.5 W arranged on a printed circuit board (see Fig. 14). Obviously, the device had to be subsequently adapted so that the emitters could be placed on the user's foot without the risk of moving or becoming detached when the user moves his/her feet. Both the size of these LEDs (2 mm in diameter) and their low consumption ensure that the final device can incorporate button-type batteries, making it autonomous, discreet and easy to place on the user's foot.

4.2. System operation

As briefly mentioned in the introduction to this section, Augmented Reality is a mixed reality that groups and links virtual elements with actual elements of the environment. To do so, the virtual elements are placed and rendered on a series of base images captured from the real world, which requires a device able to capture those images.

When it comes to establishing a coherent real-virtual position relationship, two aspects must be taken into account. Regarding the IR camera, a stable capture configuration must be established so that the position of each LED can be determined at any time; in other words, it must be anticipated that the LED points can be temporarily hidden in certain positions of the foot, and therefore their real position must be estimated in case of concealment. Furthermore, the two-dimensional point information provided by the IR segmentation must be extrapolated into three-dimensional information indicating which transformations have to be applied to the virtual model so that it appears to be positioned on the image of the real foot. For this, different calculations are performed that require invariant characteristics both in the IR emitter and in the camera that captures it. This is addressed in the following sections.

Fig. 14. System in operation; to the right of the monitor the IR segmentation can be seen. The sensitivity of the photo camera allows the IR light of the device to be captured.

4.2.1. Estimating points in the event of concealment

The IR LED diodes are the landmarks that the system uses to locate the user's foot in the environment. Therefore, the identification of these 2D points in the IR camera is essential. Due to the user's movements or environmental interference, some of the diodes may be hidden from the infrared camera. In such a case, it is necessary to estimate the position of the non-visible points so that the system has the most reliable information possible about the location of the user's foot at all times.

In each camera capture, a correspondence of the closest points with respect to the previous capture was established. If the number of points captured was smaller than the number of LEDs, an estimation was performed by minimising an error function with the conjugate gradient method (see expressions (5)–(7)). The error function had the parameters given in Table 2. The idea was to deduce the movement undergone by the visible LEDs in order to estimate the movement made by the non-visible ones.

Once the values that minimise the error function had been found with the conjugate gradient method, the transformation of expressions (5) and (6) was applied to each of the non-visible points:

$$p'_{ix} = (p_{ix}\cos\alpha)\,s_x + t_x \tag{5}$$

$$p'_{iy} = (p_{iy}\sin\alpha)\,s_y + t_y \tag{6}$$

$$E(t_x, t_y, \alpha, s_x, s_y) = \sum_{i \in v,\; j \in v^{-1}} \| p'_i - p_j \|^2 \tag{7}$$

where v represents the set of points visible in the current capture and v⁻¹ the set of corresponding points in the previous capture.

Table 2. IR movement estimation parameters.

Parameter | Description
t_x, t_y | Translation in the horizontal and vertical axes respectively
α | Rotation angle in the capture plane with respect to the centre of the window
s_x, s_y | Scale factors for the horizontal and vertical axes respectively

The tests carried out show that the method copes well with possible LED concealment.
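This estimation step can be sketched with SciPy's conjugate gradient optimiser. Here the per-point transform is implemented as a full 2D rotation plus anisotropic scale and translation, which is one reading of expressions (5) and (6); the parameter order follows Table 2, and all function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def transform(params, pts):
    """Apply (tx, ty, alpha, sx, sy) to Nx2 points: rotate by alpha,
    scale each axis, then translate."""
    tx, ty, a, sx, sy = params
    c, s = np.cos(a), np.sin(a)
    x = (pts[:, 0] * c - pts[:, 1] * s) * sx + tx
    y = (pts[:, 0] * s + pts[:, 1] * c) * sy + ty
    return np.column_stack([x, y])

def estimate_motion(prev_visible, curr_visible):
    """Fit the motion of the visible LEDs by minimising the squared
    distances of expression (7) with the conjugate gradient method."""
    def error(params):
        d = transform(params, prev_visible) - curr_visible
        return np.sum(d * d)
    x0 = np.array([0.0, 0.0, 0.0, 1.0, 1.0])  # identity transform
    return minimize(error, x0, method="CG").x

def estimate_hidden(params, prev_hidden):
    """Move the LEDs that became invisible by the motion fitted above."""
    return transform(params, prev_hidden)
```

In the tracking loop, `estimate_motion` would be fed the matched point pairs from consecutive IR captures, and `estimate_hidden` would fill in the concealed LEDs.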

4.2.2. Calculating the 3D position and orientation

Another important issue is the correspondence between the 2D points captured by the IR camera and the 3D points visualised in the virtual environment. The problem lies in finding a geometric and perspective transformation that places the real elements detected in the virtual image so that, by superimposing both, the illusion is created that the virtual element is embedded in the real scene. In general, this problem is resolved analytically using a system of equations based on homography concepts [29].

In this research, and for performance reasons, an alternative to the analytical method was chosen: minimising an error function with the conjugate gradient method. This is a fast method whose solution is approximate, which was adequate for the problem, since an exact solution was not necessary for real-time tracking. Moreover, having the previous solution (from the previous frame) considerably accelerated the convergence of the method.

To establish the correspondence, two sets of points were available: on the one hand, the 2D points captured in the IR image (hereinafter p_IRi) and, on the other, the 3D points placed on the virtual model (p_3Di). Each camera capture established a correspondence between the IR points detected and their 3D equivalents associated with the virtual model. For this, the error function of expression (9) was minimised. This error function had the seven parameters given in Table 3.

The objective was to find the geometric transformation that placed the 3D points of the virtual world on the 2D points captured (see expression (8)). The process involved three transformations: a three-dimensional rotation of α radians around a free vector v, called R_vα; a generic three-dimensional translation T_xyz; and a perspective transformation P. The R_vα and T_xyz transformations ensured that the set of points equivalent to the IR points could be placed anywhere in the virtual space with any orientation, simulating the movement of the IR device in the real scene. The matrix P was given by the current values of the rendering system.

The correspondence was achieved by simulating a positioning of the 3D points equivalent to the 2D points obtained with the IR camera. Minimising the function of expression (9) yielded the values of the geometric transformation that superimposed the virtual points on the real ones. The solution obtained was also used as the starting point for the solution of the following frame.

$$p'_{3Di} = p_{3Di}\,P\,R_{\vec v \alpha}\,T_{xyz} \tag{8}$$

$$E(t_x, t_y, t_z, \alpha, v_x, v_y, v_z) = \sum_i \| p_{IRi} - p'_{3Di} \|^2 \tag{9}$$

where P represents the perspective transformation matrix of the rendering system, R_vα a rotation matrix of α radians around the vector v, and T_xyz a translation matrix along the Cartesian axes with values t_x, t_y and t_z.

Table 3. Parameters for the estimation of the foot's position and orientation.

Parameter | Description
t_x, t_y, t_z | Translation along the Cartesian axes
α | Rotation angle around the vector v
v_x, v_y, v_z | Components of the rotation vector

Fig. 15. Complexity of the models and system performance.

Fig. 16. System performance test.
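The seven-parameter minimisation can be sketched in the same way. A simple pinhole projection stands in here for the rendering system's matrix P, and the starting point plays the role of the previous frame's solution; the focal length and all names are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

FOCAL = 800.0  # assumed pinhole focal length, standing in for matrix P

def project(params, pts3d):
    """Rotate Nx3 points by alpha radians around v (Rodrigues' formula),
    translate by (tx, ty, tz), then apply the pinhole projection."""
    tx, ty, tz, a, vx, vy, vz = params
    v = np.array([vx, vy, vz])
    norm = np.linalg.norm(v)
    v = v / norm if norm > 1e-9 else np.array([0.0, 0.0, 1.0])
    c, s = np.cos(a), np.sin(a)
    rotated = (pts3d * c
               + np.cross(v, pts3d) * s
               + np.outer(pts3d @ v, v) * (1.0 - c))
    moved = rotated + np.array([tx, ty, tz])
    z = np.maximum(moved[:, 2], 1e-6)  # guard against division by zero
    return FOCAL * moved[:, :2] / z[:, None]

def estimate_pose(p_ir, p_3d, x0):
    """Minimise expression (9), the squared distance between the captured
    IR points and the projected model points, with conjugate gradient."""
    def error(params):
        d = project(params, p_3d) - p_ir
        return np.sum(d * d)
    return minimize(error, x0, method="CG").x
```

Warm-starting `x0` with the previous frame's parameters, as the text describes, is what keeps this per-frame fit cheap enough for real time.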

4.3. System performance

In order to carry out the experiments, the same parameters and models as in Section 2.1 were used. The specific IR capture hardware and the camera are described in Section 4.1.

The whole system showed great agility and precision in the interaction with the models thanks to the high frame rate reached. Fig. 16 shows an image of the whole system in operation. As observed in Fig. 15, the models have great precision, with the ladies' model exceeding 600,000 polygons. The right side of the graph shows the visualisation performance (in frames per second) for the same models. Thanks to the estimation methods based on the conjugate gradient, real-time values were easily obtained even for the heaviest model (30.1 frames/s).

5. Conclusions

The footwear sector is a traditional sector that is quite reluctant to adopt new technologies. It is dominated by small and medium-sized enterprises (SMEs) that cannot afford expensive, high computational capacity equipment; in general, these companies use standard, low-cost hardware. We therefore find CAD/CAM software operating on low-cost equipment that nevertheless requires high computational effort: for example, 3D models are made up of NURBS surfaces that can have hundreds of thousands of control points. In conclusion, any technique or method intended for introduction into this software must be efficient. This research work has presented a series of AR and VR techniques adapted to real footwear design software with high quality and precision requirements.

First of all, a stereoscopic rendering system was presented. This system gives the user the illusion of physically seeing the virtual shoe model. For a CAD designer, this tool provides valuable information for interacting with the model precisely. For the consumer, stereoscopic vision improves the product display by adding realism. Thanks to the quad buffering technique employed, it was possible to obtain real-time (24 frames/s) values for the heaviest shoe model (600,000 polygons), while with other models a performance of up to 38 frames/s was reached.

Secondly, a 3D interface using gloves was presented. This interface allows the designer to interact with the 3D environment more intuitively and effectively. Besides, the use of 3D gloves allows hand gesture recognition in order to perform the most common tasks.

Finally, a magic mirror system for shoes was shown. A camera captures the customer's image while trying on a virtual shoe, so that customers can see themselves wearing it. Other similar systems are available in the sector, but the novelty of this one lies in dissociating recognition from rendering by using two cameras instead of just one: an IR camera performs the detection, and a high resolution camera captures the scene. The result is a real-time augmented reality tool that makes it possible to work with a higher level of detail for both shoe models and scenes.

All of the new advances presented in this paper are included in commercial footwear software distributed worldwide. The footwear models used for the experiments were provided by the Spanish Footwear Technological Institute (INESCOP). This work has been supported by the Valencian Government in the framework of the project IMDEEA/80.

References

[1] P. Olivato, M. Morricone, E. Fubini, A. Re, Foot digitalization for last design and individual awareness of personal foot characteristics, Digital Human Modeling, Lecture Notes in Computer Science 4561 (2007) 949–958.

[2] S. Xiong, J. Zhao, Z. Jiang, M. Dong, A computer-aided design system for foot-feature-based shoe last customization, The International Journal of Advanced Manufacturing Technology 46 (1–4) (2010) 11–19.

[3] J. Wang, H. Zhang, G. Lu, Z. Liu, Rapid parametric design methods for shoe-last customization, The International Journal of Advanced Manufacturing Technology 54 (1–4) (2011) 173–186.

[4] T. Nishiwaki, Footwear fitting design, in: Emotional Engineering, Springer-Verlag, London, 2011, pp. 345–363.

[5] P. Fatur, S. Dolinsek, Mass customization as a competitive strategy for labour intensive industries, Advances in Production Engineering and Management 4 (1) (2009) 77–84.

[6] W.A. Khan, A. Abdul Raouf, K. Cheng, Virtual Manufacturing, Springer-Verlag, London, 2011.

[7] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, B. MacIntyre, Recent advances in augmented reality, IEEE Computer Graphics and Applications 21 (6) (2001).

[8] R. Sidharta, J. Oliver, A. Sannier, Augmented reality tangible interface for distributed design review, in: Proceedings of the International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), 2006.

[9] P. Milgram, D. Drascic, J.J. Grodski, A. Restogi, S. Zhai, C. Zhou, Merging real and virtual worlds, in: Proceedings of IMAGINA'95, Monte Carlo, Monaco, 1995, pp. 218–230.

[10] D. Drascic, J.J. Grodski, P. Milgram, K. Ruffo, P. Wong, S. Zhai, ARGOS: a display system for augmenting reality, in: Proceedings of INTERCHI '93: Human Factors in Computing Systems, Amsterdam, The Netherlands, 1993, pp. 24–29.

[11] J. Oh, W. Stuerzlinger, J. Danahy, SESAME: towards better 3D conceptual design systems, in: Proceedings of the 6th Conference on Designing Interactive Systems, 2006.

[12] M. Fiorentino, A.E. Uva, M. Dellisanti Fabiano, G. Monno, Improving bi-manual 3D input in CAD modelling by part rotation optimization, Computer-Aided Design 42 (5) (2010) 462–470.

[13] M. Fiorentino, R. de Amicis, G. Monno, A. Stork, Spacedesign: a mixed reality workspace for aesthetic industrial design, in: Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR 2002), 2002.

[14] J.Y. Lee, G.W. Rhee, H. Park, AR/RP-based tangible interactions for collaborative design evaluation of digital products, International Journal of Advanced Manufacturing 45 (2009) 649–655.

[15] J. Gausemeier, J. Fruend, C. Matysczok, AR planning tool: designing flexible manufacturing systems with augmented reality, in: Proceedings of the 8th Eurographics Workshop on Virtual Environments, 2002, pp. 19–25.

[16] W. Broll, I. Lindt, J. Ohlenburg, M. Wittkamper, C. Yuan, T. Novotny, A.F. Schieck, C. Mottram, A. Strothmann, ARTHUR: a collaborative augmented environment for architectural design and urban planning, in: Proceedings of the International Conference on Humans and Computers 2004, 2004, pp. 102–109.

[17] J.Y. Lee, G.W. Rhee, D.W. Seo, Hand gesture-based tangible interactions for manipulating virtual objects in a mixed reality environment, The International Journal of Advanced Manufacturing Technology 51 (9–12) (2010) 1069–1082.

[18] V. Buchmann, S. Violich, M. Billinghurst, A. Cockburn, FingARtips – gesture based direct manipulation in augmented reality, in: Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, 2004, pp. 212–221.

[19] X. Yi, S. Qin, J. Kang, Generating 3D architectural models based on hand motion and gesture, Computers in Industry 60 (2009) 677–685.

[20] B.H. Thomas, W. Piekarski, Glove based user interaction techniques for augmented reality in an outdoor environment, Virtual Reality 6 (2002) 167–180.

[21] T.H. Hoang, S.R. Porter, B.H. Thomas, Augmenting image plane AR 3D interactions for wearable computers, in: Proceedings of the 10th Australasian User Interface Conference (AUIC2009), Wellington, New Zealand, 2009.

[22] E. Hosoya, H. Sato, M. Kitabata, H. Harada, I. Nojima, H. Morisawa, F. Mutoh, S.A. Onozawa, A mirror metaphor interaction system: touching remote real objects in an augmented reality environment, in: Proceedings of the 2nd IEEE and ACM International Symposium on Mixed and Augmented Reality, Washington, USA, 2003.

[23] M.J. Ruperez, C. Monserrat, S. Alemany, M.C. Juan, M. Alcaniz, Contact model, fit process and foot animation for the virtual simulator of the footwear comfort, Computer-Aided Design 42 (2010) 425–431.

[24] L. Greci, M. Sacco, N. Cau, F. Buonanno, FootGlove: a haptic device supporting the customer in the choice of the best fitting shoes, in: EuroHaptics 2012, Part I, LNCS 7282 (2012) 148–159.

[25] S. Mottura, L. Greci, M. Sacco, C.R. Boer, An augmented reality system for the customized shoe shop, in: Proceedings of the 2nd Interdisciplinary World Congress on Mass Customization and Personalization, Munich, Germany, 2003.

[26] C. Redaelli, R. Pellegrini, S. Mottura, M. Sacco, Shoe customers' behaviour with new technologies: the Magic Mirror case, in: 15th International Conference on Concurrent Enterprising (ICE 2009), Leiden, The Netherlands, June 22–24, 2009.

[27] P. Eisert, P. Fechteler, J. Rurainsky, 3-D tracking of shoes for virtual mirror applications, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, June 2008.

[28] D. Southard, Transformations for stereoscopic visual simulation, Computers & Graphics 16 (4) (1992) 401–410.

[29] G. Simon, A.W. Fitzgibbon, A. Zisserman, Markerless tracking using planar structures in the scene, in: Proceedings of the IEEE and ACM International Symposium on Augmented Reality (ISAR 2000), 2000, pp. 120–128.

[30] Y. Zhang, A. Luximon, X. Ma, X. Guo, M. Zhang, Mass customization methodology for footwear design, in: Digital Human Modeling, Springer, Berlin Heidelberg, 2011, pp. 367–375.

[31] Y.P. Luh, J.B. Wang, J.W. Chang, S.Y. Chang, C.H. Chu, Augmented reality-based design customization of footwear for children, Journal of Intelligent Manufacturing (2012), http://dx.doi.org/10.1007/s10845-012-0642-9.

Antonio Jimeno-Morenilla is an associate professor in the Computer Technology Department at the University of Alicante (Spain). He received his PhD from the University of Alicante in 2003. His research interests include sculptured surface manufacturing, computational geometry for design and manufacturing, rapid and virtual prototyping, surface flattening, and high performance computer architectures. Dr. Jimeno has considerable experience in the development of CAD systems. In particular, he has been involved in many government- and industry-funded projects, most of them in collaboration with the Spanish Footwear Research Institute (INESCOP).

Jose Luis Sanchez-Romero received his BS degree in Computer Science from the Polytechnic University of Valencia, Spain, in 1995, and his PhD degree in Computer Science from the University of Alicante in 2009. He is currently a senior lecturer in the Computer Technology Department at the University of Alicante, Spain, where he carries out his research within the CAD Research Group. He has published several journal and international conference papers. His research interests include high performance computer architecture, computer arithmetic, and CAD/CAM systems.

Faustino Salas-Perez has headed the CAD/CAM Department of the Spanish Footwear Research Institute (INESCOP) since 1982. His research includes sculptured surface manufacturing, computational geometry for design and manufacturing, and rapid and virtual prototyping focused on the footwear sector. Mr. Salas has extensive experience in the design and development of CAD/CAM systems for shoes. In particular, he has been the principal investigator of more than 50 research projects, most of them funded by the European Union.