Workshop SXSW 2017 - Iskander Smit - The Shape of New Things

Published on 22-Jan-2018

Transcript

ThingsCon Salon SXSW, "New Type of Things", @NewDutchWave, March 12, 2017.
AMS, @iskandr

Program of this Salon:
  • Introducing ThingsCon (5 min)
  • What is new in the new type of things? (45 min): presentation and discussion
  • New possibilities (45 min): exploration in groups, share insights
  • Human-centric & responsible IoT (45 min): reflection in groups, share insights

Acknowledgements, sources & inspiration:
  • Research paper by Nazli Cila: "Products as Agents: Metaphors for Designing the Products of the IoT Age"
  • PhD research by Gijs Huisman on social touch technology
  • Tom Coates: essay "The Shape of Things"
  • Just Things Foundation, The IoT Manifesto

We craft digital products that people love to use.

connectable.io: making IoT work.

AMSTERDAM #ThingsConAMS

2014: The Internet of Xmas Gifts
2015: Products with an app
2016: Embodiment of new possibilities

A new type of things and our intelligent future: hardware as a platform.

An example, a new BBQ experience: products become systems.

The Helpful BBQ, by Sietse Taams.

Master of the party, occasionally.

An (ecosystem of) things, data, and activities while using.

From the project report (pp. 56-57), section 13, "A basic system":

The exploration approach led to several design directions that should come together. In a first attempt to combine the solutions to sub-problems, a basic system was created that shows the elements and their connections.

Combining the solutions for measurement, control, and interface, a system is composed that enhances the barbecue so that it can boost the user's performance. The estimate is currently based on meat: user input and real-time temperature readings are used to predict the outcome.

Outcome: a prediction is made from the thickness of the meat and the temperature on the barbecue, estimating the time it will take to cook the meat all the way through. Because the interface shows this progress, the user can make a conscious decision about taking the meat off, whether it has to be cooked all the way through or not.

[System diagram: inputs (measure height, measure temperature, position) feed a computation (Energy = Time x Heat x Height) that drives an interface showing progress from raw to well done; the loop is refined through testing.]
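The slide reproduces the report's computation in outline only. Below is a minimal sketch of how such an estimate could work, under one reading of the report's relation Energy = Time x Heat x Height: the energy a piece must absorb grows with its thickness (height), and the grill delivers heat at a rate set by its temperature. All function names and constants are invented for illustration; this is not the project's actual code.

```python
# Hypothetical constants; the report gives only the relation
# Energy = Time x Heat x Height, not concrete values.
ENERGY_PER_MM = 450.0    # energy the meat must absorb per mm of thickness
HEAT_PER_DEGREE = 1.0    # heat delivered per minute per degree Celsius

def required_energy(height_mm: float) -> float:
    """Energy needed to cook the meat through, growing with thickness."""
    return ENERGY_PER_MM * height_mm

def estimate_minutes(height_mm: float, grill_temp_c: float) -> float:
    """Predicted time until done: Energy = Time x Heat, so Time = Energy / Heat."""
    if grill_temp_c <= 0:
        raise ValueError("the grill is not lit")
    return required_energy(height_mm) / (HEAT_PER_DEGREE * grill_temp_c)

def progress(elapsed_min: float, height_mm: float, grill_temp_c: float) -> float:
    """Fraction from raw (0.0) to well done (1.0), as shown on the interface."""
    return min(1.0, elapsed_min / estimate_minutes(height_mm, grill_temp_c))

# Example: a 20 mm steak on a 200 C grill.
print(round(estimate_minutes(20, 200), 1), "minutes to well done")
print(round(progress(9, 20, 200), 2), "done after 9 minutes")
```

The point of the sketch matches the report's: the system does not take over, it surfaces a prediction so the user can decide when to take the meat off.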
The new type of things:
  • Hardware as a platform
  • Conversations with the machines
  • Beyond screen interactions
  • Context aware, rule-based
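The last bullet characterizes the new type of things as context aware and rule-based. As a minimal sketch of what that could mean in code, assume a rule pairs a condition over the sensed context with an action; the rule, context keys, and thresholds below are invented for illustration, not taken from any product in the talk.

```python
from dataclasses import dataclass
from typing import Callable

# Sensed context, e.g. {"temp_c": 190, "lid_open": False} (hypothetical keys).
Context = dict

@dataclass
class Rule:
    name: str
    condition: Callable[[Context], bool]  # predicate over the sensed context
    action: Callable[[Context], None]     # what the thing does when it holds

def run_rules(rules: list[Rule], ctx: Context) -> None:
    """Fire every rule whose condition holds for the current context."""
    for rule in rules:
        if rule.condition(ctx):
            rule.action(ctx)

rules = [
    Rule("preheat done",
         lambda c: c["temp_c"] >= 180 and not c["lid_open"],
         lambda c: print("Notify app: grill is ready")),
]
run_rules(rules, {"temp_c": 190, "lid_open": False})
```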
In the product system, the software layer is defining the experience.

Rethink products: Cloudwash, by BERG.

The slides reproduce p. 3 of "Tangible User Interfaces", CHI 2006 workshop (draft: please do not distribute):

[...] models, and the information associated with their position and orientation upon the workbench represent and control the state of the urban simulation.

Although standard interface devices for GUIs such as keyboards, mice, and screens are also physical in form, the role of the physical representation in TUI provides an important distinction. The physical embodiment of the buildings to represent the computation involving building dimensions and location allows a tight coupling of control of the object and manipulation of its parameters in the underlying digital simulation.

In Urp, the building models and interactive tools are both physical representations of digital information (shadow dimensions and wind speed) and computational functions (shadow interplay). The physical artifacts also serve as controls of the underlying computational simulation (specifying the locations of objects). The specific physical embodiment allows a dual use in representing the digital model and allowing control of the digital representation. In the next section, the model of TUI is introduced in comparison with GUI to illustrate this mechanism.

Basic Model of Tangible User Interface

The interface between people and digital information requires two key components: input and output, or control and representation. Controls enable users to manipulate the information, while external representations are perceived with the human senses. Fig. 1 illustrates this simple model of a user interface consisting of control, representation, and information.

In the Smalltalk-80 programming language (Burbeck, 1992; Goldberg, 1984), the relationship between these components is illustrated by the "model-view-controller" or "MVC" archetype, which has become a basic interaction model for GUIs.

Drawing from the MVC approach, we have developed an interaction model for both GUI and TUI. We carry over the "control" element from MVC, while dividing the "view" element into two subcomponents, tangible and intangible representations, and renaming "model" as "digital information" to generalize this framework to illustrate the difference between GUI and TUI.

In Computer Science, the term "representation" often relates to the programs and data structures serving as the computer's internal representation (or model) of information. In this article, the meaning of "representation" centers upon external representations: the external manifestations of information in fashions directly perceivable by the human senses, including the visual, auditory, and tactile senses.
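The excerpt's interaction model separates the digital information (MVC's "model") from the control that manipulates it and the representation through which it is perceived. A minimal sketch of that separation for the GUI case, with illustrative class names not taken from the paper:

```python
# Digital information (model): the internal state, never touched directly.
class DigitalInformation:
    def __init__(self) -> None:
        self.value = 0

# Intangible representation (view): e.g. pixels on a bit-mapped display.
class Representation:
    def render(self, model: DigitalInformation) -> str:
        return f"[screen] value = {model.value}"

# Remote control (controller): generic input devices like mouse or keyboard.
class RemoteControl:
    def __init__(self, model: DigitalInformation) -> None:
        self.model = model
    def click_increment(self) -> None:
        self.model.value += 1  # input manipulates the model...

model = DigitalInformation()
view = Representation()
ctrl = RemoteControl(model)
ctrl.click_increment()
print(view.render(model))      # ...output is perceived only via the view
```

In a TUI, by the excerpt's account, a single physical artifact would play both the control role and the (tangible) representation role, collapsing this remote-control indirection.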
GUI

In 1981, the Xerox Star workstation set the stage for the first generation of GUI (Johnson et al., 1989; Smith, 1982), establishing the "desktop metaphor", which simulates a desktop on a bit-mapped screen. The Star workstation was the first commercial system that demonstrated the power of a mouse, windows, icons, property sheets, and modeless interaction. The Star also set several important HCI design principles, such as "seeing and pointing vs. remembering and typing" and "what you see is what you get" (WYSIWYG). The Apple Macintosh brought this new style of HCI to the public's attention in 1984, creating a new trend in the personal computer industry. Now the GUI is widespread, largely through the pervasiveness of Microsoft Windows, PDAs, and cellular phones.

GUI uses windows, icons, and menus made of pixels on bit-mapped displays to visualize information. This is an intangible representation. GUI pixels are made interactive through general "remote controllers" such as mice, tablets, or keyboards. In the pursuit of generality, GUI introduced a deep separation between the digital (intangible) representation provided by the bit-mapped display and the controls provided by the mouse and keyboard.

Figure 2 illustrates the current GUI paradigm, in which generic input devices allow users to remotely interact with digital information. Using the metaphor of a seashore that separates a sea of bits from the land of atoms, the digital information is illustrated at the bottom of the water, and the mouse and screen are above sea level in the physical world.

[Fig. 1: User Interface. The interface between people and digital information requires two key components: 1) an external representation (or view) that users can perceive, and 2) a control with which users can manipulate the representation. Diagram: control (input) and representation (output) mediate between the user and the digital information.]

[Fig. 2: Graphical User Interface. GUI represents information with intangible pixels on a bit-mapped display, plus sound, driven by general-purpose input devices. Diagram: on the physical side, a remote control (input) and the pixel-and-sound intangible representation (output); the digital information sits below, on the digital side.]