Draft:Heads-Up Computing

Introduction

Heads-Up Computing is a human-computer interaction approach initially proposed by Shengdong Zhao, professor at the City University of Hong Kong.[1] This interaction design approach seeks to integrate computing support seamlessly into daily activities within ubiquitous environments.

Related applications have been explored in the design of games,[2] video learning,[3] and subtle interaction techniques.[4] The vision suggests a potential solution to problems that arise when digital and real-world interactions compete for the user's attention. For instance, the phenomenon known as the “smartphone zombie” illustrates how using a mobile phone while walking can diminish situational awareness.[5] Heads-Up Computing instead aims to position digital interactions as complementary to real-world activities.

Heads-Up Computing remains an evolving field of research and development, with ongoing exploration of its practical applications and implications. While the long-term vision may involve embedding computing capabilities directly into the human body, the current definition of Heads-Up Computing primarily involves wearable technology incorporating body-compatible hardware, together with multimodal interaction[6] and resource-aware interaction that adjusts dynamically to the user's context.[7]

Figure: The human's co-evolution with tools.

Characteristics

Heads-Up Computing is defined by three characteristics:

  1. Body-compatible hardware components. This design principle aligns the device's input and output modules with human sensory channels.[8] Recognizing the head and hands as the body's key sensing and actuating hubs, the design includes a head-piece (such as smart glasses or earphones) for visual and audio output, a hand-piece (such as a ring or wristband) for manual input and haptic feedback, and potentially a body-piece (such as a robot) that can perform additional physical tasks for the user.
  2. Multimodal voice, gaze, and gesture interaction. With the head-, hand-, and body-pieces in place, users can issue commands via voice, gaze, or subtle gestures involving the head, mouth, and fingers. These modalities are chosen because they can largely be performed in scenarios where the eyes and hands are busy, thereby covering a broad range of interaction needs in daily activities.
  3. Resource-aware interaction model. A Heads-Up Computing interface needs to be generated dynamically according to the resources available to the user at any given moment. The system therefore needs to monitor and be aware of the user's current activity, as well as the environmental constraints faced at that moment. An important area of development for this paradigm is a quantitative model that optimizes interactions by predicting the relationship between the constraints on human perceptual resources and the primary task. This model would be responsible for delivering just-in-time information to and from the head-, hand-, and body-pieces (see the illustrative sketch below).
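
The resource-aware selection of input and output modalities described above can be illustrated with a brief sketch. The following Python snippet is a hypothetical, simplified illustration rather than an implementation from the cited literature; the names UserContext and choose_modalities are assumptions introduced here. It assumes the system knows which human resources (eyes, hands, ears) the primary task occupies and picks complementary channels for the head- and hand-pieces.

  # Hypothetical sketch of a resource-aware interaction model (illustrative names,
  # not an API from the cited work).
  from dataclasses import dataclass

  @dataclass
  class UserContext:
      """Which human resources the primary task currently occupies."""
      eyes_busy: bool   # e.g. walking through traffic, reading a recipe
      hands_busy: bool  # e.g. cooking, carrying groceries
      ears_busy: bool   # e.g. in conversation, on a noisy street

  def choose_modalities(ctx: UserContext) -> dict:
      """Pick input/output channels that avoid the resources the primary task needs."""
      output = "audio via earphones" if ctx.eyes_busy else "visual via smart glasses"
      if ctx.hands_busy:
          input_channel = "voice or subtle head/mouth gestures"
      else:
          input_channel = "finger gestures on a ring or wristband"
      # When every channel is saturated, non-urgent information is deferred.
      defer = ctx.eyes_busy and ctx.hands_busy and ctx.ears_busy
      return {"output": output, "input": input_channel, "defer_notifications": defer}

  # Example: while cooking (eyes and hands busy), output shifts to audio and
  # input to voice or subtle gestures.
  print(choose_modalities(UserContext(eyes_busy=True, hands_busy=True, ears_busy=False)))

In a full system, the boolean flags would be replaced by continuous estimates derived from sensors, and the simple selection rule by the quantitative model described above.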

References

  1. ^ Zhao, Shengdong; Tan, Felicia; Fennedy, Katherine (September 2023). "Heads-Up Computing: Moving Beyond the Device-Centered Paradigm". Communications of the ACM. 66 (9): 56–63. doi:10.1145/3571722.
  2. ^ Soute, Iris; Markopoulos, Panos; Magielse, Remco (July 2010). "Head Up Games: combining the best of both worlds by merging traditional and digital play". Personal and Ubiquitous Computing. 14 (5): 435–444. doi:10.1007/s00779-009-0265-0.
  3. ^ Ram, Ashwin; Zhao, Shengdong (19 March 2021). "LSVP: Towards Effective On-the-go Video Learning Using Optical Head-Mounted Displays". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 5 (1): 1–27. doi:10.1145/3448118.
  4. ^ Sapkota, Shardul; Ram, Ashwin; Zhao, Shengdong (27 September 2021). "Ubiquitous Interactions for Heads-Up Computing: Understanding Users' Preferences for Subtle Interaction Techniques in Everyday Settings". Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction. pp. 1–15. doi:10.1145/3447526.3472035. ISBN 978-1-4503-8328-8.
  5. ^ Appel, Markus; Krisch, Nina; Stein, Jan-Philipp; Weber, Silvana (June 2019). "Smartphone zombies! Pedestrians' distracted walking as a function of their fear of missing out". Journal of Environmental Psychology. 63: 130–133. doi:10.1016/j.jenvp.2019.04.003. S2CID 150545607.
  6. ^ Ghosh, Debjyoti; Foong, Pin Sym; Zhao, Shengdong; Liu, Can; Janaka, Nuwan; Erusu, Vinitha (21 April 2020). "EYEditor: Towards On-the-Go Heads-Up Text Editing Using Voice and Manual Input". Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. pp. 1–13. doi:10.1145/3313831.3376173. ISBN 978-1-4503-6708-0. S2CID 218483565.
  7. ^ Lindlbauer, David; Feit, Anna Maria; Hilliges, Otmar (17 October 2019). "Context-Aware Online Adaptation of Mixed Reality Interfaces". Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. pp. 147–160. doi:10.1145/3332165.3347945. hdl:20.500.11850/378788. ISBN 978-1-4503-6816-2. S2CID 201702543.
  8. ^ Mueller, Florian ‘Floyd’; Semertzidis, Nathan; Andres, Josh; Marshall, Joe; Benford, Steve; Li, Xiang; Matjeka, Louise; Mehta, Yash (31 October 2023). "Toward Understanding the Design of Intertwined Human–Computer Integrations" (PDF). ACM Transactions on Computer-Human Interaction. 30 (5): 1–45. doi:10.1145/3590766. S2CID 257927193.