Hello! It's nice to meet you. I’m Aca, a developer on the AR team at Arbeon.
Today, I’m here to share some tech content with you, particularly about Human-Computer Interaction (HCI). If you're interested in the topic, this post should be an enjoyable read! So let me jump right in! ☺️
Human-Computer Interaction
New Trends 2022
The Development of HCI
Human-Computer Interaction (HCI) first appeared when personal computers began to be made available in offices and homes in the 1980s. In the early days, HCI focused on making PCs more useful and easier for everyone to learn and use. As it developed alongside technologies such as smartphones, IoT, and AI, it became a crucial discipline for making computers interact with humans in ways similar to how humans interact with each other.
And today, in this Arbeon Tech section, we will go over the various directions and technologies for HCI and the changes they will bring to our lives.
Zero UI
Most users interact with their devices daily through the Graphical User Interface (GUI). The GUI lives on the screen and requires explicit user input, such as mouse and keyboard actions, taps, or swipes, to transmit information. Users therefore have to click through their devices constantly to perform everyday tasks, and this kind of interaction wears on them, eventually becoming an unsatisfying experience.
So, designers and developers are taking a new approach to address such issues in a more natural way: Zero UI, an evolved phase of HCI. The term “Zero UI” was first used by Fjord's designer Andy Goodman at the 2015 SOLID Conference, where he explained, “Zero UI refers to a paradigm where our movements, voice, glances, and even thoughts can all cause systems to respond to us through our environment.”
Why don’t we take a real-life example?
Let's say you're meeting a friend for dinner at 7 PM. Today you'd have to tap through your apps countless times to book the restaurant, get ready, hail a taxi, and finally show up at the restaurant. With Zero UI, a single voice command or simple gesture will be enough to handle all of that, by processing natural language and analyzing your usage patterns.
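To make the scenario concrete, here is a minimal, purely illustrative sketch of how a spoken request could be turned into structured intents that downstream services might act on. The keyword rules, intent names, and slots are hypothetical; a real Zero UI system would rely on full speech recognition, language understanding, and learned user patterns.

```python
import re
from dataclasses import dataclass

@dataclass
class Intent:
    action: str   # e.g. "book_restaurant", "call_taxi" (hypothetical intent names)
    slots: dict   # extracted parameters such as time

def parse_request(utterance: str) -> list[Intent]:
    """Very naive keyword/regex 'NLU' for a dinner-plan request.

    A real Zero UI system would use proper speech recognition,
    language understanding, and learned user patterns instead.
    """
    intents = []
    time_match = re.search(r"(\d{1,2})\s*(am|pm)", utterance, re.I)
    time_slot = time_match.group(0) if time_match else None

    text = utterance.lower()
    if "dinner" in text or "restaurant" in text:
        intents.append(Intent("book_restaurant", {"time": time_slot}))
    if "taxi" in text or "ride" in text:
        intents.append(Intent("call_taxi", {"pickup_time": time_slot}))
    return intents

print(parse_request("Book dinner with Alex at 7 pm and get me a taxi"))
# [Intent(action='book_restaurant', slots={'time': '7 pm'}),
#  Intent(action='call_taxi', slots={'pickup_time': '7 pm'})]
```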
Voice
AI assistant services such as Bixby (Samsung) and Siri (Apple), as well as AI speakers, already use our voice as an input. But voice isn't used everywhere yet, because plenty of problems still need to be solved: speech recognition accuracy has to keep improving, and AI needs to understand human language better. Despite the remaining room for improvement, there is no doubt that our voice will be used in many more ways in the future because of its convenience and effectiveness.
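As a side note, recognition accuracy is commonly tracked with metrics such as word error rate (WER), which compares the recognized text against a reference transcript. Below is a small, self-contained sketch of the standard word-level edit-distance calculation; it is a generic metric, not tied to any particular assistant.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a word-level Levenshtein (edit) distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("turn on the living room lights",
                      "turn the living room light"))  # 2 errors / 6 words ≈ 0.33
```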
Hand Gesture
The introduction of Microsoft Kinect greatly influenced the realization of a natural interface in which the human body acts as a controller. Poses and gestures can be identified by recognizing the positions and orientations of the bones, and this information can be mapped to commands, letting users control the device directly. Complementing this are sensors that track the user's hands and fingers.
For example, Leap Motion recognizes the fingertips and the center of the palm, then uses an IK (inverse kinematics) solver to estimate the finger joints and track both hands. The Nintendo Wii recognizes the user's body movements, enabling a variety of activities through the console. Moreover, many car manufacturers are proposing gesture-based interaction as a replacement for today's touch screens, so that drivers can manage the "infotainment" system more easily.
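To illustrate how tracked hand data can be mapped to a command, here is a toy sketch that detects a pinch from hypothetical fingertip positions. The landmark names and coordinates are made up for the example; real devices such as Leap Motion expose similar data through their own SDKs.

```python
import math

# Hypothetical hand-tracking output: 3D positions (in mm) of tracked points.
hand = {
    "thumb_tip":   (12.0, 105.0, -30.0),
    "index_tip":   (18.0, 110.0, -28.0),
    "palm_center": (0.0, 80.0, -10.0),
}

def is_pinching(hand: dict, threshold_mm: float = 25.0) -> bool:
    """Treat the hand as 'pinching' when the thumb and index fingertips
    come closer than a small threshold, a common way to map a tracked
    pose onto a discrete command such as 'select'."""
    return math.dist(hand["thumb_tip"], hand["index_tip"]) < threshold_mm

if is_pinching(hand):
    print("Pinch detected: trigger the 'select' command")
```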
Gaze
Since the act of gazing is highly intuitive, it is an ideal interaction method for users. It fundamentally requires a head-mounted display (HMD), a wearable display device that can track the user's eye movements. Gaze can be used in various ways: as a simple input method that replaces the computer mouse, to surface useful information by analyzing where users look most often, or to render the area the eyes focus on more vividly.
Since people can easily control the motions of their eyes, this technology is sure to be very effective and innovative for users.
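As an illustration of gaze as an input method, the sketch below models dwell-based selection: an object is "clicked" once the eyes rest on it long enough. The class, threshold, and frame loop are assumptions for the example, not any HMD vendor's API.

```python
class DwellSelector:
    """Selects a gaze target after the eyes rest on it long enough.

    A toy model of dwell-based gaze input: the eye tracker reports which
    object the gaze ray currently hits, and we accumulate dwell time.
    """
    def __init__(self, dwell_time_s: float = 0.8):
        self.dwell_time_s = dwell_time_s
        self.current_target = None
        self.elapsed = 0.0

    def update(self, gazed_object, dt: float):
        """Call once per frame with the gazed object (or None) and frame time."""
        if gazed_object != self.current_target:
            self.current_target = gazed_object   # gaze moved, restart the timer
            self.elapsed = 0.0
            return None
        self.elapsed += dt
        if gazed_object is not None and self.elapsed >= self.dwell_time_s:
            self.elapsed = 0.0                   # fire once, then reset
            return gazed_object                  # treat this as a "click"
        return None

selector = DwellSelector()
for frame in range(60):                          # ~1 second of frames at 60 FPS
    selected = selector.update("play_button", dt=1 / 60)
    if selected:
        print(f"Dwell select: {selected}")
```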
AR/VR and Immersive Interaction
Until now, we have interacted with our devices through the GUI on the screen. Recently, however, AR and VR have expanded graphical elements beyond the screen, and users can now touch them directly. Google and Apple provide frameworks for building AR apps through ARCore and ARKit, Microsoft launched HoloLens, and Facebook acquired Oculus and changed its company name to Meta as it pushed into AR and VR. Use cases and studies of AR and VR are growing accordingly. Even though users are still dissatisfied with the nausea and the lack of immersiveness of current systems, various technologies are under way to solve these problems.
Haptic Technology
One way to make experiences more immersive is through touch. By their nature, virtual objects cannot actually be felt when users touch them, which is why touch is considered one of the most important interaction channels alongside sight and hearing. Facebook Reality Labs (FRL) has revealed the “Bellowband,” a soft wristband with pneumatic bellows, and Tasbi (Tactile and Squeeze Bracelet Interface), which uses vibrotactile actuators and a wrist-squeeze mechanism, while Meta has shown a haptic glove. Moreover, Moiin revealed the X-1, a motion-tracking suit with built-in haptics, showing the world that interactions can become more immersive and real through touch.
Teleportation & Holoportation
Another good way to create immersion is to make users feel as if they are actually there. Many of those who have tried AR and VR were disappointed because poor graphics and awkward controls failed to immerse them in the virtual world. However, future technological developments are set to make it difficult for users to tell whether the objects right in front of their eyes are real or not.
Last March, Mark Zuckerberg, the CEO of Meta, appeared on the podcast "The Information’s 411" and said that the next-generation Oculus will provide teleportation experiences. He argued that the body doesn’t have to be physically transported to another place; the place simply needs to look convincingly real, just like the real world, and such an experience wouldn't be much different from teleportation. But this poses a serious challenge that goes beyond graphics, because spatial senses such as hearing and touch must also be convincing enough to feel real. And to take it a step further, users should be able to feel them without any additional devices.
On the other hand, “holoportation,” a subconcept of teleportation, refers to transporting a hologram. It became widely known after Microsoft released a demonstration video in 2016. In short, whereas teleportation means transporting humans and objects into a space that feels like a new place, holoportation means bringing humans and objects into a space the user already recognizes. In the released video, we experience the point of view of a person wearing a HoloLens, watching another person in a studio fitted with cameras that record her in real time.
Not only that, Los Angeles-based startup “PORTL” is credited with overcoming the shortcomings of holoportation and pushing its graphics to be as real as possible. This type of holoportation takes place in a phone-booth-shaped device called the “PORTL Epic.” Video recorded in real time is rendered as a hologram inside the PORTL Epic; it transfers not only 4K high-resolution video (1:1 ratio) but also shadows and voices, so the surroundings change naturally. There's still room for improvement, though, as the person being recorded must remain in the studio.

The Future of HCI
In the past, physical devices such as mice and keyboards were considered HCI tools, but they reduced the intuitiveness of the interface and kept users from interacting with computers as naturally as possible. So the question of how to achieve the most natural interaction with computers and other devices is becoming increasingly important in the field. Moreover, the user interface will not be limited to display screens; it will be integrated into every aspect of users' daily lives, tailored to their needs, and become ubiquitous.
Adaptive Interface and Intelligent Click
The user interface will gradually evolve along with AI, and it will be able to choose the appropriate action for each situation by predicting users' behavior and the context they are in. For example, the IoT will recognize that the user has left the house to go jogging, and the AI will learn that the user often listens to music after stepping outside.
Thanks to this, to play some music and adjust its volume, the user will only have to make a simple gesture after leaving the house. Clicking will become a thing of the past, because the system will recommend tasks for the user. The only thing the user has to do is make a simple gesture to “click,” simply to confirm the task.
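Here is a rough sketch of that flow under stated assumptions: a hypothetical context object, a couple of hand-written preference rules standing in for what the AI would actually learn, and a single confirmation gesture acting as the "intelligent click."

```python
from dataclasses import dataclass

@dataclass
class Context:
    user_outside: bool   # e.g. inferred from IoT door/location sensors
    activity: str        # e.g. "jogging", inferred from motion data

# Hypothetical learned preferences: (context predicate, suggested task)
RULES = [
    (lambda c: c.user_outside and c.activity == "jogging", "play running playlist"),
    (lambda c: not c.user_outside, "dim living room lights"),
]

def suggest_task(context: Context) -> str | None:
    """Pick the task the system predicts the user wants right now."""
    for predicate, task in RULES:
        if predicate(context):
            return task
    return None

def on_confirm_gesture(task: str):
    print(f"Executing: {task}")

# The user steps outside for a jog; the system proposes a task and waits
# for a single confirmation gesture instead of a chain of clicks.
ctx = Context(user_outside=True, activity="jogging")
task = suggest_task(ctx)
if task is not None:
    on_confirm_gesture(task)   # the "intelligent click": one gesture confirms it
```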
Ambient Computing
Before we finish, allow me to introduce ambient computing. Ambient computing refers to an environment where people use computers without realizing that they are using them. The term describes the breaking down of barriers in computing, making users unaware of the computer's existence. This is the final phase of immersive interaction. A basic level of ambient computing can already be experienced in many households, as people use IoT devices to turn lights on and off with spoken commands. But it isn't perfect yet, because it still requires separate devices and activation phrases.
To Wrap Up
In the near future, humankind will be able to cook under lights that turn on automatically, without noticing that a computer is involved. Kitchen tools will analyze which ingredients are needed and in what quantities, according to each user's preferences, ensuring a more convenient lifestyle for all. When that happens, it will be the computers that adjust to humans, not the other way around as it is now. Things that only happen in science-fiction films like “Minority Report” will happen to all of us in the near future, and Arbeon is confident that it will be at the center of this new lifestyle.