
1077-2626 © 2020 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.

Manuscript received 10 Sept. 2019; accepted 5 Feb. 2020. Date of publication 18 Feb. 2020; date of current version 27 Mar. 2020. Digital Object Identifier no. 10.1109/TVCG.2020.2973441

EarVR: Using Ear Haptics in Virtual Reality for Deaf and Hard-of-Hearing People

Mohammadreza Mirzaei, Peter Kán, and Hannes Kaufmann

Fig. 1. The experience of using EarVR among Deaf and Hard-of-Hearing users.

Abstract—Virtual Reality (VR) has great potential to improve the skills of Deaf and Hard-of-Hearing (DHH) people. Most VR applications and devices are designed for persons without hearing problems, so DHH persons face many limitations when using VR. Adding special features to a VR environment, such as subtitles or haptic devices, helps them. Previously, it was necessary to design a special VR environment for DHH persons. We introduce and evaluate a new prototype called “EarVR” that can be mounted on any desktop or mobile VR Head-Mounted Display (HMD). EarVR analyzes 3D sounds in a VR environment and locates the direction of the sound source that is closest to the user. It notifies the user about the sound direction using two vibro-motors placed on the user’s ears. EarVR helps DHH persons complete sound-based VR tasks in any VR application with 3D audio and a mute option for background music. Therefore, DHH persons can use all VR applications with 3D audio, not only those designed for them. Our user study shows that DHH participants completed a simple VR task significantly faster with EarVR than without it, with completion times very close to those of participants without hearing problems. It also shows that DHH participants were able to finish a complex VR task with EarVR, while without it they could not finish the task even once. Finally, our qualitative and quantitative evaluation indicates that DHH participants preferred to use EarVR and that it encouraged them to use VR technology more.

Index Terms—Virtual reality, haptic, vibrotactile, 3D audio, sound localization, deaf and hard-of-hearing


1 INTRODUCTION

Three main pillars of immersive experiences are visual quality, sound quality, and intuitive interactions [1]. Focusing on these pillars simultaneously can help a user achieve the feeling of full immersion. Sound quality is a very important factor and plays a crucial role in our perception of VR. High-quality sound contributes significantly to an immersive experience in VR [2]. It enables the user to focus on important objects and interact properly in VR. Spatial audio is a fundamental concept that has been under research for a long time. It includes important concepts such as surround sound and binaural audio [3, 4]. These different types of audio improve the feeling of immersion in VR [5, 6].

• Mohammadreza Mirzaei, *. E-mail: [email protected].
• Peter Kán, *. E-mail: [email protected].
• Hannes Kaufmann, *. E-mail: [email protected].

* Institute of Visual Computing and Human-Centered Technology, Vienna University of Technology, Vienna, Austria.

In a VR environment, we can move around freely and perceive sound from different directions, as in a real environment. With 3D audio, we can create virtual environments in which audio sources behave like real-world audio sources [7]. 3D audio is a very important factor in immersive VR environments. It increases the feeling of immersion in VR and can draw the user’s attention to different locations (e.g., behind, below, above, left, and right). This raises the question of whether DHH persons have the same VR experience as persons without hearing problems. Previous research showed that their experience is different and that designing a special VR environment for them improves DHH persons’ experience with VR [8, 9].
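To illustrate how 3D audio can redirect attention, the sketch below picks the sound source closest to the listener and classifies its direction relative to the facing direction. It is a simplified 2D host-side Python sketch; the function name and the four-way classification are illustrative assumptions, not code from the paper:

```python
import math

def nearest_source_direction(head_pos, head_yaw, sources):
    """Pick the closest sound source (2D positions) and classify it as
    left/right/front/behind relative to the listener's yaw (radians)."""
    sx, sy = min(sources, key=lambda s: math.dist(head_pos, s))
    dx, dy = sx - head_pos[0], sy - head_pos[1]
    fx, fy = math.cos(head_yaw), math.sin(head_yaw)
    forward = dx * fx + dy * fy   # projection onto the facing direction
    side = fx * dy - fy * dx      # positive: source is on the left
    if abs(side) > abs(forward):
        return "left" if side > 0 else "right"
    return "front" if forward > 0 else "behind"
```

For a listener at the origin facing along +x, a source at (1, 2) is closer than one at (5, 0) and lies to the left, so the function returns "left".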

According to a report from the World Federation of the Deaf [10], there are more than 70 million people with hearing problems around the world. They face many limitations in using today’s new technologies, such as VR. These limitations also affect their educational skills [11]. Thus, attention to DHH persons’ needs is very important. VR provides great opportunities for DHH persons, such as improving their learning skills [11, 12]. Their skills can be improved in a secure and controlled VR environment, with methods that might not be possible in the real world [13]. However, they cannot interact very well in the VR environment because of a gap in the feedback loop, which influences task performance [13].

DHH persons report that they can feel sound waves (audio bass waves) with their bones and muscles. Some research also showed that people can feel sounds and haptic cues with their muscles [14] and on the face [15]. Furthermore, Digital Signal Processing (DSP) has driven new paradigms of audio and music: for example, music controlled through gesture or dance [16], music without an audio component (only visuals) [17], and vibro-tactile feedback for voicing enhancement [18]. Recently, scientists used tactile music to develop vibro-tactile audio systems [19]. These systems can deliver music patterns to the skin through vibrations [20]. Based on recent studies, it is possible to use vibro-tactile haptic devices to improve immersive VR experiences for DHH persons.

In this paper, we introduce a prototype called “EarVR”. It can be mounted on any VR Head-Mounted Display (HMD) and can therefore be used with both desktop and mobile VR applications. We studied the effects of EarVR on DHH persons using VR, examining their experience, sound localization, task completion time, and desire to use VR. Our proposed system performs the following tasks:

1. Analyzes VR sounds and determines the direction of the closest sound to the user in real time.

2. Sends the result to the user’s ears through vibrotactile feedback.

3. Helps the user accomplish given tasks in VR applications.
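The three steps above can be sketched as a host-side simulation. The function name, threshold, and dominance ratio here are illustrative assumptions, not the authors’ firmware: comparing left/right channel levels stands in for the analysis stage, and the returned label stands in for driving the corresponding vibro-motor:

```python
import math

def rms(samples):
    """Root-mean-square level of one audio channel."""
    return math.sqrt(sum(s * s for s in samples) / len(samples)) if samples else 0.0

def locate_and_notify(left, right, threshold=0.05, ratio=1.5):
    """Decide which vibro-motor(s) to drive for one stereo audio frame.

    Returns 'left'/'right' when one channel clearly dominates, 'both'
    when the source is roughly centered (in front of or behind the
    user), and None when the frame is effectively silent.
    """
    l, r = rms(left), rms(right)
    if max(l, r) < threshold:
        return None          # no audible 3D sound: keep motors off
    if l > r * ratio:
        return "left"
    if r > l * ratio:
        return "right"
    return "both"
```

A frame with a strong left channel, e.g. `locate_and_notify([0.5] * 10, [0.1] * 10)`, maps to the left motor.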

The rest of the paper is organized as follows. In Section 2, related work on using VR and haptic devices for DHH users is presented. In Section 3, we explain our proposed system, “EarVR”, in terms of structure, system requirements, and system design. In Section 4, the experimental results of different VR tasks are presented. Finally, in Section 5, we discuss the results and conclude the paper.

2 RELATED WORK

Researchers have used VR in different areas to help DHH persons, such as education, health care, and entertainment. P. Paudyal et al. [12] designed a VR classroom called “DAVEE”. It helps DHH persons ask questions and receive answers in every session. They can also interact with other students in this virtual class and record it for offline use. The results showed the effect of using VR to improve communication of DHH students in class. M. Teofilo et al. [8] developed a system for interpreting speech, converting it to text, and showing it as subtitles in a VR environment. This system helps DHH persons attend live theaters and understand the dialogues. Based on their results, more DHH persons desire to attend and enjoy live theaters using their system. D. Passig and S. Eden [21] used VR to improve induction skills among DHH children. They showed that VR can help DHH children learn school skills better and that it is very effective for their future relations and interactions in society.

Scientists have also shown that haptic and wearable devices are very useful for DHH persons, especially in the field of sound awareness. Researchers such as M. Lee et al. (ActivEarring) [22], D. Huang et al. (Orecchio) [23], and Y. Kojima et al. (Pull-Navi) [24] showed that the sensitivity of the ears can be used to create smart ear-worn devices that transfer tactile information to both ears. Also, F. Wolf and R. Kuber [25] and V.A. de Jesus Oliveira [26] designed and evaluated a head-mounted tactile prototype and a vibro-tactile VR HMD to support spatial and situational awareness among users. However, none of these researchers tested their devices on DHH persons.

D. Jain et al. [27] and F.W. Ho-Ching et al. [28] evaluated visualizations for spatially locating sound (on Augmented Reality HMDs) and for providing awareness of environmental audio (on visual displays) to deaf individuals. They showed that persons with hearing loss use visual cues to recognize and locate sounds. L. Findlater et al. [29] conducted an online survey to investigate preferences for mobile and wearable sound-awareness systems among DHH participants. They showed that almost all participants wanted both visual and haptic feedback devices. L. Sicong et al. [30] designed a smartphone-based acoustic event sensing and notification system called “UbiEar” and showed that it can assist DHH students in becoming aware of important acoustic events in their daily life.

Other researchers, such as M. Shibasaki et al. [31], used a haptic feedback system to induce the feeling of tap dancing in DHH persons. They showed that DHH persons who used their system experience tap dancing very differently and enjoyed it a lot, even though they could not hear the sound. B. Petry et al. [32] developed a system for DHH persons called “MuSS-Bits” that helps them feel musical instruments’ sounds on their skin using haptic feedback. D. Guzman et al. [33] developed a special gauntlet for DHH persons that can interpret sign language to speech, improving DHH persons’ communication and interaction in VR environments. There are many other cases, such as Emoti-Chair [34], Immersive Game System [35], and LIVEJACKET [36], that showed the effects of using haptic feedback devices for DHH persons.

Nowadays, using different types of wearable devices in VR is very common. A few companies, such as TeslaSuit [37], bHaptics TactSuit [38], and HardLight VR [39], have developed smart haptic and biometric VR suits. Unfortunately, most of these devices are designed for persons without hearing problems, and there is no comprehensive research on their usability for DHH persons. The important point in most of these cases is that DHH persons can use them but cannot enjoy them like persons without hearing problems. In almost all of these studies, a special VR environment or device must be designed for DHH persons: VR developers must use subtitles, symbols, translators, highlights, etc. in the VR environment to make it usable for DHH persons. Therefore, DHH persons cannot use VR applications that are designed for persons without hearing problems.

To the best of our knowledge, no research in this area has used vibro-motor haptic feedback on DHH persons’ ears to achieve sound-source localization in VR environments. EarVR works in both desktop and mobile VR and does not need a VR environment pre-designed for DHH persons. VR developers can design any VR application that both DHH persons (using EarVR) and persons without hearing problems can use. They only need to consider two simple factors in their designs to make them compatible with EarVR: 1) a mute option for background music, and 2) 3D audio. These two factors do not change the appearance of the VR environment, so it is impossible to tell that it was designed for DHH persons.

3 PROPOSED SYSTEM

Our proposed system, EarVR, consists of a hardware and a software component. The hardware prototype consists of an Arduino Nano, two coin vibro-motors, a stereo audio cable, and a Universal Serial Bus (USB) cable. The Arduino is used to process the VR environment’s sounds. The vibro-motors transfer the feeling of vibration to the user’s ears. The stereo and USB cables carry audio to the Arduino and power it from a desktop PC or mobile phone. A VR HMD is also needed to display the VR application. Arduino hardware designs range from 8-bit micro-controllers to fully featured 32-bit ARM Central Processing Units (CPUs). Arduino provides many advantages for academic purposes [40]: it is inexpensive, mobile (low-power), open-source and cross-platform, and extensible in both software and hardware.

Despite all of its advantages, Arduino has some limitations, especially regarding processing power for DSP projects. Some specific hardware designed for DSP is based on the Arduino [41, 42], and previous research showed that the Arduino can be used for real-time digital audio processing [43]. For our use case, we only need to perform a frequency analysis and to switch the vibro-motors on and off without using Pulse Width Modulation (PWM). No additional hardware is needed, because the processing power of the Arduino Nano is sufficient and we intend to keep our prototype as simple as possible.
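The on/off control without PWM can be illustrated with a small sketch. This is a hypothetical host-side simulation of the decision logic, not the actual Arduino code, and the silence threshold is an assumption:

```python
def drive_motors(frames, threshold=0.05):
    """For each stereo frame (left_level, right_level), emit (left_on, right_on).

    With no PWM, the output is strictly binary: a motor is either at its
    full resonance vibration or off, so intensity cannot be graded and
    'strength' can only be conveyed through vibration duration.
    """
    return [(int(l >= threshold), int(r >= threshold)) for l, r in frames]
```

For example, a left-only frame followed by a right-only frame and silence yields `[(1, 0), (0, 1), (0, 0)]`.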

For the software part, we developed code for the Arduino that processes and analyzes the input stereo sound channels and controls the two vibro-motors. The input stereo sound has a sampling rate between 44 kHz and 48 kHz, depending on the specifications of the sound card (output from the PC) and the audio files used in the VR environment. Based on a design guideline for tactile devices by J.B. van Erp [44], the intervals of the vibro-motors were set to 250 ms, which allows them to provide an effective minimum duty cycle. The duration of vibration can be changed depending on the VR scenario.
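The 250 ms interval can be thought of as quantizing any requested vibration duration into fixed pulses. The following sketch (a hypothetical helper, not the paper's implementation) shows one way to enforce that minimum duty cycle:

```python
PULSE_MS = 250  # minimum vibro-motor interval from van Erp's guideline [44]

def pulse_schedule(active_ms):
    """Quantize a requested vibration duration into 250 ms pulses.

    Returns (number_of_pulses, actual_vibration_ms). Any non-zero
    request yields at least one full pulse, so the motor always runs
    for an effective minimum duty cycle.
    """
    pulses = max(1, round(active_ms / PULSE_MS)) if active_ms > 0 else 0
    return pulses, pulses * PULSE_MS
```

A 100 ms request is stretched to one full 250 ms pulse, while a 600 ms request is rounded to two pulses (500 ms).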

2084 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 26, NO. 5, MAY 2020

We set only one voltage level for the vibro-motors to turn them on or off (3 V = on). This means that if the Arduino detects a sound, it turns on the vibro-motor corresponding to the sound direction at its full resonance vibration frequency (usually between 170 Hz and 240 Hz, depending on the coin vibro-motor specifications). According to a research guideline by K. Myles and J.T. Kalb [45], the recommended range for effective and comfortable tactile-only communication on the head is between 32 Hz and 64 Hz [25]. Therefore, when placing the vibro-motors in the ears (for VR HMDs without embedded headphones), we can set the vibration frequency to 45 Hz by controlling the voltage level of the vibro-motors.

Fig. 2. The EarVR design.

3.1 Hardware
Figure 3 shows the EarVR hardware elements. We used the following elements to implement the EarVR prototype:

1. Arduino Nano (ATmega328)
2. Two coin vibro-motors (14 mm)
3. Samsung Odyssey VR HMD (or any VR HMD)

Fig. 3. EarVR mounted on VR HMD.

The Samsung Odyssey VR HMD is used to display the VR environment. It has dual 3.5" AMOLED screens (1440x1600 pixels each), a 110-degree FOV, and AKG 360-degree spatial sound headphones that are very useful for our work.

We mounted EarVR on the Samsung Odyssey VR HMD. The Arduino is placed on top of the VR HMD, and the vibro-motors are placed inside the AKG headphones. As mentioned before, however, it is possible to use any VR HMD: EarVR only needs to connect to a headphone jack and a USB port. For mobile VR, we can use the headphone jack on mobile devices together with a Universal Serial Bus (USB) On-The-Go (OTG) cable (Figure 4).

Fig. 4. EarVR connections: PC or laptop (left), and mobile phone (right).

We also attached special soft and flexible plastics to the vibro-motors. This allows users to place the vibro-motors inside their ears without any unpleasant feeling when using VR HMDs without embedded headphones, such as the Samsung Gear VR or Google Daydream (Figure 5).

Fig. 5. Vibro-motors placed in the ears.

The Samsung Odyssey VR HMD is connected to a powerful VR-ready laptop with the following specifications: an Intel Core i7 processor, 16 GB of RAM, an Nvidia GeForce GTX 1080 graphics card, and a 64-bit Windows 10 Operating System (OS). The processing power of the Arduino Nano is sufficient to analyze the input stereo sounds and to control the two vibro-motors. The Arduino Nano's operating voltage (5 V) can be provided through a USB cable connected to a PC USB port or to a mobile phone's USB connector (through an OTG cable). A rechargeable lithium-ion (Li-ion) battery can also be used to provide the required power.

The stereo sounds are transferred to the Arduino through a stereo audio cable connected to the PC or mobile phone's 3.5 mm headphone jack. To keep the system as simple as possible and avoid additional circuit boards, we programmed the Arduino to switch the vibro-motors on and off instead of controlling their speed with PWM. The vibro-motors can switch extremely fast and can be driven by a low-current source. Our goal was to indicate the direction of the sound closest to the user in real time, so PWM was not necessary. We noticed that using the vibration strength as a continuous indicator of the loudness of the incoming sound, instead of a binary one (on/off), not only confuses the user in complex scenarios with multiple sound sources but also increases the processing load on the Arduino.

The Arduino processes the input stereo sound by analyzing the left and right stereo channels. The left vibro-motor (on the left ear) vibrates if the intensity of the left channel is higher than that of the right, and vice versa. Therefore, the user knows the direction of the incoming sound.

If the intensity is equal in both channels, the vibro-motors do not vibrate. In this case, there are two main situations: 1) the sound source is in front of the user, which is easy to find because the user can see it, or 2) the sound source is behind, above, or below the user. If users cannot find the sound source in front of or behind them, they check above and below.
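The direction logic above can be summarized in a few lines. This is a Python sketch with hypothetical names and an assumed dead-zone tolerance for "equal" intensity; the real implementation is Arduino code:

```python
def motor_command(left_level, right_level, dead_zone=0.05):
    """Decide which vibro-motor to drive from the per-channel sound
    intensity. Returns 'left', 'right', or None when the channels are
    roughly equal (source in front, behind, above, or below the user).
    `dead_zone` is an assumed tolerance, not a value from the paper."""
    if left_level - right_level > dead_zone:
        return "left"
    if right_level - left_level > dead_zone:
        return "right"
    return None
```

With a binary on/off design, this comparison is the only per-frame decision the micro-controller has to make, which keeps the processing load low.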

3.2 Software
We developed the Arduino code using the Fast Fourier Transform (FFT) and a custom algorithm, and deployed the code onto the Arduino using the Arduino Integrated Development Environment (IDE) [40]. For the VR application, we developed two task-based, sound-related VR games using the Unity3D game engine, version 2019.1.5f1 [46]. Google's Resonance Audio Software Development Kit (SDK) [47] was used to add 3D audio to the objects in the VR environment. The SteamVR v2 plugin [48] was utilized to program the VR controllers.
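To illustrate the kind of frequency analysis involved, the per-channel spectral energy can be computed from an FFT as below. This is a desktop NumPy sketch under our own assumptions (band limits and function names are ours); the actual analysis runs on the Arduino:

```python
import numpy as np

def channel_energy(samples, rate=48_000, band=(20, 20_000)):
    """Spectral energy of one audio channel inside `band` (Hz),
    computed from the FFT magnitude (DC is excluded by the band's
    lower edge). The louder channel indicates the sound direction."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(spectrum[mask] ** 2))
```

Comparing `channel_energy` for the left and right channels then drives the corresponding vibro-motor, as described in Section 3.1.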

We decided to design two task-based VR games. The VR environment is an enclosed room (Figure 6), and the user spawns in its center. The base room 3D model is from the Google Resonance Audio SDK, and we improved it for our project. 3D audio is essential for EarVR because the signal processing code inside the Arduino analyzes the input 3D sounds (stereo channels); it does not work in VR applications with background music, so background music must be muted before using EarVR.

Fig. 6. Our VR environment for Task 1 (top) and Task 2 (bottom).

We designed the two tasks for two specific purposes: Task 1 was designed to measure the completion time, and Task 2 was dedicated to counting task completions. Two groups of people were tested: (1) DHH persons and (2) persons without hearing problems. We stored and analyzed the results for each group. Our goal was to determine the effects of using EarVR on the speed of completing VR tasks and on the desire of DHH persons to use VR technology.

4 USER STUDIES

As mentioned in Section 3.2, we designed two different VR games for testing EarVR. A user study was designed to assess the functionality and acceptability of EarVR. We used comments from the DHH community when designing our final tasks.

Some of these comments were very useful for us. For example, many DHH persons did not want to use a heavy and bulky VR HMD; therefore, we used the Samsung Odyssey VR HMD, which is comfortable to wear. We also used coin vibro-motors because they preferred small vibro-motors.

4.1 Pilot Study

We recruited two groups of people for our pilot study. Group 1 consisted of 5 persons without hearing problems from the university staff (3 men, 2 women; ages 18-50, X = 33.6). Group 2 consisted of 5 DHH persons (no hearing in both ears) from the DHH community (4 men, 1 woman; ages 18-45, X = 32.2) who volunteered for our test. All the participants were familiar with VR technology and had tried it at least once before. None of them had any physical or emotional problems with using VR. VR safety warnings were stated carefully in our consent form, and all volunteers were completely aware of them.

4.1.1 Task and Procedure

We designed a very simple VR game for our pilot study and asked the participants to play it. In this simple VR game, the user was placed in the center of the environment, and 4 speakers (3D objects) were placed around the user (left, right, front, back), as shown in Figure 7.

Fig. 7. VR environment of the pilot study.

In this experiment, one speaker starts playing a continuous sound (a looped wave) at a random time, and the user must select it using the VR controller as a pointer. If the user selects the correct speaker, the next random speaker starts playing a sound. This process continues four times. After the fourth speaker is selected, the user's task completion time is recorded, which marks the user's success in completing the given task (win state). If the user selects a wrong speaker, the task is over and recorded as a failure (game-over state). Each participant experienced two conditions: 1) with EarVR and 2) without EarVR. Users could play the game up to three times in total if they wanted. We applied a Wilcoxon signed-rank test to the average completion time, number of plays, number of wins, and number of game overs for the with- and without-EarVR conditions, using a significance level of α = 0.05.
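The win/lose flow of this pilot game can be modeled as follows (Python for illustration; the study's games were implemented in Unity3D, and the names here are hypothetical):

```python
def run_trial(active_sequence, selections):
    """Simulate the pilot task: `active_sequence` is the random order in
    which the four speakers play, and `selections` is what the user
    picks. Any wrong pick ends the game immediately; four correct picks
    in a row complete the task (win state)."""
    for active, picked in zip(active_sequence, selections):
        if picked != active:
            return "game over"
    return "win" if len(selections) >= len(active_sequence) else "in progress"
```

In the real game, the completion time is recorded only when the win state is reached.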

4.1.2 Pilot Study Results

Persons Without Hearing Problems: The results are shown in Table 1. Persons without hearing problems were able to complete the game with an average completion time of 5.84 seconds without EarVR and 5.34 seconds with EarVR. Each user in this group played the game at least twice: two members played two times, and the other three played three times.

A Wilcoxon test indicated a significant main effect of using EarVR on average completion time (Z = −2.032, p = 0.042), but the test did not elicit a statistically significant change in the number of plays (Z = −1, p = 0.317), the number of wins (Z = −1, p = 0.317), or the number of game overs (no game overs occurred).
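For reference, the Wilcoxon signed-rank statistic used here can be computed as below. This is a minimal implementation of the statistic only (no p-value lookup; in practice a statistics package would be used), and the arrays in the usage example are illustrative placeholders, not the study data:

```python
def wilcoxon_statistic(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired
    samples: drop zero differences, rank |d| (average ranks for ties),
    then sum the ranks of positive and negative differences."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

Small W values (relative to the number of pairs) indicate a systematic difference between the paired conditions, which is then converted to the reported Z and p values.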


MIRZAEI ET AL.: EARVR: USING EAR HAPTICS IN VIRTUAL REALITY FOR DEAF AND HARD-OF-HEARING PEOPLE 2087



Table 1. Pilot study results for persons without hearing problems.

DHH Persons: The results are shown in Table 2. The results were very different for DHH persons compared to persons without hearing problems. Without EarVR, none of the DHH persons was able to complete the game successfully even once. Three of them did not want to play the game again after the first failure. The other two members tried it two times but were also completely disappointed because they failed both times. With EarVR, all DHH persons were able to complete the game on their first attempt.

A Wilcoxon test revealed a significant main effect of using EarVR on average completion time (Z = −2.023, p = 0.043), number of plays (Z = −2.070, p = 0.038), number of wins (Z = −2.121, p = 0.034), and number of game overs (Z = −2.070, p = 0.038).

Table 2. Pilot study results for DHH persons.

4.1.3 Discussion and Hypotheses

The average task completion time of DHH persons was 5.7 seconds with EarVR, and the average completion time of persons without hearing problems was 5.84 seconds without EarVR (Section 4.1.2). A Mann-Whitney U test suggested that these average completion times were close, without a significant difference (U = 10, p = 0.597). The behavioural state of DHH persons during the test indicated that they were much happier and more excited with EarVR than without.

Hypotheses: We formulated the following hypotheses based on the results from the pilot study (before the final experiment):

H1. EarVR helps DHH persons to locate sound sources in the VR environment in a time very close to persons without hearing problems.

H2. Using EarVR, DHH persons can complete sound-related VR tasks like persons without hearing problems.

H3. EarVR encourages DHH persons to use VR technology more.

H4. EarVR improves the task completion time for persons without hearing problems.

4.2 Main Study

For our main study, we recruited volunteers using the same recruitment method as in our pilot study. Some volunteers had no prior experience with VR, so we asked them to play a VR demo game for 10 minutes before our main study. They signed the consent form before playing the game and were therefore aware of potential VR health effects. We watched them as they played and were ready to stop them if they showed cybersickness symptoms [49]. After this introduction, the volunteers could choose whether or not to participate in our main study. Finally, we selected 40 volunteers, in two groups of 20, who experienced no side effects such as cybersickness when using VR.

Group 1 included 20 persons without hearing problems (14 men, 6 women; ages 15-50, X = 30.23). Group 2 included 20 DHH persons with no hearing in both ears (12 men, 8 women; ages 15-50, X = 31.85). In order to test our hypotheses, we divided each group of 20 participants into two groups of 10. Groups 1.1 and 1.2 (persons without hearing problems) each included 7 men and 3 women (group 1.1: ages 15-50, X = 31.6; group 1.2: ages 16-50, X = 32.1). Groups 2.1 and 2.2 (DHH persons) each included 6 men and 4 women (group 2.1: ages 15-50, X = 32.4; group 2.2: ages 17-50, X = 32.8). We asked the members of groups 1.1 and 2.1 to do the given tasks "without EarVR", and groups 1.2 and 2.2 "with EarVR". We wanted to analyze the effect of EarVR on all groups' members.

4.2.1 Task 1: Find the Cube

In Task 1, ten identical cubes are spawned one after another at random positions in the VR environment. Each cube produces a continuous 3D sound generated by the Resonance Audio SDK without a reverb effect (only spatialized). The intensity of the sound that the user hears changes based on the distance from the cube (for DHH users, this is the input sound to the Arduino). The user is placed in the center of the room (fixed position) and can only rotate to find the cube. The user must aim at each cube and push the trigger button on the VR controller to select it. A ray-cast effect (laser pointer) shows the aiming point to the user. When the user aims at the cube, the cube's color changes (Figure 8).

Fig. 8. Task 1: Find the cube.

The cube is eliminated after being selected, and another cube spawns at a random position. The process continues until all 10 cubes have been selected; finally, the user's task completion time is saved. There is no game-over mechanism in this task, and all users can complete the game in the end.
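The distance-dependent intensity can be approximated with a standard inverse-distance rolloff. This is a simplified model we use for illustration, not Resonance Audio's exact attenuation curve, and the names are ours:

```python
def perceived_intensity(source_gain, distance, min_distance=1.0):
    """Approximate 3D-audio distance attenuation: full gain inside
    `min_distance`, then intensity falls off as 1/d. This models the
    signal level the listener (or, for DHH users, the Arduino) receives
    as the cube spawns nearer or farther away."""
    return source_gain / max(distance, min_distance)
```

Under this model, a cube twice as far away sounds half as loud, which is what lets both hearing users and the EarVR hardware judge proximity while rotating toward the source.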

4.2.2 Task 2: Find the Correct Cubes

In Task 2, ten identical cubes are spawned simultaneously at random positions in the VR environment, separated by walls, as shown in Figure 9. Only five of these ten cubes generate continuous 3D sounds. The cubes are identical and carry no signs showing the user which ones are generating sounds, and all of them generate similar sounds. For this task, the user is placed in the center of the room and can move around using teleportation to search the areas behind the walls. The user can use the VR controller's trigger button to select the cubes and the touchpad button to teleport. We also designed the task so that two cubes (whether one or both of them generate sounds) never spawn side by side.

Fig. 9. Task 2: Find the correct cubes.

This task is completed (win state) only if the user finds the five correct cubes (those generating sounds) and selects them one by one. If the user selects a wrong cube (one not generating sound), the game is over (game-over state). Users can play the game between 1 and 10 times, depending on their desire. The time each user can spend in the game is limited to 5 minutes; if the user cannot finish the task in 5 minutes, the game is over. The number of "wins" and "game overs" for each user is recorded. The task completion time is not important for this specific game.
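The rules above can be sketched as a small outcome function (Python for illustration; the actual task was implemented in Unity3D, and these names are hypothetical):

```python
def task2_outcome(sound_cubes, picks, elapsed_s, limit_s=300):
    """Apply the Task 2 rules: selecting all five sound-emitting cubes
    wins, while selecting any silent cube or exceeding the 5-minute
    limit (300 s) ends the game."""
    if elapsed_s > limit_s:
        return "game over"
    found = set()
    for cube in picks:
        if cube not in sound_cubes:
            return "game over"
        found.add(cube)
    return "win" if found == set(sound_cubes) else "in progress"
```

Only the final outcome ("win" or "game over") is logged per play, since completion time is not analyzed for this task.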

4.2.3 Main Study Results

We started our analysis by looking at the task completion time, the number of wins and game overs, and the number of played games for each user. We applied a Mann-Whitney U test (α = 0.05) to the results of Task 1 and Task 2 for the with- and without-EarVR conditions (for each group). This data helped us investigate EarVR's effects on each group. It also showed us the desire to do VR tasks in the groups with and without EarVR.

Task 1 Results: Figure 10 shows the completion time of Task 1 for group 1 (persons without hearing problems) and group 2 (DHH persons). Members of group 1 were able to complete the task with an average completion time of 14.9 seconds without EarVR and 12.6 seconds with EarVR (2.3 seconds faster on average). Members of group 2 were able to complete the task with an average completion time of 29.6 seconds without EarVR and 14.7 seconds with EarVR (14.9 seconds faster on average).

Fig. 10. Task 1 completion time for Group 1 and Group 2.

A Mann-Whitney U test indicated a significant main effect of using EarVR on the completion time of DHH persons (U, p < 0.001) and also of persons without hearing problems (U = 15, p = 0.006).

Analysis of the results from Task 1 showed that DHH persons were able to complete the task much faster with EarVR compared to without EarVR. They completed the task in an average time of 14.7 seconds, which is very close to the average completion time of persons without hearing problems without EarVR (14.9 seconds). A Mann-Whitney U test suggested that these completion times were close, without a significant difference (U = 47.5, p = 0.846).

Task 2 Results: In Task 2, we focused on the number of plays, the number of wins, and the number of game overs for each user; the completion time was not important. Figure 11 shows the number of plays, wins, and game overs for group 1 (persons without hearing problems). Figure 11-a shows the results of group 1 without EarVR, and Figure 11-b shows the results with EarVR.

Fig. 11. Task 2 results for persons without hearing problems.

Page 6: EarVR: Using Ear Haptics in Virtual Reality forDeaf and ... · are many other cases, such as Emoti-Chair [34], Immersive Game Sys-tem [35], and LIVEJACKET [36], that showed the effects

MIRZAEI ET AL.: EARVR: USING EAR HAPTICS IN VIRTUAL REALITY F OR DEAF AND HARD-OF-HEARING PEOPLE 2089

Table 1. Pilot study results for persons without hearing problems.

DHH Persons: The results are shown in Table 2. The resultswere very different among DHH persons compared to persons withouthearing problems. Without EarVR, none of DHH persons was able tocomplete the game successfully even once. Three of them did not wantto play the game after the first failure. Two other members tried it twotimes but they were also completely disappointed in finishing the gamebecause they failed both times. With EarVR, all DHH persons wereable to complete the game at their first attempt.

A Wilcoxon test revealed a significant main effect of using EarVRon average completion time (Z =−2.023, p = 0.043), number of plays(Z = −2.070, p = 0.038), number of wins (Z = −2.121, p = 0.034),and number of game overs (Z =−2.070, p = 0.038).

Table 2. Pilot study results for DHH persons.

4.1.3 Discussion and Hypotheses

The average task completion time of DHH persons was 5.7 secondswith EarVR and the average completion time of persons without hearingproblems was 5.84 seconds without EarVR (from section 4.1.1). AMann-Whitney U test suggested that these average completion timeswere close without a significant change (U = 10, p = 0.597). Thebehavioural state of DHH persons during the test indicated that theywere much happier and more excited with EarVR than without.

Hypotheses: We formulated the following hypotheses based onthe results from the pilot study (before the final experiment):

H1. EarVR helps DHH persons to locate sound sources in the VRenvironment in a time very close to persons without hearingproblems.

H2. Using EarVR, DHH persons can complete sound-related VRtasks like persons without hearing problems.

H3. EarVR encourages DHH persons to use VR technology more.

H4. EarVR improves the task completion time for persons withouthearing problems.

4.2 Main Study

For our main study, we recruited volunteers with the same recruitmentmethod as in our pilot study. Some volunteers had no experiencein using VR before. So, we asked them to play a VR demo gamefor 10 minutes before our main study. They signed the consent formbefore playing the game. Therefore, they were aware of potentialVR health effects. We watched them as they played and we wereready to stop them if they showed cybersickness symptoms [49]. Afterthis introduction, the volunteers could choose whether they want toparticipate in our main study or not. Finally, we selected 40 volunteersin two groups of 20 who experienced no side effects when using VR,such as cybersickness.

Group 1 included 20 persons without hearing problems (14 men, 6women, ages 15-50, X = 30.23). Group 2 included 20 DHH personswith no hearing in both ears (12 men, 8 women; ages 15-50, X = 31.85).In order to test our hypotheses, we divided each group of 20 participantsinto two groups of 10. Groups 1.1 and 1.2 (persons without hearingproblems) included 7 men and 3 women (group 1.1: ages 15-50, X =31.6, and group 1.2: ages 16-50, X = 32.1). Groups 2.1 and 2.2 (DHHpersons) included 6 men and 4 women (group 2.1: ages 15-50, X =32.4, and group 2.2: ages 17-50, X = 32.8). We asked the members ofgroups 1.1 and 2.1 to do the given tasks “Without EarVR”, and groups1.2 and 2.2 “With EarVR”. We wanted to analyze the effect of EarVRon all groups’ members.

4.2.1 Task 1: Find the Cube

In Task 1, ten identical cubes are spawned one after another, in ran-dom positions of the VR environment. Each cube produces continuous3D sounds generated by Resonance Audio SDK without a reverb ef-fect (only spatialized). The intensity of the sound that the user hears,changes based on the distance from the cube (for DHH users, it isthe input sound to the Arduino). The user is placed in the center ofthe room (fixed position) and can only rotate around to find the cube.The user must aim at each cube and push the trigger button on the VRcontroller to select the cube. A ray cast effect (laser pointer) is designedto show the aiming point to the user. If the user aims at the cube, thecube’s color changes (Figure 8).

Fig. 8. Task 1: Find the cube.

The cube will be eliminated after getting selected, and another cubewill spawn in a random position. The process will continue until all10 cubes are selected. Finally, the user’s task completion time will besaved (for each user). There is no game over mechanism in this taskand all users can complete the game at the end.

4.2.2 Task 2: Find the Correct Cubes

In Task 2, ten identical cubes are spawned simultaneously in randompositions of the VR environment separated by walls, as shown in Figure9. Only five of these ten cubes will generate continuous 3D sounds.The cubes are identical and have no signs to show the user which oneis generating sounds and also all of them generate similar sounds. Forthis task, the user is placed in the center of the room and can movearound using teleportation to search the areas behind the walls. Theuser can use the VR controller’s trigger button to select the cubes andthe touchpad button to teleport around. We also designed the task sothat two cubes (either one or both of them generating sounds) wouldnever spawn side by side.

Fig. 9. Task 2: Find the correct cubes.

This task is completed (win state) only if the user finds the correctfive cubes (generating sounds) and select them one by one. If the userselect a wrong cube (not generating sound), the game will be over(game over state). Users can play the game between 1 to 10 timesdepending on their desire. The time that each user can spend in thegame is limited to 5 minutes. If the user cannot finish the task in 5minutes, the game will be over. The number of ”Win” and ”Game Over”for each user is recorded. The task completion time is not importantfor this specific game.

4.2.3 Main Study Results

We started our analysis by looking at the task completion time, numberof wins and game overs, and the number of played games for each user.We applied a Mann-Whitney U test (using an α = 0.05) on the resultsof Task 1 and Task 2 for with and without EarVR conditions (for eachgroup). This data helped us to investigate the EarVR’s effects on eachgroup. It also showed us the desire of doing VR tasks among groupswith and without using EarVR.

Task 1 Results: Figure 10 shows the completion time of Task 1 for group 1 (persons without hearing problems) and group 2 (DHH persons). Members of group 1 were able to complete the task with an average completion time of 14.9 seconds without EarVR and 12.6 seconds with EarVR (2.3 seconds faster on average). Members of group 2 were able to complete the task with an average completion time of 29.6 seconds without EarVR and 14.7 seconds with EarVR (14.9 seconds faster on average).

Fig. 10. Task 1 completion time for Group 1 and Group 2.

A Mann-Whitney U test indicated a significant main effect of using EarVR on the completion time of DHH persons (U, p < 0.001) and also of persons without hearing problems (U = 15, p = 0.006).

Analysis of the results from Task 1 showed us that DHH persons were able to complete the task much faster “With EarVR” compared to “Without EarVR”. They completed the task in an average time of 14.7 seconds, which is very close to the average completion time of persons without hearing problems “Without EarVR” (14.9 seconds). A Mann-Whitney U test suggested that these completion times did not differ significantly (U = 47.5, p = 0.846).

Task 2 Results: In Task 2, we focused on the number of plays, the number of wins, and the number of game overs for each user; the completion time was not important. Figure 11 shows the number of plays, wins, and game overs for group 1 (persons without hearing problems). Figure 11-a shows the results of group 1 “Without EarVR” and Figure 11-b shows the results “With EarVR”.

Fig. 11. Task 2 results for persons without hearing problems.


2090 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 26, NO. 5, MAY 2020

A Mann-Whitney U test indicated a statistically significant change in the number of wins (U = 10, p = 0.002), but no statistically significant change in the number of plays (U = 25, p = 0.053) or the number of game overs (U = 32, p = 0.126). As shown in Figure 11, the total number of plays among members of group 1 increased and the number of game overs decreased, but these changes are not significant. Each member of group 1 completed the task at least once (at least one win for each member). The total number of wins increased from 22 to 39, which is significant.

Figure 12 shows the number of wins and game overs for group 2 (DHH persons). Figure 12-a shows the results of group 2 “Without EarVR”, and Figure 12-b shows the results “With EarVR”. As shown in Figure 12-a, nobody in this group “Without EarVR” could complete (finish) the task even after several tries, whereas everyone “With EarVR” completed the task (win) at least three times (Figure 12-b).

Fig. 12. Task 2 results for DHH persons.

Using a Mann-Whitney U test, we found a statistically significant change in the number of plays (U, p < 0.001), the number of wins (U, p < 0.001), and the number of game overs (U = 17.5, p = 0.005).

The total number of plays, wins, and game overs among this group increased significantly, which demonstrates that DHH persons have much more desire to play the VR game “With EarVR” and that EarVR helped them greatly to complete the task. DHH persons “Without EarVR” became discouraged about completing the task after one or a few game overs. Some of them requested to play the game again after their first or second game over, but they stopped trying when their efforts came to nothing.

5 SYSTEM FUNCTIONALITY TEST

To support our claims in real VR games, we designed a functionality test for EarVR to study how efficient it is among DHH persons. We explored numerous free VR games available on the market; however, none of them was suitable for our functionality test (with 3D audio and a mute option for background music). We also found some demo versions of VR games, but they were not challenging for the players and not good enough for this test. Many available VR games do not mention the use of 3D audio in their descriptions.

Therefore, we decided to develop a VR game that supports EarVR's requirements, such as being able to mute the background music and using 3D audio. We used an open-source non-VR game available in the Unity Asset Store called “Survival Shooter” [50] and changed it into a VR game (Figure 13).

This game is score-based, and players get scored by eliminating enemies. We designed a VR Heads-Up Display (HUD) to show the player's health and score. In addition, the game becomes harder at higher levels. We asked all 20 DHH participants from the previous tests to join in and play this game.

Fig. 13. Our Shooting VR game environment.

We divided the DHH participants into two groups of 10 to create a sense of competition between them. Users in both groups played the game once and their scores were recorded. The user with the highest score in each group was chosen to play the final round to determine the winner. We rewarded the final winner with a present at the end.

The passion and excitement among all DHH participants was very pleasant for us to observe. At the end, we asked all 20 DHH participants to fill in an anonymous survey about their experience of playing the VR game. Our survey questionnaire consisted of questions with a 5-point Likert scale (1 = most negative, 5 = most positive) and an open-ended question for participants' comments about their experience with EarVR. We designed it to study the ease-of-use, satisfaction, effectiveness, and desire-to-use of EarVR among DHH persons. We analyzed the open-ended comments by using the participants' own words in the MAXQDA trial version [51] through open coding on two main categories of usability (effectiveness, ease-of-use) and user experience (satisfaction, desire-to-use), without imposing our own beliefs or biases [52].

5.1 Functionality Test Results

Figure 14 shows the survey results from the 20 DHH persons who participated in the functionality test. The results without EarVR are based on the participants' experience from the previous tasks. Comparisons of the with- and without-EarVR conditions using a Wilcoxon test indicated that using EarVR was rated significantly better in terms of ease-of-use (Z = −3.671, p < 0.001), satisfaction (Z = −3.999, p < 0.001), effectiveness (Z = −3.981, p < 0.001), and desire-to-use (Z = −4.005, p < 0.001).

Fig. 14. Functionality test results.
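The Wilcoxon signed-rank Z statistics reported for these paired Likert ratings can be sketched with the standard-library implementation below. It uses the plain normal approximation without tie or continuity corrections, which the authors' analysis tool may apply, so exact Z values can differ slightly; the sample ratings in the comment are invented for illustration.

```python
import math

def wilcoxon_z(x, y):
    """Wilcoxon signed-rank test for paired samples x, y.
    Returns (W, z): W is the smaller signed-rank sum and z its
    normal approximation (negative when x tends to exceed y)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    absd = [abs(d) for d in diffs]
    # midranks of |d|: tied magnitudes share their average rank
    order = sorted(range(len(absd)), key=lambda i: absd[i])
    ranks = [0.0] * len(absd)
    i = 0
    while i < len(absd):
        j = i
        while j + 1 < len(absd) and absd[order[j + 1]] == absd[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    w = min(w_pos, w_neg)
    n = len(diffs)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return w, (w - mean) / sd

# e.g. wilcoxon_z(with_earvr_ratings, without_earvr_ratings) on the
# paired 5-point Likert scores (hypothetical data, not the study's)
```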

The results demonstrate that DHH persons find EarVR very helpful for using VR technology easily (Figure 14-a). They rated the effectiveness of EarVR very high; they believed EarVR can help them complete tasks in different sound-based VR applications (Figure 14-c). The satisfaction rate was very high, because they believed EarVR made VR applications more enjoyable for them (Figure 14-b). Finally, the desire-to-use of EarVR was also very high among DHH persons. They experienced a feeling very close to that of persons without hearing problems, and they wanted to repeat that experience (Figure 14-d). A part of our qualitative analysis is shown in Table 3.

Table 3. Open-coding our qualitative data.

Code            Preference Reason                     Freq. (%)

Desire to use   Repeat experience using EarVR         58
Effectiveness   EarVR helps to locate sound sources   46
Satisfaction    Enjoy using EarVR                     34
Ease of use     EarVR is portable and ready to use    27

6 DISCUSSION AND FUTURE WORK

The results of Task 1 show that DHH persons can complete the task significantly faster when using EarVR. It helped DHH persons to locate sound sources in the VR environment. Their average task completion time using EarVR was very close to the average task completion time of persons without hearing problems who were not using EarVR. The results from persons without hearing problems indicate that using EarVR helped them to complete the task faster as well.

The results from Task 2 provide evidence that DHH persons can complete sound-related VR tasks using EarVR, which was not possible for them without it. The qualitative and quantitative results from our functionality test reveal that DHH persons are very eager to use, and enjoy, VR applications more when using EarVR. These positive results, in addition to a significant increase in the number of plays of Task 2 among DHH persons, show that EarVR encouraged users to use VR technology more. According to the final results of Task 1 and Task 2, all of our hypotheses are supported.

Task 2 was designed to study whether EarVR helps DHH persons to complete certain VR tasks that they were not able to complete otherwise. In this task, two identical cubes sometimes spawned near each other (either one or both of them generating sounds), which made it difficult even for persons without hearing problems to complete the task. We therefore added a condition during development to prevent identical cubes from spawning near each other. When situations like this occur in sound-related VR applications, either the objects are different or they generate different sounds, both of which are distinguishable.

Based on the results of our functionality test, we noticed that some DHH persons have no interest in VR without an assistive technology such as EarVR. This depends on an individual's association with the hearing or the Deaf culture and, in the end, it is a personal choice [53]. The results indicate that DHH persons experience a very different feeling when using EarVR with VR HMDs.

We also designed a friendly competition between DHH persons and persons without hearing problems, using the same VR game as in our functionality test. We wanted to see whether DHH persons have the same level of excitement when competing with persons without hearing problems. In this competition, the DHH participants used EarVR and the persons without hearing problems did not. Both groups were very excited and enjoyed playing the VR game. We hypothesize that EarVR can eliminate the fear of losing the game and improve the level of confidence among DHH persons. Our assessment is based only on observation of the participants during this competition; further studies on psychological effects would be required to substantiate these claims.

EarVR has two main requirements on VR applications: the application must provide the option to mute background music, and it must offer 3D audio. The use of 3D audio in VR is growing rapidly, and we hope to see more VR applications with 3D audio in the future. The complexity of current VR games on the market is higher than that of our tasks. Although EarVR is able to help DHH persons even in complex VR tasks, there are more parameters in VR environments that we should investigate in our future studies.

We discovered that using more than two vibro-motors and installing them on different parts of the user's body does not have a noticeable effect on locating audio sources. We performed a test to find how many vibro-motors are needed to locate sound sources in the six main directions (left, right, up, down, front, back). Observing how precisely DHH persons can detect sound sources using just two vibro-motors was very interesting. In addition, we conducted a survey among the DHH community to find the best locations to place the vibro-motors. We found that DHH persons prefer their upper body for vibro-motor placement, especially the head. Therefore, after final reviews, we decided to use two vibro-motors on the user's ears in our prototype design.

The VR environment becomes more interesting for all users through the use of 3D audio. As future work, we consider developing a new version of EarVR that is capable of analyzing different sounds in the VR environment based on their level of importance and that also combines sound visualization techniques with haptic feedback. By expanding the hardware, we are going to analyze the effect of controlling the speed of the vibro-motors with PWM on conveying the distance between the user and the object that is generating sounds.
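The planned PWM extension — modulating vibro-motor strength by the distance of the sounding object — could map distance to a duty cycle along these lines. The linear mapping and the distance bounds are hypothetical; on the actual hardware the resulting 0–255 value would be written with Arduino's `analogWrite`.

```python
def pwm_duty_for_distance(distance, d_min=0.25, d_max=10.0):
    """Map distance (metres, assumed range) to an 8-bit PWM duty cycle:
    full vibration at or below d_min, off at or beyond d_max."""
    d = max(d_min, min(distance, d_max))
    strength = 1.0 - (d - d_min) / (d_max - d_min)  # 1 at d_min, 0 at d_max
    return round(255 * strength)
```

A logarithmic mapping may fit perceived loudness better than this linear one; which curve users can discriminate best is exactly the open question for the future study.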

7 CONCLUSION

In this paper, we introduced and evaluated EarVR, a mountable device for different VR HMDs. It acts as an assistant for DHH persons by locating sound sources in the VR environment. It analyzes the input 3D sounds from a VR environment to locate the direction of the sound source closest to the user and then conveys that direction to the user through two vibro-motors placed on the user's ears. EarVR helps DHH persons to complete certain VR tasks that they were not able to finish before. It also improves DHH persons' experience when running sound-related VR applications. By using EarVR, DHH persons get a VR experience closer to the experience of persons without hearing problems.


The results from our tests suggest that EarVR helps DHH persons to complete sound-related VR tasks and also encourages them to use and enjoy VR technology more than before. Further studies are required to determine how EarVR might be used in the DHH community. We hope to inspire VR developers to create more VR applications that are compatible with EarVR, and to encourage hardware developers to use this system in their future VR HMD designs, so that both DHH persons and persons without hearing problems can enjoy the benefits of EarVR. We hope that all DHH persons can use and enjoy VR applications without any limitations.

ACKNOWLEDGMENTS

The authors wish to thank all participants who volunteered for our tests. We especially thank the DHH community for their helpful comments and for showing us that deafness is NOT a disability.

REFERENCES

[1] Qualcomm, “Driving the new era of immersive experiences.” Qualcomm Technologies Inc. White Paper, October 2015.

[2] W.-P. Brinkman, A. R. Hoekstra, and R. van Egmond, “The effect of 3D audio and other audio techniques on virtual reality experience,” Annual Review of Cybertherapy and Telemedicine 2015, p. 44, 2015.

[3] C. H. Lee, “Location-aware speakers for the virtual reality environments,” IEEE Access, vol. 5, pp. 2636–2640, 2017.

[4] E. R. Hoeg, L. J. Gerry, L. Thomsen, N. C. Nilsson, and S. Serafin, “Binaural sound reduces reaction time in a virtual reality search task,” in 2017 IEEE 3rd VR Workshop on Sonic Interactions for Virtual Environments (SIVE), pp. 1–4, IEEE, 2017.

[5] S. Yong and H.-C. Wang, “Using spatialized audio to improve human spatial knowledge acquisition in virtual reality,” in Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion, p. 51, ACM, 2018.

[6] F. Ruotolo, L. Maffei, M. Di Gabriele, T. Iachini, M. Masullo, G. Ruggiero, and V. P. Senese, “Immersive virtual reality and environmental noise assessment: An innovative audio–visual approach,” Environmental Impact Assessment Review, vol. 41, pp. 10–20, 2013.

[7] T. Walton, “The overall listening experience of binaural audio,” in Proceedings of the 4th International Conference on Spatial Audio (ICSA), Graz, Austria, 2017.

[8] M. Teofilo, A. Lourenco, J. Postal, and V. F. Lucena, “Exploring virtual reality to enable deaf or hard of hearing accessibility in live theaters: A case study,” in International Conference on Universal Access in Human-Computer Interaction, pp. 132–148, Springer, 2018.

[9] X. Luo, M. Han, T. Liu, W. Chen, and F. Bai, “Assistive learning for hearing impaired college students using mixed reality: A pilot study,” in 2012 International Conference on Virtual Reality and Visualization, pp. 74–81, Sep. 2012.

[10] World Federation of the Deaf, “70 million deaf people, 300+ sign languages, unlimited potential,” 2017.

[11] N. Adamo-Villani, “A virtual learning environment for deaf children: design and evaluation,” International Journal of Human and Social Sciences, vol. 2, no. 2, pp. 123–128, 2007.

[12] P. Paudyal, A. Banerjee, Y. Hu, and S. Gupta, “Davee: A deaf accessible virtual environment for education,” in Proceedings of the 2019 on Creativity and Cognition, pp. 522–526, ACM, 2019.

[13] N. R. Council et al., Hearing Loss: Determining Eligibility for Social Security Benefits. National Academies Press, 2004.

[14] R. Palmer, O. Skille, R. Lahtinen, and S. Ojala, “Feeling vibrations from a hearing and dual-sensory impaired perspective,” Music and Medicine, vol. 9, no. 3, pp. 178–183, 2017.

[15] H. Gil, H. Son, J. R. Kim, and I. Oakley, “Whiskers: Exploring the use of ultrasonic haptic cues on the face,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, (New York, NY, USA), pp. 658:1–658:13, ACM, 2018.

[16] F. Bevilacqua, N. Schnell, N. H. Rasamimanana, B. Zamborlin, and F. Guedy, “Online gesture analysis and control of audio processing,” 2011.

[17] E. Chew and A. R. Francois, “MuSA.RT: Music on the spiral array. Real-time,” in Proceedings of the Eleventh ACM International Conference on Multimedia, pp. 448–449, ACM, 2003.

[18] D. Marino, H. Elbaggari, T. Chu, B. Gick, and K. MacLean, “Single-channel vibrotactile feedback for voicing enhancement in trained and untrained perceivers,” The Journal of the Acoustical Society of America, vol. 144, p. 1799, 2018.

[19] G. W. Young, D. Murphy, and J. Weeter, “Haptics in music: the effects of vibrotactile stimulus in low frequency auditory difference detection tasks,” IEEE Transactions on Haptics, vol. 10, no. 1, pp. 135–139, 2016.

[20] M. Karam, G. Nespoli, F. Russo, and D. I. Fels, “Modelling perceptual elements of music in a vibrotactile display for deaf users: A field study,” in 2009 Second International Conferences on Advances in Computer-Human Interactions, pp. 249–254, IEEE, 2009.

[21] D. Passig and S. Eden, “Enhancing the induction skill of deaf and hard-of-hearing children with virtual reality technology,” Journal of Deaf Studies and Deaf Education, vol. 5, no. 3, pp. 277–285, 2000.

[22] M. Lee, S. Je, W. Lee, D. Ashbrook, and A. Bianchi, “ActivEarring: Spatiotemporal haptic cues on the ears,” IEEE Transactions on Haptics, vol. 12, pp. 554–562, 2019.

[23] D.-Y. Huang, T. Seyed, L. Li, J. Gong, Z. Yao, Y. Jiao, X. A. Chen, and X.-D. Yang, “Orecchio: Extending body-language through actuated static and dynamic auricular postures,” in Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, UIST ’18, (New York, NY, USA), pp. 697–710, ACM, 2018.

[24] Y. Kojima, Y. Hashimoto, S. Fukushima, and H. Kajimoto, “Pull-Navi: A novel tactile navigation interface by pulling the ears,” in ACM SIGGRAPH 2009 Emerging Technologies, SIGGRAPH ’09, (New York, NY, USA), pp. 19:1–19:1, ACM, 2009.

[25] F. Wolf and R. Kuber, “Developing a head-mounted tactile prototype to support situational awareness,” Int. J. Hum.-Comput. Stud., vol. 109, pp. 54–67, Jan. 2018.

[26] V. A. de Jesus Oliveira, L. Brayda, L. Nedel, and A. Maciel, “Designing a vibrotactile head-mounted display for spatial awareness in 3D spaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, pp. 1409–1417, Apr. 2017.

[27] D. Jain, L. Findlater, J. Gilkeson, B. Holland, R. Duraiswami, D. Zotkin, C. Vogler, and J. E. Froehlich, “Head-mounted display visualizations to support sound awareness for the deaf and hard of hearing,” in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, (New York, NY, USA), pp. 241–250, ACM, 2015.

[28] F. W.-l. Ho-Ching, J. Mankoff, and J. A. Landay, “Can you see what I hear?: The design and evaluation of a peripheral sound display for the deaf,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’03, (New York, NY, USA), pp. 161–168, ACM, 2003.

[29] L. Findlater, B. Chinh, D. Jain, J. Froehlich, R. Kushalnagar, and A. C. Lin, “Deaf and hard-of-hearing individuals’ preferences for wearable and mobile sound awareness technologies,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, (New York, NY, USA), pp. 46:1–46:13, ACM, 2019.

[30] L. Sicong, Z. Zimu, D. Junzhao, S. Longfei, J. Han, and X. Wang, “UbiEar: Bringing location-independent sound awareness to the hard-of-hearing people with smartphones,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 1, pp. 17:1–17:21, June 2017.

[31] M. Shibasaki, Y. Kamiyama, and K. Minamizawa, “Designing a haptic feedback system for hearing-impaired to experience tap dance,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 97–99, ACM, 2016.

[32] B. Petry, T. Illandara, and S. Nanayakkara, “MuSS-Bits: sensor-display blocks for deaf people to explore musical sounds,” in Proceedings of the 28th Australian Conference on Computer-Human Interaction, pp. 72–80, ACM, 2016.

[33] D. Guzman, G. Brito, J. E. Naranjo, C. A. Garcia, L. F. Saltos, and M. V. Garcia, “Virtual assistance environment for deaf people based on an electronic gauntlet,” in 2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM), pp. 1–6, IEEE, 2018.

[34] A. Baijal, J. Kim, C. Branje, F. Russo, and D. I. Fels, “Composing vibrotactile music: A multi-sensory experience with the Emoti-Chair,” in 2012 IEEE Haptics Symposium (HAPTICS), pp. 509–515, IEEE, 2012.

[35] D.-H. Kim and S.-Y. Kim, “Immersive game with vibrotactile and thermal feedback,” in 5th International Conference on Computer Sciences and Convergence Information Technology, pp. 903–906, IEEE, 2010.

[36] S. Hashizume, S. Sakamoto, K. Suzuki, and Y. Ochiai, “LIVEJACKET: Wearable music experience device with multiple speakers,” in International Conference on Distributed, Ambient, and Pervasive Interactions, pp. 359–371, Springer, 2018.

[37] TeslaSuit, “Tesla VR Suit.” https://teslasuit.io/, 2019. [Online; last visited: 02-August-2019].

[38] bHaptics, “TactSuit.” https://www.bhaptics.com/, 2019. [Online; last visited: 02-August-2019].

[39] HardLight, “HardLight VR Suit.” http://www.hardlightvr.com/, 2019. [Online; last visited: 02-August-2019].

[40] Arduino, “Arduino Software IDE.” https://www.arduino.cc/en/main/software, 2019. [Online; last visited: 02-August-2019].

[41] M. A. Wickert, “Real-time DSP basics using Arduino and the Analog Shield SAR codec board,” in 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 59–64, Aug 2015.

[42] Stanford, “DSP Shield.” https://web.stanford.edu/group/kovacslab/cgi-bin/index.php?page=dsp-shield, 2019. [Online; last visited: 02-August-2019].

[43] A. J. Bianchi and M. Queiroz, “Real time digital audio processing using Arduino,” 2013.

[44] J. B. F. van Erp, “Guidelines for the use of vibro-tactile displays in human computer interaction,” in Proceedings of EuroHaptics, pp. 18–22, 2002.

[45] K. Myles and J. T. Kalb, “Guidelines for head tactile communication,” (No. ARL-TR-5116). Army Research Lab, Aberdeen Proving Ground, MD, Human Research and Engineering Directorate, 2010.

[46] Unity, “Unity3D Game Engine.” https://unity.com/, 2019. [Online; last visited: 02-August-2019].

[47] Google, “Resonance Audio SDK.” https://resonance-audio.github.io/resonance-audio/, 2019. [Online; last visited: 02-August-2019].

[48] Valve, “SteamVR Plugin.” https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647, 2019. [Online; last visited: 02-August-2019].

[49] A. K. Ng, L. K. Chan, and H. Y. Lau, “A study of cybersickness andsensory conflict theory using a motion-coupled virtual reality system,” in2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR),pp. 643–644, IEEE, 2018.

[50] Unity, “Survival Shooter Tutorial.” https://assetstore.

unity.com/packages/essentials/tutorial-projects/

survival-shooter-tutorial-legacy-40756, 2019. [Online;last visited: 02-August-2019].

[51] MAXQDA, “Professional Software for Qualitative and Mixed MethodsResearch.” https://www.maxqda.com/, 2019. [Online; last visited:02-August-2019].

[52] S. Lewis, “Qualitative inquiry and research design: Choosing among fiveapproaches,” Health promotion practice, vol. 16, no. 4, pp. 473–475, 2015.

[53] A.-A. Darrow, “The role of music in deaf culture: Implications for musiceducators,” Journal of Research in Music Education, vol. 41, no. 2, pp. 93–110, 1993.

Page 10: EarVR: Using Ear Haptics in Virtual Reality forDeaf and ... · are many other cases, such as Emoti-Chair [34], Immersive Game Sys-tem [35], and LIVEJACKET [36], that showed the effects

MIRZAEI ET AL.: EARVR: USING EAR HAPTICS IN VIRTUAL REALITY FOR DEAF AND HARD-OF-HEARING PEOPLE 2093

The results from our tests suggest that EarVR helps DHH persons to complete sound-related VR tasks and encourages them to use and enjoy VR technology more than before. Further studies are required to determine how EarVR might be used in the DHH community. We hope to inspire VR developers to create more VR applications that are compatible with EarVR, and to encourage hardware developers to integrate this system into their future VR HMD designs so that both DHH persons and persons without hearing problems can enjoy the benefits of EarVR. We hope that all DHH persons can use and enjoy VR applications without any limitations.

ACKNOWLEDGMENTS

The authors wish to thank all participants who volunteered for our tests. We especially thank the DHH community for their helpful comments and for showing us that deafness is NOT a disability.

REFERENCES

[1] Qualcomm, “Driving the new era of immersive experiences.” Qualcomm Technologies Inc. White Paper, October 2015.

[2] W.-P. Brinkman, A. R. Hoekstra, and R. van Egmond, “The effect of 3d audio and other audio techniques on virtual reality experience,” Annual Review of Cybertherapy and Telemedicine 2015, p. 44, 2015.

[3] C. H. Lee, “Location-aware speakers for the virtual reality environments,” IEEE Access, vol. 5, pp. 2636–2640, 2017.

[4] E. R. Hoeg, L. J. Gerry, L. Thomsen, N. C. Nilsson, and S. Serafin, “Binaural sound reduces reaction time in a virtual reality search task,” in 2017 IEEE 3rd VR Workshop on Sonic Interactions for Virtual Environments (SIVE), pp. 1–4, IEEE, 2017.

[5] S. Yong and H.-C. Wang, “Using spatialized audio to improve human spatial knowledge acquisition in virtual reality,” in Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion, p. 51, ACM, 2018.

[6] F. Ruotolo, L. Maffei, M. Di Gabriele, T. Iachini, M. Masullo, G. Ruggiero, and V. P. Senese, “Immersive virtual reality and environmental noise assessment: An innovative audio–visual approach,” Environmental Impact Assessment Review, vol. 41, pp. 10–20, 2013.

[7] T. Walton, “The overall listening experience of binaural audio,” in Proceedings of the 4th International Conference on Spatial Audio (ICSA), Graz, Austria, 2017.

[8] M. Teofilo, A. Lourenco, J. Postal, and V. F. Lucena, “Exploring virtual reality to enable deaf or hard of hearing accessibility in live theaters: A case study,” in International Conference on Universal Access in Human-Computer Interaction, pp. 132–148, Springer, 2018.

[9] X. Luo, M. Han, T. Liu, W. Chen, and F. Bai, “Assistive learning for hearing impaired college students using mixed reality: A pilot study,” in 2012 International Conference on Virtual Reality and Visualization, pp. 74–81, Sep. 2012.

[10] World Federation of the Deaf, “70 million deaf people, 300+ sign languages, unlimited potential,” 2017.

[11] N. Adamo-Villani, “A virtual learning environment for deaf children: design and evaluation,” International Journal of Human and Social Sciences, vol. 2, no. 2, pp. 123–128, 2007.

[12] P. Paudyal, A. Banerjee, Y. Hu, and S. Gupta, “Davee: A deaf accessible virtual environment for education,” in Proceedings of the 2019 on Creativity and Cognition, pp. 522–526, ACM, 2019.

[13] N. R. Council et al., Hearing loss: Determining eligibility for social security benefits. National Academies Press, 2004.

[14] R. Palmer, O. Skille, R. Lahtinen, and S. Ojala, “Feeling vibrations from a hearing and dual-sensory impaired perspective,” Music and Medicine, vol. 9, no. 3, pp. 178–183, 2017.

[15] H. Gil, H. Son, J. R. Kim, and I. Oakley, “Whiskers: Exploring the use of ultrasonic haptic cues on the face,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, (New York, NY, USA), pp. 658:1–658:13, ACM, 2018.

[16] F. Bevilacqua, N. Schnell, N. H. Rasamimanana, B. Zamborlin, and F. Guedy, “Online gesture analysis and control of audio processing,” 2011.

[17] E. Chew and A. R. Francois, “Musa.rt: music on the spiral array. real-time,” in Proceedings of the Eleventh ACM International Conference on Multimedia, pp. 448–449, ACM, 2003.

[18] D. Marino, H. Elbaggari, T. Chu, B. Gick, and K. MacLean, “Single-channel vibrotactile feedback for voicing enhancement in trained and untrained perceivers,” The Journal of the Acoustical Society of America, vol. 144, p. 1799, 2018.

[19] G. W. Young, D. Murphy, and J. Weeter, “Haptics in music: the effects of vibrotactile stimulus in low frequency auditory difference detection tasks,” IEEE Transactions on Haptics, vol. 10, no. 1, pp. 135–139, 2016.

[20] M. Karam, G. Nespoli, F. Russo, and D. I. Fels, “Modelling perceptual elements of music in a vibrotactile display for deaf users: A field study,” in 2009 Second International Conferences on Advances in Computer-Human Interactions, pp. 249–254, IEEE, 2009.

[21] D. Passig and S. Eden, “Enhancing the induction skill of deaf and hard-of-hearing children with virtual reality technology,” Journal of Deaf Studies and Deaf Education, vol. 5, no. 3, pp. 277–285, 2000.

[22] M. Lee, S. Je, W. Lee, D. Ashbrook, and A. Bianchi, “ActivEarring: Spatiotemporal haptic cues on the ears,” IEEE Transactions on Haptics, vol. 12, pp. 554–562, 2019.

[23] D.-Y. Huang, T. Seyed, L. Li, J. Gong, Z. Yao, Y. Jiao, X. A. Chen, and X.-D. Yang, “Orecchio: Extending body-language through actuated static and dynamic auricular postures,” in Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, UIST ’18, (New York, NY, USA), pp. 697–710, ACM, 2018.

[24] Y. Kojima, Y. Hashimoto, S. Fukushima, and H. Kajimoto, “Pull-navi: A novel tactile navigation interface by pulling the ears,” in ACM SIGGRAPH 2009 Emerging Technologies, SIGGRAPH ’09, (New York, NY, USA), pp. 19:1–19:1, ACM, 2009.

[25] F. Wolf and R. Kuber, “Developing a head-mounted tactile prototype to support situational awareness,” Int. J. Hum.-Comput. Stud., vol. 109, pp. 54–67, Jan. 2018.

[26] V. A. de Jesus Oliveira, L. Brayda, L. Nedel, and A. Maciel, “Designing a vibrotactile head-mounted display for spatial awareness in 3d spaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, pp. 1409–1417, Apr. 2017.

[27] D. Jain, L. Findlater, J. Gilkeson, B. Holland, R. Duraiswami, D. Zotkin, C. Vogler, and J. E. Froehlich, “Head-mounted display visualizations to support sound awareness for the deaf and hard of hearing,” in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, (New York, NY, USA), pp. 241–250, ACM, 2015.

[28] F. W.-l. Ho-Ching, J. Mankoff, and J. A. Landay, “Can you see what I hear?: The design and evaluation of a peripheral sound display for the deaf,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’03, (New York, NY, USA), pp. 161–168, ACM, 2003.

[29] L. Findlater, B. Chinh, D. Jain, J. Froehlich, R. Kushalnagar, and A. C. Lin, “Deaf and hard-of-hearing individuals’ preferences for wearable and mobile sound awareness technologies,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, (New York, NY, USA), pp. 46:1–46:13, ACM, 2019.

[30] L. Sicong, Z. Zimu, D. Junzhao, S. Longfei, J. Han, and X. Wang, “Ubiear: Bringing location-independent sound awareness to the hard-of-hearing people with smartphones,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 1, pp. 17:1–17:21, June 2017.

[31] M. Shibasaki, Y. Kamiyama, and K. Minamizawa, “Designing a haptic feedback system for hearing-impaired to experience tap dance,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 97–99, ACM, 2016.

[32] B. Petry, T. Illandara, and S. Nanayakkara, “Muss-bits: sensor-display blocks for deaf people to explore musical sounds,” in Proceedings of the 28th Australian Conference on Computer-Human Interaction, pp. 72–80, ACM, 2016.

[33] D. Guzman, G. Brito, J. E. Naranjo, C. A. Garcia, L. F. Saltos, and M. V. Garcia, “Virtual assistance environment for deaf people based on an electronic gauntlet,” in 2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM), pp. 1–6, IEEE, 2018.

[34] A. Baijal, J. Kim, C. Branje, F. Russo, and D. I. Fels, “Composing vibrotactile music: A multi-sensory experience with the emoti-chair,” in 2012 IEEE Haptics Symposium (HAPTICS), pp. 509–515, IEEE, 2012.

[35] D.-H. Kim and S.-Y. Kim, “Immersive game with vibrotactile and thermal feedback,” in 5th International Conference on Computer Sciences and Convergence Information Technology, pp. 903–906, IEEE, 2010.

[36] S. Hashizume, S. Sakamoto, K. Suzuki, and Y. Ochiai, “Livejacket: Wearable music experience device with multiple speakers,” in International Conference on Distributed, Ambient, and Pervasive Interactions, pp. 359–371, Springer, 2018.

[37] TeslaSuit, “Tesla VR Suit.” https://teslasuit.io/, 2019. [Online; last visited: 02-August-2019].

[38] bHaptic, “TactSuit.” https://www.bhaptics.com/, 2019. [Online; last visited: 02-August-2019].

[39] HardLight, “HardLight VR Suit.” http://www.hardlightvr.com/, 2019. [Online; last visited: 02-August-2019].

[40] Arduino, “Arduino Software IDE.” https://www.arduino.cc/en/main/software, 2019. [Online; last visited: 02-August-2019].

[41] M. A. Wickert, “Real-time dsp basics using arduino and the analog shield sar codec board,” in 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 59–64, Aug 2015.

[42] Stanford, “DSP Shield.” https://web.stanford.edu/group/kovacslab/cgi-bin/index.php?page=dsp-shield, 2019. [Online; last visited: 02-August-2019].

[43] A. J. Bianchi and M. Queiroz, “Real time digital audio processing using arduino,” 2013.

[44] J. B. F. van Erp, “Guidelines for the use of vibro-tactile displays in human computer interaction,” in Proceedings of EuroHaptics, pp. 18–22, 2002.

[45] K. Myles and J. T. Kalb, “Guidelines for head tactile communication,” (No. ARL-TR-5116). Army Research Lab, Aberdeen Proving Ground MD, Human Research and Engineering Directorate, 2010.

[46] Unity, “Unity3D Game Engine.” https://unity.com/, 2019. [Online; last visited: 02-August-2019].

[47] Google, “Resonance Audio SDK.” https://resonance-audio.github.io/resonance-audio/, 2019. [Online; last visited: 02-August-2019].

[48] Valve, “SteamVR Plugin.” https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647, 2019. [Online; last visited: 02-August-2019].

[49] A. K. Ng, L. K. Chan, and H. Y. Lau, “A study of cybersickness and sensory conflict theory using a motion-coupled virtual reality system,” in 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 643–644, IEEE, 2018.

[50] Unity, “Survival Shooter Tutorial.” https://assetstore.unity.com/packages/essentials/tutorial-projects/survival-shooter-tutorial-legacy-40756, 2019. [Online; last visited: 02-August-2019].

[51] MAXQDA, “Professional Software for Qualitative and Mixed Methods Research.” https://www.maxqda.com/, 2019. [Online; last visited: 02-August-2019].

[52] S. Lewis, “Qualitative inquiry and research design: Choosing among five approaches,” Health Promotion Practice, vol. 16, no. 4, pp. 473–475, 2015.

[53] A.-A. Darrow, “The role of music in deaf culture: Implications for music educators,” Journal of Research in Music Education, vol. 41, no. 2, pp. 93–110, 1993.

