
User:Yoda2016/sandbox

From Wikipedia, the free encyclopedia

Introduction

Voice-controlled speakers, also known as “smart speakers” or, more commonly, by the names of commercial products such as Google Home and Amazon Echo, are becoming increasingly popular. As soon as they hear a “wake up” phrase, they activate and begin following the user’s oral commands (Baird, 2017). Voice recognition software itself, such as Apple’s Siri or Amazon’s Alexa, is not a new technology (Baird, 2017). What smart speakers strive to do, however, is provide a comprehensive virtual home assistant through their ability to connect to other Internet of Things (IoT) devices (Baird, 2017). While smart speakers are intended to make our lives easier, they also pose potentially serious security and privacy risks, just as many other IoT devices do.

Current Use

Voice-controlled speakers contain internet-connected microphones that constantly listen for a specific “wake up” phrase (Baird, 2017). Once the phrase is recognized, the speaker begins recording the user’s voice in order to follow oral commands; specifically, the recording is sent to a remote server for analysis (Baird, 2017). Through the smart speaker’s ability to connect with other devices and online accounts, users can create a shopping list, manage their schedule, order products online, listen to their favorite music, adjust the room temperature, open the garage door, and more, simply by speaking to the device (Baird, 2017; Brenner, 2017).
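The activation flow described above (listen for a wake phrase, then record the command and send it to a remote server) can be sketched in simplified form. This is an illustrative sketch only, not any vendor’s actual implementation: the wake phrase and the function names `process_utterance` and `analyze_remotely` are hypothetical, and text strings stand in for audio.

```python
from typing import Optional

# All names here are hypothetical; text strings stand in for audio.
WAKE_PHRASE = "alexa"

def analyze_remotely(recording: str) -> str:
    # Stand-in for the cloud round trip: real devices upload the recorded
    # audio to a remote server, which returns a parsed command.
    return recording.strip()

def process_utterance(heard: str) -> Optional[str]:
    """Return a parsed command only if the utterance begins with the wake phrase."""
    words = heard.lower().split()
    if not words or words[0] != WAKE_PHRASE:
        return None  # no wake phrase heard: nothing is recorded or uploaded
    recording = " ".join(words[1:])      # recording starts after the wake phrase
    return analyze_remotely(recording)   # this is where audio leaves the device

print(process_utterance("Alexa play some music"))  # -> play some music
print(process_utterance("pass me the remote"))     # -> None
```

The key privacy-relevant detail the sketch captures is that everything after the wake phrase is shipped off the device for processing, which is why the recordings exist on a remote server at all.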

Security Aspects

These voice-controlled speakers are not free from security concerns, however. The major concern is the risk of misuse. Because the speakers do not distinguish between users and respond identically to any voice, they can easily be manipulated by any family member or friend in the vicinity. A mischievous friend may order embarrassing or surprising gifts on your behalf (Falcioni, 2016), or a child may order expensive items by mistake. After all, all the speakers need in order to activate is the “wake up” phrase, and even similar phrases (for example, hearing someone say “Alex” on TV or radio rather than someone actually addressing the speaker as “Alexa”) can inadvertently activate them, at which point the speaker will record and follow whatever voice instructions come next.
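The accidental-activation problem can be illustrated with a deliberately naive matcher. The similarity threshold and function name below are assumptions chosen for illustration; real wake-word detectors compare acoustic features rather than spelled-out text, but the failure mode is analogous: a near-miss like “Alex” scores close enough to “Alexa” to trigger the device.

```python
from difflib import SequenceMatcher

WAKE_PHRASE = "alexa"
THRESHOLD = 0.75  # assumed tolerance, chosen only for illustration

def is_activated(heard_word: str) -> bool:
    """Naive text-similarity stand-in for acoustic wake-word matching."""
    ratio = SequenceMatcher(None, heard_word.lower(), WAKE_PHRASE).ratio()
    return ratio >= THRESHOLD

print(is_activated("Alexa"))   # True: intentional activation
print(is_activated("Alex"))    # True: a near-miss triggers the device anyway
print(is_activated("dinner"))  # False: unrelated speech is ignored
```

Tightening the threshold reduces false activations but makes the device miss genuine commands, which is why vendors tolerate some accidental triggering.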

Another concern is privacy. When a smart speaker is activated, the voice recordings are stored remotely, although users can review and delete the recordings and the log of oral commands (Baird, 2017). Secure storage of recordings and command history is not always guaranteed, however, given the increasing risk of hacking (Baird, 2017). Moreover, because voice activation is so easy to trigger, practically every conversation that occurs in the smart speaker’s vicinity is at risk of being recorded without your knowledge or against your will.

Legal, Ethical and Social Implications

Such security concerns ultimately lead to a legal question: is any third party entitled to the recordings on your smart speaker? Or, how much privacy can we really expect while using a smart speaker at home or in the office? Just as many people would not feel comfortable sharing their personal conversations at home with anyone else (even, or especially, with the authorities), you may not feel comfortable sharing recordings that your smart speaker captured in the sanctuary of your own home. Although not always on (smart speakers are only activated by a “wake up” phrase, and they record short statements rather than long conversations), they are inevitably prone to accidental recording. At issue in a 2015 murder case was whether law enforcement was entitled to the recordings on a smart speaker located at the scene (Brenner, 2017; McLaughlin, 2017). An Arkansas man was accused of murdering his friend at his home. On the night of the death, three men drank alcohol, watched college football, and went into a hot tub; one of them was found dead in the hot tub the next morning. Prosecutors sought the recordings from an Amazon Echo, hoping that the speaker had captured something relevant upon accidental activation (for example, by a TV or radio) (Brenner, 2017; McLaughlin, 2017). In other words, law enforcement issued a search warrant for the Echo’s recordings even though they were not certain the recordings contained any substantial evidence of the murder. Amazon is believed to have released the defendant’s account details, but it rejected the request for the recordings, arguing that the search warrant was “overbroad” (Brenner, 2017). Although the defendant in this case eventually submitted the recordings voluntarily (McLaughlin, 2017), new sets of questions arise here.

Had the defendant not voluntarily cooperated, would Amazon have been legally required to submit the recordings? And is Amazon even entitled to submit such recordings without the user’s consent? This kind of “ubiquitous surveillance” is not an issue limited to smart speakers; it is shared by many other IoT devices (Schneier, 2015). For example, Samsung’s “eavesdropping” smart TVs and Ford’s cars embedded with Amazon’s Alexa have highlighted how prevalent smart assistants have become in recent years, as well as how extensively our privacy is at risk (Dale, 2015; Schneier, 2015; Waters, 2017). The more our devices are connected to the internet, the more likely we are to be heard by hackers, government agencies, or private companies without our knowledge or consent. Some manufacturers of these smart devices may believe they have covered their bases by providing a lengthy privacy policy statement (Schneier, 2015). Even when such a statement is present, however, consumers generally do not read it before consenting to it, and they remain unaware of the extent to which they are being surveilled (for example, even if you never utter a word, Gmail and Facebook “listen” to everything you type!) (Schneier, 2015).

Moreover, the social and ethical implications of the search results provided by smart speakers cannot be overlooked. When you ask your smart speaker a question, the difference between its oral answer and browsing search results on the internet is that you cannot compare the oral answer with any alternatives. If your voice command was unclear or your smart speaker misunderstood the question, the answer it gives you may simply be incorrect. Worse, you will probably not know it is incorrect, because you have no other results to compare it with. Each smart speaker also uses a particular search engine, which may not be your preferred one, and it can be programmed to favor certain results. For example, Amazon Echo will most likely recommend products on Amazon, even if you would rather order through another online retailer. As a result, as we grow more reliant on the convenience of smart speakers, we may unknowingly lose the ability to proactively choose and sort information, and with it our autonomy and independence as consumers.

Future Use

Considering how prevalent smart speakers have become, it is probably not surprising that many people regard them as family members. According to Daren Gill, director of product management for Amazon’s Alexa, more than 250,000 people have proposed marriage to Alexa (Turk, 2016). Although the Echo cannot truly be personalized, people’s desire to treat it as an intimate friend rather than merely a robot does not seem to be subsiding (Turk, 2016). Accordingly, studies are ongoing to make voice-controlled speakers more “personable” (Turk, 2016). It is notable that another tech giant, Apple, has yet to release a product equivalent to the Amazon Echo or Google Home, though it has been rumored to be developing its own smart speaker. To differentiate its product from those of its competitors, Apple’s smart speaker may even include facial recognition software (Baird, 2017; Gurman and King, 2016). While users would be able to opt out of facial recognition by turning off the camera (Baird, 2017), such a feature, if realized, would take smart speakers to another level, potentially making them even more invasive of our privacy.

Conclusion

Although voice-controlled speakers bring many benefits and conveniences to our lives, they also come with serious security concerns. Although no measure is perfect, there are ways users can try to protect themselves (Baird, 2017; Brenner, 2017). For example, you can manually mute the microphone when the speaker is not in use, and change the speaker’s settings to prevent misuse or accidental activation. You can also manage and delete your voice recordings and command history online. You can further choose where to place your smart speaker, and which IoT devices or online accounts to connect it with, so that it is not exposed to sensitive information. It is our choice and responsibility as users to decide whether to use these speakers and how to minimize the security risks that come with them. However, it is also the manufacturers’ responsibility to keep their privacy policies transparent, so that users can monitor and regulate how much they are in fact being listened to (Schneier, 2015).
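The protective steps listed above amount to checking a handful of device settings against privacy-minded defaults. The sketch below makes that concrete; every setting name is hypothetical and does not correspond to any real vendor’s configuration API.

```python
# Hypothetical privacy audit: the setting names below are illustrative
# and do not correspond to any real vendor's configuration API.
RECOMMENDED = {
    "mic_muted_when_idle": True,        # mute the microphone when not in use
    "purchase_confirmation_pin": True,  # guard against accidental orders
    "recording_retention_days": 0,      # delete voice history promptly
    "connected_accounts": [],           # avoid linking sensitive accounts
}

def audit(settings: dict) -> list:
    """Name the settings that diverge from the privacy-minded defaults."""
    return [key for key, safe in RECOMMENDED.items() if settings.get(key) != safe]

current = {
    "mic_muted_when_idle": False,
    "purchase_confirmation_pin": True,
    "recording_retention_days": 90,
    "connected_accounts": ["calendar", "shopping"],
}
print(audit(current))  # flags the three settings that weaken privacy
```

A user who reviewed such a checklist periodically would catch the most common exposures, such as an unmuted microphone or a long recording-retention window, before they become a problem.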


References

Baird, J. (April 14, 2017). Smart speakers and voice recognition: Is your privacy at risk?. Huffington Post. Retrieved September 14, 2017, from http://www.huffingtonpost.com/entry/smart-speakers-and-voice-recognition-is-your-privacy_us_58f14ddee4b04cae050dc73e (WEBSITE) The article describes what smart speakers are and how they work. The article then explains how smart speakers may put our privacy at risk. Specifically, the author highlights the risk of misuse and privacy concerns. The article further notes what we should keep in mind in determining whether or not to take advantage of this useful technology. In the end, the article explains how users can try to protect their privacy while using smart speakers.

Brenner, B. (January 27, 2017). Know the risks of Amazon Alexa and Google Home. Naked Security. Retrieved September 14, 2017, from https://nakedsecurity.sophos.com/2017/01/27/data-privacy-day-know-the-risks-of-amazon-alexa-and-google-home/ (WEBSITE) The article focuses on security risks involved in using voice-activated, internet-connected personal assistants. The article explains the extent to which these personal assistants may invade our privacy. As an example, the article cites the murder case in Arkansas in which the recordings from an Amazon Echo were at issue. The author notes how widely security threats against IoT devices have spread in recent years, which can also apply to voice-activated personal assistants. The author finally lists some preventive measures to minimize such security threats.

Dale, R. (2015). The limits of intelligent personal assistants. Natural Language Engineering, 21(2), 325-329. doi:http://dx.doi.org.mutex.gmu.edu/10.1017/S1351324915000042 (JOURNAL) This article lays out recent industry insights into intelligent personal assistants (IPAs). It further explains their benefits and limitations, including the invasive nature of IPAs such as Siri, Cortana and Google Now. The author notes that Amazon Echo, Google Glass and Samsung’s “eavesdropping” televisions will take the IPAs’ dominance “a step further.” However, the author predicts that “a collection of specific capabilities doesn’t amount to a broad-coverage intelligence” and that we will not see “machines-we-can-talk-to” in our lifetime. The author ends the article with recent language-technology news, such as Facebook’s acquisitions of voice recognition and translation start-ups, spoken language translation by Google and Microsoft, and sentiment analysis.

Falcioni, J. G. (2016). A machine that thinks for you. Mechanical Engineering, 138(11), 6. Retrieved from https://search-proquest-com.mutex.gmu.edu/docview/1848077203?accountid=14541 (JOURNAL) This article illustrates our modern-day obsession with voice recognition personal assistants. It also notes public concerns surrounding such assistants, such as the risk of misuse, which can very easily occur. The author introduces Google’s view that “machine learning is at a point where a virtual assistant is all we need to solve all our information-related needs.” While acknowledging the power of voice recognition, the author warns that the actual impact of speech recognition is still unknown. The author concludes the article by stating that the engineering community should keep in mind the moral and ethical burdens of building artificial intelligence (AI).

Gurman, M. and King, I. (September 23, 2016). Apple stepping up plans for Amazon Echo-style smart-home device. Bloomberg.com. Retrieved September 22, 2017, from https://www.bloomberg.com/news/articles/2016-09-23/apple-said-to-step-up-plans-for-echo-style-smart-home-device-itfnod11 (NEWSPAPER) The article describes the history of Apple’s development of an Amazon Echo-style smart-home device. Apple’s interest in smart-home devices started around 2014, and the article notes that development has “entered the prototype stage.” In hopes of boosting sales, Apple is keen on differentiating its product from other smart speakers, for example through facial recognition. The article explains that Apple is further seeking a way for voice commands to “control the entire system without having to open an app or reactivate Siri.”

McLaughlin, E. C. (April 26, 2017). Suspect OKs Amazon to hand over Echo recordings in murder case. CNN. Retrieved September 14, 2017, from http://www.cnn.com/2017/03/07/tech/amazon-echo-alexa-bentonville-arkansas-murder-case/index.html (WEBSITE) The article describes the background of the Arkansas murder case in which recordings from the defendant’s Amazon Echo were sought. Amazon declined the request due to “the important First Amendment and privacy implications at stake.” In particular, the prosecutor’s request for the recordings was considered “overbroad,” as there was no certain evidence that the recordings had anything to do with solving the murder. This may be the first court battle in which recordings from smart speakers were at issue. Yet the article notes that it will certainly not be the last, considering how increasingly evidence from internet-connected devices is being sought.

Schneier, B. (February 12, 2015). Your TV is listening to you. CNN. Retrieved September 14, 2017, from http://www.cnn.com/2015/02/11/opinion/schneier-samsung-tv-listening/index.html (WEBSITE) Following the shocking revelation of Samsung’s internet-connected, “eavesdropping” smart TVs, the author lists many other examples of how we are listened to every day, through the devices around us, by hackers, governments and companies. The author notes that we should not be surprised by this, considering increasing computerization. The author explains how voice-activated systems listen to us, and describes how what we type, even if we do not speak, can also be monitored. The author concludes that such monitoring must be properly regulated to safeguard our privacy and free expression.

Turk, V. (2016). Home invasion. New Scientist, 232, 16. Retrieved from https://search-proquest-com.mutex.gmu.edu/docview/1879984802?accountid=14541 (JOURNAL) This article focuses on how and why home assistants such as Google Home and Amazon Echo are growing in demand. It also describes how people communicate with such home assistants. Notably, the article highlights how people are developing complicated relationships with these home assistants, seeking more intimate and “personable” interactions. The author attributes this to the hypothesis that speaking to an interface makes the interface “almost disappear.” While the author points out that home assistants are nothing like live people or animals, she also recognizes our tendency to treat “technology like people.”

Waters, R. (2017). Ford enlists Amazon's Alexa as driver assistant. FT.com. Retrieved from https://search-proquest-com.mutex.gmu.edu/docview/1864981791?accountid=14541 (JOURNAL) The article reports on how Ford “has become the first car company to embed Amazon’s voice-activated digital assistant in its cars.” The article explains that this move follows a trend of installing voice-activation systems in home appliances, a trend based partly on the hope that products that have otherwise “failed to live up expectations” will attract more consumers. On one hand, the author notes the advantages of oral commands and of linking multiple devices. On the other hand, the author acknowledges the difficulty of setting up multiple devices with different compatibilities.