In March, people using Amazon’s digital assistant, Alexa, reported eerie laughter, a problem the company claimed to have fixed, according to the New York Times.
Alexa reportedly misheard other spoken words as the command “Alexa, laugh.” Amazon reprogrammed the device to respond, “Sure, I can laugh,” to reduce such false triggers.
Well, a Portland couple wasn’t laughing when the same technical problem, Alexa misinterpreting language, caused the digital assistant to pluck a series of words and phrases out of context:
- “Alexa,” which triggers the recording mechanism.
- “Send message,” which instructs the device to send a recording.
- “[Name],” which Alexa matched to a name in the couple’s contact list.
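The three-stage chain above can be illustrated with a minimal sketch. This is hypothetical code, not Amazon’s actual implementation: the `Utterance` type, contact list, and confidence threshold are all assumptions made for illustration. The point is that when each stage accepts a low-confidence match, background conversation can clear every gate in sequence.

```python
# Hypothetical sketch of a wake-word -> command -> contact pipeline.
# Not Amazon's code; names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    confidence: float  # speech recognizer's confidence, 0.0 to 1.0

CONTACTS = ["Alice", "Bob"]  # illustrative contact list
THRESHOLD = 0.5              # a lax threshold lets overheard speech through

def matches(utterance, phrase):
    """A stage fires when the text matches and confidence clears the bar."""
    return utterance.text == phrase and utterance.confidence >= THRESHOLD

def assistant(stream):
    """Walk a stream of (mis)heard utterances through the three stages."""
    it = iter(stream)
    for u in it:
        if matches(u, "alexa"):                       # stage 1: wake word
            cmd = next(it, None)
            if cmd and matches(cmd, "send message"):  # stage 2: command
                name = next(it, None)                 # stage 3: contact match
                if (name and name.text.title() in CONTACTS
                        and name.confidence >= THRESHOLD):
                    return f"recording sent to {name.text.title()}"
    return "no action"

# Background conversation, misheard with only middling confidence,
# still barely clears the threshold at every stage:
overheard = [
    Utterance("alexa", 0.55),
    Utterance("send message", 0.52),
    Utterance("bob", 0.51),
]
print(assistant(overheard))  # -> recording sent to Bob
```

Raising `THRESHOLD` to, say, 0.9 makes the same stream produce `no action`, which is the tradeoff a designer faces: a stricter threshold blocks accidental triggers but also rejects more legitimate commands.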
As Bloomberg News reported, the couple was contacted by an acquaintance who warned, “Unplug your Alexa devices right now. You’re being hacked.”
*Interpersonal Divide in the Age of the Machine* warns against glitches like this, which can be catastrophic when artificial intelligence attempts to make sense of the illogical, positional, multicultural English language.
Worse, conversations and data can be shared without users knowing.
Here’s an excerpt:
In addition to knowing all about each individual from our most popular devices such as iPhones and applications such as Facebook, which surveil and sell simultaneously, the government can compile specific dossiers about our electronic identities which may or may not represent who we truly are. A digital fingerprint differs from a real one. As this book documents, we are more than our cookies say we are. But machines could care less.
All technology comes with privacy risks. That’s why it is vital to understand service terms along with what each device is programmed to do when users invite machines into the long-gone privacy of their homes.