Communications Daily is a Warren News publication.
IFTTT Applets ‘Potentially Unsafe,’

FTC Hears of Privacy, Security Risks in Many IoT, Tech Products

FTC members and staffers heard from academics about privacy, security and other risks in a range of connected products and in some of the programming that undergirds parts of the IoT. Many shared parts of their research and published papers Wednesday during the FTC’s PrivacyCon.


About half of applets that allow consumers to automatically post to social media are “potentially unsafe,” said Carnegie Mellon University scholar Milijana Surbatovich, citing a report analyzing the security and privacy risks of applets built on If This Then That technology, which triggers automated posts. An IFTTT applet for Fitbit, for example, might be programmed to upload a Facebook post each time a person hits an exercise milestone. Surbatovich said such technology can have unintended consequences, such as compromising private data: a photo of a passport meant for official use could unintentionally be posted publicly. Other issues include third parties triggering unauthorized activity on applets, leading to altered posts from users. Surbatovich and her Carnegie Mellon colleagues analyzed about 19,000 unique applets and found roughly half of them potentially unsafe.
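The risk the researchers describe can be illustrated with a minimal sketch of a trigger-action rule. All names here are hypothetical, not IFTTT's actual API: the point is that an applet's action sink (a public social media post) is public regardless of how sensitive the trigger's payload is, so private data can flow to a public channel unless the flow is checked.

```python
# Hypothetical sketch of an IFTTT-style trigger-action rule, showing how
# private trigger data can reach a public action sink unchecked.
from dataclasses import dataclass

PRIVATE, PUBLIC = "private", "public"

@dataclass
class Event:
    payload: str
    sensitivity: str  # PRIVATE or PUBLIC

def post_to_social_media(event: Event) -> str:
    # The action is public no matter what the payload's sensitivity is.
    return f"POSTED PUBLICLY: {event.payload}"

def applet(trigger: Event) -> tuple[str, bool]:
    """Run the rule and flag unsafe flows: private data reaching a public sink."""
    result = post_to_social_media(trigger)
    unsafe = trigger.sensitivity == PRIVATE
    return result, unsafe

# A photo meant for private use flows into a public post -> flagged unsafe.
_, flagged = applet(Event("passport_photo.jpg", PRIVATE))
```

A real analysis along these lines would label each trigger and action with a sensitivity level and flag any applet whose trigger's level exceeds its action's, rather than inspecting payloads at runtime.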

On a separate panel, University of Washington Program Director-Tech Policy Lab Emily McReynolds cited privacy issues with children’s toys. Unlike the Furby of the late 1990s, today’s toys record and transmit data over the internet, and that data can be distributed without users’ consent. Hello Barbie has a belt buckle that, when held down, records voices and surrounding sounds, which can be posted to social media. McReynolds and fellow researchers recommended companies offer more disclosure on toy capabilities.

Northeastern University associate professor Alan Mislove presented research on privacy and data issues in Facebook advertising, saying the company struggles to balance providing utility for advertisers with protecting user privacy, and suggesting the balance is slightly skewed toward advertisers. Mislove and his research team explored attribute-based advertisement targeting, which is used by Facebook, Google, Instagram, LinkedIn and Pinterest. He called these companies “21st century data brokers.” Posing as a potential advertising client, the team explored how hackers can manipulate Facebook data collections for targeted users, which could include private information like email addresses, phone numbers and dates of birth. One finding from interacting with Facebook: anyone can be an advertiser, and private data can be manipulated and mined before ad space is even purchased.
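One way attribute-based targeting can leak private data, consistent with the kind of probing the researchers describe, is audience-size differencing: an “advertiser” compares a platform's estimated audience size with and without a specific user included to infer whether that user holds a sensitive attribute. The sketch below is hypothetical and simulates the platform's reach estimate locally; it is not Facebook's actual API.

```python
# Hedged sketch (hypothetical functions) of audience-size differencing.
# estimated_audience_size() stands in for an ad platform's reach estimate:
# how many users in an uploaded audience hold the targeted attribute.

def estimated_audience_size(audience: set[str], attribute_holders: set[str]) -> int:
    return len(audience & attribute_holders)

def infer_attribute(victim: str, padding: set[str], attribute_holders: set[str]) -> bool:
    """If adding the victim to a fixed filler audience raises the size
    estimate, the victim likely holds the targeted attribute."""
    base = estimated_audience_size(padding, attribute_holders)
    with_victim = estimated_audience_size(padding | {victim}, attribute_holders)
    return with_victim > base

# Simulated platform data: which users actually hold a sensitive attribute.
holders = {"alice", "carol"}
pad = {"bob", "dave"}  # filler accounts the attacker controls

print(infer_attribute("alice", pad, holders))  # True: estimate rose by one
print(infer_attribute("eve", pad, holders))    # False: estimate unchanged
```

Real platforms add noise and minimum-audience thresholds to reach estimates precisely to blunt this kind of differencing, which is why the paragraph above frames it as a balance the platform must strike.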

FTC acting Chief Technology Officer Neil Chilson said the academics presented some "concerning scenarios." He's pleased to hear some companies have acted to address concerns since the research was published.

Commissioner Terrell McSweeny said she believes the four incoming commissioners will be very focused on consumer privacy and data security. Speaking during the Future of Privacy Forum’s annual Privacy Papers for Policymakers event Tuesday evening on Capitol Hill, she said all four testified they are interested in addressing the power of technology for Americans. She said there’s an apparent need for consumer rights that “map well” within the digital age, and “what I’d love to see is work around consumer rights to control for data, for data portability.” It’s pretty hard to engage in “consumer protection in this day and age without thinking about consumer privacy but also consumer data rights and what those mean in the digital age,” she said. As McSweeny has prepared to vacate her seat, she has repeatedly emphasized increasing consumer leverage on data (see 1802220042).

Privacy papers discussed Tuesday included “Artificial Intelligence Policy: A Primer and Roadmap,” written by University of Washington associate law professor Ryan Calo, who warned against AI regulations that restrict what companies and researchers can explore. “It’s premature for there to be top-down regulation for artificial intelligence,” he said. “Government needs to build in the capability over time in order to intervene.” Asked about fears AI could be an existential threat to humanity, Calo said the U.S. should be worried, but “this is not Skynet.” Though AI is affecting real people in terms of employment and faulty technology causing injury or death, he doesn’t believe robots will eventually control society physically.