Tag Archive: Google Play

Facebook will shut down its spyware VPN app Onavo

Facebook will end its unpaid market research programs and proactively take its Onavo VPN app off the Google Play store in the wake of backlash following TechCrunch’s investigation into Onavo code being used in a Facebook Research app that sucked up data about teens. The Onavo Protect app will eventually shut down and will immediately cease pulling in data from users for market research, though it will continue operating as a Virtual Private Network in the short term to allow users to find a replacement.
Facebook has also ceased to recruit new users for the Facebook Research app that still runs on Android but was forced off of iOS by Apple after we reported on how it violated Apple’s Enterprise Certificate program for employee-only apps. Existing Facebook Research app studies will continue to run, though.
With the suspicions about tech giants and looming regulation leading to more intense scrutiny of privacy practices, Facebook has decided that giving users a utility like a VPN in exchange for quietly examining their app usage and mobile browsing data isn’t a wise strategy. Instead, it will focus on paid programs where users explicitly understand what privacy they’re giving up for direct financial compensation.

Onavo billed itself as a way to “limit apps from using background data” and “use a secure VPN network for your personal info,” but also noted it would collect the “Time you spend using apps, mobile and Wi-Fi data you use per app, the websites you visit, and your country, device and network type.” A Facebook spokesperson confirmed the change and provided this statement: “Market research helps companies build better products for people. We are shifting our focus to reward-based market research which means we’re going to end the Onavo program.”
Facebook acquired Onavo in 2013 for a reported $200 million and used its VPN app to gather data about what people were doing on their phones. That data revealed WhatsApp was sending over twice as many messages per day as Messenger, BuzzFeed’s Ryan Mac and Charlie Warzel reported, convincing Facebook to pay a steep $19 billion to buy WhatsApp. Facebook went on to frame Onavo as a way for users to reduce their data usage, block dangerous websites and keep their traffic safe from snooping, even as Facebook itself was analyzing that traffic. The insights helped it discover new trends in mobile usage, keep an eye on competitors and figure out what features or apps to copy. Cloning became core to Facebook’s product strategy in recent years, with Instagram’s version of Snapchat Stories growing larger than the original.
But last year, privacy concerns led Apple to push Facebook to remove the Onavo VPN app from the App Store, though it continued running on Google Play. Facebook then quietly repurposed Onavo code for use in its Facebook Research app, which TechCrunch found was paying users in the U.S. and India ages 13 to 35 up to $20 in gift cards per month to give it VPN and root network access to spy on all their mobile data.
Facebook ran the program in secret, obscured by intermediary beta testing services like Betabound and Applause. Users recruited with ads on Instagram, Snapchat and elsewhere were only informed they were joining a Facebook Research program after they’d begun signup and signed non-disclosure agreements. A Facebook spokesperson claimed in a statement that “there was nothing ‘secret’ about this”, yet the company had threatened legal action if users publicly discussed the Research program.
But the biggest problem for Facebook ended up being that its Research app abused Apple’s Enterprise Certificate program, which is meant for employee-only apps, to distribute the app outside the company. That led Apple to ban the Research app from iOS and invalidate Facebook’s certificate, which caused the company’s internal iOS collaboration tools, pre-launch test versions of its popular apps, and even its lunch menu and shuttle schedule to break for 30 hours, creating chaos at its offices.
To preempt any more scandals around Onavo and the Facebook Research app and avoid Google stepping in to forcibly block the apps, Facebook is now taking Onavo off the Play Store and stopping recruitment of Research testers. That’s a surprising voluntary move that perhaps shows Facebook is finally getting in tune with the public perception of its shady actions. The company has repeatedly misread how users would react to its product launches and privacy invasions, leading to near constant gaffes and an unending news cycle chronicling its blunders.
Without Onavo, Facebook loses a powerful method of market research, and its future initiatives here will come at a higher price. Facebook has run tons of focus groups, surveys, and other user feedback programs over the past decade to learn where it could improve or what innovations it could co-opt. But given how cloning plus acquisitions like WhatsApp and Instagram have been vital to Facebook’s success, it’s likely worth paying out more gift cards and more tightly monitoring its research practices. Otherwise Facebook could miss the next big thing that might disrupt it.
Hopefully Facebook will be less clandestine with its future market research programs. It should be upfront about its involvement, make certain that users understand what data they’re giving up, stop researching teens or at the very least verify the consent of their parents, and avoid slurping up sensitive information or data about a user’s unwitting friends. For a company that depends on people to trust it with their content, it has a long way to go to win back our confidence.


Google starts pulling unvetted Android apps that access call logs and SMS messages

Google is removing apps from Google Play that request permission to access call logs and SMS text message data but haven’t been manually vetted by Google staff.
The search and mobile giant said the removals are part of a move to cut down on apps that have access to sensitive calling and texting data.
Google said in October that Android apps would no longer be allowed to use the legacy permissions as part of a wider push for developers to adopt newer, more secure and privacy-minded APIs. Many apps request access to call logs and texting data to verify two-factor authentication codes, for social sharing, or to replace the phone dialer. But Google acknowledged that this level of access can be and has been abused by developers who misuse the permissions to gather sensitive data, or mishandle it altogether.
“Our new policy is designed to ensure that apps asking for these permissions need full and ongoing access to the sensitive data in order to accomplish the app’s primary use case, and that users will understand why this data would be required for the app to function,” wrote Paul Bankhead, Google’s director of product management for Google Play.
Any developer wanting to retain the ability to ask a user’s permission for calling and texting data has to fill out a permissions declaration.
Google will review the app and why it needs to retain access, and will weigh several considerations, including why the developer is requesting access, the user benefit of the feature requesting it, and the risks associated with having access to call and texting data.
Bankhead conceded that under the new policy, some use cases will “no longer be allowed,” rendering some apps obsolete.
So far, tens of thousands of developers have already submitted new versions of their apps that remove the need for call and texting permissions, or have submitted a permissions declaration, Google said.
Developers with a submitted declaration have until March 9 to receive approval or remove the permissions. In the meantime, Google has a full list of permitted use cases for the call log and text message permissions, as well as alternatives.
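One of the alternatives Google steers developers toward for the two-factor authentication use case is the SMS Retriever API, which delivers a single verification SMS to the app without any SMS permission at all. Below is a minimal Kotlin sketch under that assumption; the play-services-auth-api-phone dependency is required, and OtpActivity and parseAndSubmitCode are hypothetical names, not part of the API.

```kotlin
// Sketch of the SMS Retriever API, a privacy-minded alternative to the
// READ_SMS permission for verifying two-factor authentication codes.
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import androidx.appcompat.app.AppCompatActivity
import com.google.android.gms.auth.api.phone.SmsRetriever
import com.google.android.gms.common.api.CommonStatusCodes
import com.google.android.gms.common.api.Status

class OtpActivity : AppCompatActivity() {

    private val smsReceiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            if (intent.action != SmsRetriever.SMS_RETRIEVED_ACTION) return
            val extras = intent.extras ?: return
            val status = extras.get(SmsRetriever.EXTRA_STATUS) as Status
            if (status.statusCode == CommonStatusCodes.SUCCESS) {
                // Full text of the verification SMS; the message must end with
                // the app's 11-character hash for the retriever to deliver it.
                val message = extras.getString(SmsRetriever.EXTRA_SMS_MESSAGE)
                // parseAndSubmitCode(message)  // app-specific OTP handling
            }
        }
    }

    override fun onStart() {
        super.onStart()
        // Wait (up to five minutes) for one matching SMS; no SMS permission needed.
        SmsRetriever.getClient(this).startSmsRetriever()
        registerReceiver(smsReceiver, IntentFilter(SmsRetriever.SMS_RETRIEVED_ACTION))
    }

    override fun onStop() {
        super.onStop()
        unregisterReceiver(smsReceiver)
    }
}
```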
The last two years alone have seen several high-profile cases of Android apps or other services leaking or exposing call and text data. In late 2017, the popular Android keyboard ai.type exposed a massive database covering 31 million users, including 374 million phone numbers.


An overview of the best space games for Android in 2019

Presenting a comprehensive selection of space- and science-fiction-themed games for phones. Space games for Android are scattered across different sections of Google Play, so this overview will help you get your bearings and download the shooter you like for free.

WhatsApp has an encrypted child porn problem

WhatsApp chat groups are being used to spread illegal child pornography, cloaked by the app’s end-to-end encryption. Without the necessary number of human moderators, the disturbing content is slipping by WhatsApp’s automated systems. A report from two Israeli NGOs reviewed by TechCrunch details how third-party apps for discovering WhatsApp groups include “Adult” sections that offer invite links to join rings of users trading images of child exploitation. TechCrunch has reviewed materials showing many of these groups are currently active.
TechCrunch’s investigation shows that Facebook could do more to police WhatsApp and remove this kind of content. Even without technical solutions that would require a weakening of encryption, WhatsApp’s moderators should have been able to find these groups and put a stop to them. Groups with names like “child porn only no adv” and “child porn xvideos” found on the group discovery app “Group Links For Whats” by Lisa Studio don’t even attempt to hide their nature. And a screenshot provided by anti-exploitation startup AntiToxin reveals active WhatsApp groups with names like “Children ” or “videos cp” — a known abbreviation for ‘child pornography’.
A screenshot from today of active child exploitation groups on WhatsApp. Phone numbers and photos redacted. Provided by AntiToxin.
Better manual investigation of these group discovery apps and WhatsApp itself should have immediately led these groups to be deleted and their members banned. While Facebook doubled its moderation staff from 10,000 to 20,000 in 2018 to crack down on election interference, bullying and other policy violations, that staff does not moderate WhatsApp content. With just 300 employees, WhatsApp runs semi-independently, and the company confirms it handles its own moderation efforts. That’s proving inadequate for policing a 1.5 billion-user community.
The findings from the NGOs Screen Savers and Netivei Reshe were written about today by the Financial Times, but TechCrunch is publishing the full report, their translated letter to Facebook, translated emails with Facebook, their police report, plus the names of child pornography groups on WhatsApp and group discovery apps listed above. A startup called AntiToxin Technologies that researches the topic has backed up the report, providing the screenshot above and saying it’s identified more than 1,300 videos and photographs of minors involved in sexual acts on WhatsApp groups. Given that Tumblr’s app was recently temporarily removed from the Apple App Store for allegedly harboring child pornography, we’ve asked Apple if it will temporarily suspend WhatsApp, but have not heard back.

Uncovering a nightmare
In July 2018, the NGOs became aware of the issue after a man reported to one of their hotlines that he’d seen hardcore pornography on WhatsApp. In October, they spent 20 days cataloging more than 10 of the child pornography groups, their content and the apps that allow people to find them.
The NGOs began contacting Facebook’s head of policy, Jordana Cutler, starting September 4th. They requested a meeting four times to discuss their findings. Cutler asked for email evidence but did not agree to a meeting, instead following Israeli law enforcement’s guidance to instruct researchers to contact the authorities. The NGOs reported their findings to Israeli police but declined to provide Facebook with their research. WhatsApp only received their report and the screenshot of active child pornography groups today from TechCrunch.
Listings from a group discovery app of child exploitation groups on WhatsApp. URLs and photos have been redacted.
WhatsApp tells me it’s now investigating the groups visible from the research we provided. A Facebook spokesperson tells TechCrunch, “Keeping people safe on Facebook is fundamental to the work of our teams around the world. We offered to work together with police in Israel to launch an investigation to stop this abuse.” A statement from the Israeli Police’s head of the Child Online Protection Bureau, Meir Hayoun, notes that: “In past meetings with Jordana, I instructed her to always tell anyone who wanted to report any pedophile content to contact the Israeli police to report a complaint.”
A WhatsApp spokesperson tells me that while legal adult pornography is allowed on WhatsApp, it banned 130,000 accounts in a recent 10-day period for violating its policies against child exploitation. In a statement, WhatsApp wrote that:

WhatsApp has a zero-tolerance policy around child sexual abuse. We deploy our most advanced technology, including artificial intelligence, to scan profile photos and images in reported content, and actively ban accounts suspected of sharing this vile content. We also respond to law enforcement requests around the world and immediately report abuse to the National Center for Missing and Exploited Children. Sadly, because both app stores and communications services are being misused to spread abusive content, technology companies must work together to stop it.

But it’s that over-reliance on technology and subsequent under-staffing that seems to have allowed the problem to fester. AntiToxin’s CEO Zohar Levkovitz tells me, “Can it be argued that Facebook has unwittingly growth-hacked pedophilia? Yes. As parents and tech executives we cannot remain complacent to that.”

Automated moderation doesn’t cut it
WhatsApp introduced an invite link feature for groups in late 2016, making it much easier to discover and join groups without knowing any members. Competitors like Telegram had benefited as engagement in their public group chats rose. WhatsApp likely saw group invite links as an opportunity for growth, but didn’t allocate enough resources to monitor groups of strangers assembling around different topics. Apps sprang up to allow people to browse different groups by category. Some usage of these apps is legitimate, as people seek communities to discuss sports or entertainment. But many of these apps now feature “Adult” sections that can include invite links to both legal pornography-sharing groups as well as illegal child exploitation content.
A WhatsApp spokesperson tells me that it scans all unencrypted information on its network — basically anything outside of chat threads themselves — including user profile photos, group profile photos and group information. It seeks to match content against the PhotoDNA banks of indexed child pornography that many tech companies use to identify previously reported inappropriate imagery. If it finds a match, that account, or that group and all of its members, receive a lifetime ban from WhatsApp.
A WhatsApp group discovery app’s listings of child exploitation groups on WhatsApp
If imagery doesn’t match the database but is suspected of showing child exploitation, it’s manually reviewed. If found to be illegal, WhatsApp bans the accounts and/or groups, prevents the imagery from being uploaded in the future and reports the content and accounts to the National Center for Missing and Exploited Children. The one example group reported to WhatsApp by the Financial Times was already flagged for human review by its automated system, and was then banned along with all 256 members.
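WhatsApp hasn’t published the details of this matching, and PhotoDNA itself is proprietary, but the general shape of hash-based image matching can be sketched: reduce an image to a compact perceptual fingerprint and compare it against a bank of fingerprints of previously reported imagery. The deliberately simplified Kotlin sketch below uses an 8x8 average hash; bannedHashes, the threshold and isKnownBadImage are invented for illustration and bear no relation to WhatsApp’s or Microsoft’s actual implementations, which are far more robust to cropping and re-encoding.

```kotlin
// Simplified illustration of perceptual-hash matching, the family of
// techniques PhotoDNA belongs to. All names and thresholds are hypothetical.
import java.awt.Image
import java.awt.image.BufferedImage
import java.io.File
import javax.imageio.ImageIO

// Shrink to 8x8 grayscale, then set one bit per pixel:
// 1 if the pixel is brighter than the image's mean brightness.
fun averageHash(image: BufferedImage): Long {
    val scaled = BufferedImage(8, 8, BufferedImage.TYPE_BYTE_GRAY)
    scaled.graphics.drawImage(image.getScaledInstance(8, 8, Image.SCALE_SMOOTH), 0, 0, null)
    val pixels = IntArray(64) { scaled.raster.getSample(it % 8, it / 8, 0) }
    val mean = pixels.average()
    return pixels.foldIndexed(0L) { i, hash, p ->
        if (p > mean) hash or (1L shl i) else hash
    }
}

// Number of differing bits between two 64-bit fingerprints.
fun hammingDistance(a: Long, b: Long): Int = java.lang.Long.bitCount(a xor b)

// Hypothetical bank of fingerprints of known, previously reported imagery.
val bannedHashes: Set<Long> = emptySet() // loaded from an index in practice

// Flag an image whose fingerprint is near-identical to a known-bad one.
fun isKnownBadImage(file: File, threshold: Int = 5): Boolean {
    val image = ImageIO.read(file) ?: return false
    val hash = averageHash(image)
    return bannedHashes.any { hammingDistance(it, hash) <= threshold }
}
```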
To discourage abuse, WhatsApp says it limits groups to 256 members and purposefully does not provide a search function for people or groups within its app. It does not encourage the publication of group invite links, and the vast majority of groups have six or fewer members. It’s already working with Google and Apple to enforce its terms of service against apps like the child exploitation group discovery apps that abuse WhatsApp. Those kinds of groups already can’t be found in Apple’s App Store, but remain available on Google Play. We’ve contacted Google Play to ask how it addresses illegal content discovery apps and whether Group Links For Whats by Lisa Studio will remain available, and will update if we hear back. [Update 3pm PT: Google has not provided a comment but the Group Links For Whats app by Lisa Studio has been removed from Google Play. That’s a step in the right direction.]
But the larger question is: if WhatsApp was already aware of these group discovery apps, why wasn’t it using them to track down and ban groups that violate its policies? A spokesperson claimed that group names with “CP” or other indicators of child exploitation are some of the signals it uses to hunt these groups, and that names in group discovery apps don’t necessarily correlate to the group names on WhatsApp. But TechCrunch then provided a screenshot showing active groups within WhatsApp as of this morning, with names like “Children ” or “videos cp”. That shows that WhatsApp’s automated systems and lean staff are not enough to prevent the spread of illegal imagery.
The situation also raises questions about the trade-offs of encryption as some governments like Australia seek to prevent its usage by messaging apps. The technology can protect free speech, improve the safety of political dissidents and prevent censorship by both governments and tech platforms. However, it also can make detecting crime more difficult, exacerbating the harm caused to victims.
WhatsApp’s spokesperson tells me that it stands behind strong end-to-end encryption that protects conversations with loved ones, doctors and more. They said there are plenty of good reasons for end-to-end encryption and it will continue to support it. Changing that in any way, even to aid catching those that exploit children, would require a significant change to the privacy guarantees it’s given users. They suggested that on-device scanning for illegal content would have to be implemented by phone makers to prevent its spread without hampering encryption.
But for now, WhatsApp needs more human moderators willing to use proactive and unscalable manual investigation to address its child pornography problem. With Facebook earning billions in profit per quarter and staffing up its own moderation ranks, there’s no reason WhatsApp’s supposed autonomy should prevent it from applying adequate resources to the issue. WhatsApp sought to grow through big public groups, but failed to implement the necessary precautions to ensure they didn’t become havens for child exploitation. Tech companies like WhatsApp need to stop assuming cheap and efficient technological solutions are sufficient. If they want to make money off huge user bases, they must be willing to pay to protect and police them.


A look at the Android Market (aka Google Play) on its 10th Anniversary

Google Play has generated more than twice the downloads of the iOS App Store, reaching a 70 percent share of worldwide downloads in 2017, according to a new report from App Annie, released in conjunction with the 10th anniversary of the Android Market, now called Google Play. The report also examined the state of Google Play’s marketplace and the habits of Android users.
It found that, despite the large share of downloads, Google Play only accounted for 34 percent of worldwide consumer spend on apps, compared with 66 percent on the iOS App Store in 2017 — a figure that’s stayed relatively consistent for years.

Those numbers are consistent with the narrative that’s been told about the two app marketplaces for some time, as well. That is, Google has the sheer download numbers, thanks to the wide distribution of its devices — including its reach into emerging markets, thanks to low-cost smartphones. But Apple’s ecosystem is the one making more money from apps.
App Annie also found that the APAC (Asia-Pacific) region accounts for more than half of Google Play consumer spending. Japan was the largest market of all time on this front, topping the charts with $25.1 billion spent on apps and in-app purchases. It was followed by the U.S. ($19.3 billion) and South Korea ($11.2 billion).
The firm attributed some of Google Play’s success in Japan to carrier billing. This has allowed consumer spending to increase in markets like South Korea, Taiwan, Thailand and Singapore, as well, it said.
As to what consumers are spending their money on? Games, of course.
The report found that games accounted for 41 percent of downloads, but 88 percent of spend.

Outside of games, in-app subscriptions have contributed to revenue growth.
Non-game apps reached $2.7 billion in consumer spend last year, with 4 out of the top 5 apps offering a subscription model. The No. 1 app, LINE, was the exception. It was followed by subscription apps Tinder, Pandora, Netflix and HBO NOW.
In addition, App Annie examined the app usage patterns of Android users, and found they tend to have a lot of apps installed. In several markets, including the U.S. and Japan, Android users had more than 60 apps installed on their phones, and they used more than 30 apps every month.
Australia, the U.S. and South Korea led the way here, with users’ phones holding 100 or more apps.

The report also looked at the most popular games and apps of all time by both downloads and consumer spend. There weren’t many surprises on these lists, with apps like those made by Facebook dominating the top apps by downloads list, and subscription services dominating top apps by spend.

App Annie also noted Google Play has seen the release of nearly 10 million apps since its launch in 2008. Not all of these remain, of course — by today’s count, there are just over 2.8 million apps live on Google Play.


The full report is available here.
