Category Archives: Security

Apple, Opera and Yandex fix browser address bar spoofing bugs, but millions more still left vulnerable

Year after year, phishing remains one of the most popular and effective ways for attackers to steal your passwords. As users, we’re mostly trained to spot the telltale signs of a phishing site, but most of us rely on carefully examining the web address in the browser’s address bar to make sure the site is legitimate.
But even the browser’s anti-phishing features — often the last line of defense for a would-be phishing victim — aren’t perfect.
Security researcher Rafay Baloch found several vulnerabilities in some of the most widely used mobile browsers — including Apple’s Safari, Opera and Yandex — which, if exploited, would allow an attacker to trick the browser into displaying a different web address from that of the actual website the user is on. These address bar spoofing bugs make it far easier for attackers to pass their phishing pages off as legitimate websites, creating the perfect conditions for someone trying to steal passwords.

The bugs worked by exploiting the time a vulnerable browser takes to load a web page. Once a victim is tricked into opening a link from a phishing email or text message, the malicious web page uses code hidden on the page to replace its actual web address in the browser’s address bar with any other web address the attacker chooses.
In at least one case, the vulnerable browser retained the green padlock icon, indicating that the malicious web page with a spoofed web address was legitimate — when it wasn’t.

An address bar spoofing bug in Opera Touch for iOS (left) and Bolt Browser (right). These spoofing bugs can make phishing emails look far more convincing. (Image: Rapid7/supplied)

Rapid7’s research director Tod Beardsley, who helped Baloch with disclosing the vulnerabilities to each browser maker, said address bar spoofing attacks put mobile users at particular risk.
“On mobile, space is at an absolute premium, so every fraction of an inch counts. As a result, there’s not a lot of space available for security signals and sigils,” Beardsley told TechCrunch. “While on a desktop browser, you can either look at the link you’re on, mouse over a link to see where you’re going or even click on the lock to get certificate details. These extra sources don’t really exist on mobile, so the location bar not only tells the user what site they’re on, it’s expected to tell the user this unambiguously and with certainty. If you’re on palpay.com instead of the expected paypal.com, you could notice this and know you’re on a fake site before you type in your password.”
“Spoofing attacks like this make the location bar ambiguous, and thus, allow an attacker to generate some credence and trustworthiness to their fake site,” he said.
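Beardsley’s palpay.com example is the classic lookalike-domain trick. As a rough illustration (the allowlist and threshold below are hypothetical, not anything from the researchers’ work), a few lines of Python using the standard library’s difflib can flag a domain that closely resembles, but does not exactly match, a known brand:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of brand domains to protect -- illustration only.
KNOWN_DOMAINS = ["paypal.com", "google.com"]

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio (0.0-1.0) between two domain names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_lookalikes(domain: str, threshold: float = 0.65) -> list:
    """Return known domains that `domain` closely resembles without
    matching exactly -- the classic phishing lookalike pattern."""
    return [known for known in KNOWN_DOMAINS
            if domain.lower() != known and similarity(domain, known) >= threshold]

print(flag_lookalikes("palpay.com"))  # the transposed lookalike is caught
print(flag_lookalikes("paypal.com"))  # the genuine domain is not flagged
```

Real browser anti-phishing relies on far richer signals (reputation services, punycode and homoglyph checks), but even this crude distance check catches the transposition that turns paypal.com into palpay.com.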
Baloch and Beardsley said the browser makers responded with mixed results.
So far, only Apple and Yandex have pushed out fixes, in September and October. Opera spokesperson Julia Szyndzielorz said the fixes for its Opera Touch and Opera Mini browsers are “in gradual rollout.”
But the makers of UC Browser, Bolt Browser and RITS Browser — which collectively have more than 600 million device installs — did not respond to the researchers and left the vulnerabilities unpatched.
TechCrunch reached out to each browser maker but none provided a statement by the time of publication.


Google is making autofill on Chrome for mobile more secure

Google today announced a new autofill experience for Chrome on mobile that will use biometric authentication for credit card transactions, as well as an updated built-in password manager that will make signing in to a site a bit more straightforward.
Image Credits: Google
Chrome already uses the W3C WebAuthn standard for biometric authentication on Windows and Mac. With this update, the feature is now also coming to Android.
If you’ve ever bought something through the browser on your Android phone, you know that Chrome always asks you to enter the CVC code from your credit card to ensure that it’s really you — even if you have the credit card number stored on your phone. That was always a bit of a hassle, especially when your credit card wasn’t close to you.
Now, you can use your phone’s biometric authentication to buy those new sneakers with just your fingerprint — no CVC needed. Or you can opt out, too, as you’re not required to enroll in this new system.
As for the password manager, the update here is the new touch-to-fill feature that shows you your saved accounts for a given site through a standard Android dialog. That’s something you’re probably used to from your desktop-based password manager already, but it’s definitely a major new built-in convenience feature for Chrome — and the more people opt to use password managers, the safer the web will be. This new feature is coming to Chrome on Android in the next few weeks, but Google says that this “is only the start.”
Image Credits: Google
 


Rapid Huawei rip-out could cause outages and security risks, warns UK telco

The chief executive of UK incumbent telco BT has warned any government move to require a rapid rip-out of Huawei kit from existing mobile infrastructure could cause network outages for mobile users and generate its own set of security risks.
Huawei has been the focus of concern for Western governments including the US and its allies because of the scale of its role in supplying international networks and next-gen 5G, and its close ties to the Chinese government — leading to fears that relying on its equipment could expose nations to cybersecurity threats and weaken national security.
The UK government is widely expected to announce a policy shift tomorrow, following reports earlier this year that it would reverse course on so-called “high risk” vendors and mandate a phase-out of such kit from 5G networks by 2023.
Speaking to BBC Radio 4’s Today program this morning, BT CEO Philip Jansen said he was not aware of the detail of any new government policy but warned too rapid a removal of Huawei equipment would carry its own risks.
“Security and safety in the short term could be put at risk. This is really critical — because if you’re not able to buy or transact with Huawei that would mean you wouldn’t be able to get software upgrades if you take it to that specificity,” he said.
“Over the next five years we’d expect 15-20 big software upgrades. If you don’t have those you’re running gaps in critical software that could have security implications far bigger than anything we’re talking about in terms of managing to a 35% cap in the access network of a mobile operator.”
“If we get a situation where things need to go very, very fast then you’re in a situation where potentially service for 24M BT Group mobile customers is put into question,” he added, warning that “outages would be possible”.
Back in January the government issued a much-delayed policy announcement setting out its approach to what it dubbed “high risk” 5G vendors — detailing a package of restrictions it said were intended to mitigate any risk, including capping their involvement at 35% of the access network. Such vendors would also be barred entirely from the sensitive “core” of 5G networks. However, the UK has faced continued international and domestic opposition to the compromise policy, including from within the governing party.
Wider geopolitical developments — such as additional US sanctions on Huawei and China’s approach to Hong Kong, a former British colony — appear to have worked to shift the political weather in Number 10 Downing Street against allowing even a limited role for Huawei.
Asked about the feasibility of BT removing all Huawei kit, not just equipment used for 5G, Jansen suggested the company would need at least a decade to do so.
“It’s all about timing and balance,” he told the BBC. “If you wanted to have no Huawei in the whole telecoms infrastructure across the whole of the UK I think that’s impossible to do in under ten years.”
If the government policy is limited to only removing such kit from 5G networks Jansen said “ideally” BT would want seven years to carry out the work — though he conceded it “could probably do it in five”.
“The current policy announced in January was to cap the use of Huawei or any high risk vendor to 35% in the access network. We’re working towards that 35% cap by 2023 — which I think we can make although it has implications in terms of roll out costs,” he went on. “If the government makes a policy decision which effectively heralds a change from that announced in January then we just need to understand the potential implications and consequences of that.
“Again we always — at BT and in discussions with GCHQ — we always take the approach that security is absolutely paramount. It’s the number one priority. But we need to make sure that any change of direction doesn’t lead to more risk in the short term. That’s where the detail really matters.”
Jansen fired a further warning shot at Johnson’s government, which has made a major push to accelerate the roll out of fiber wired broadband across the country as part of a pledge to “upgrade” the UK, saying too tight a timeline to remove Huawei kit would jeopardize this “build out for the future”. Instead, he urged that “common sense” prevail.
“There is huge opportunity for the economy, for the country and for all of us from 5G and from full fiber to the home and if you accelerate the rip out obviously you’re not building either so we’ve got to understand all those implications and try and steer a course and find the right balance to managing this complicated issue.
“It’s really important that we very carefully weigh up all the different considerations and find the right way through this — depending on what the policy is and what’s driving the policy. BT will obviously and is talking directly with all parts of government, [the National] Cyber Security Center, GCHQ, to make sure that everybody understands all the information and a sensible decision is made. I’m confident that in the end common sense will prevail and we will head down the right direction.”
Asked whether it agrees there are security risks attached to an accelerated removal of Huawei kit, the UK’s National Cyber Security Centre declined to comment. But a spokesperson for the NCSC pointed us to an earlier statement in which it said: “The security and resilience of our networks is of paramount importance. Following the US announcement of additional sanctions against Huawei, the NCSC is looking carefully at any impact they could have to the U.K.’s networks.”
We’ve also reached out to DCMS for comment. Update: A government spokesperson said: “We are considering the impact the US’s additional sanctions against Huawei could have on UK networks. It is an ongoing process and we will update further in due course.”


Signal now has built-in face blurring for photos

Apps like Signal are proving invaluable in these days of unrest, and anything we can do to simplify and secure the way we share sensitive information is welcome. To that end Signal has added the ability to blur faces in photos sent via the app, making it easy to protect someone’s identity without leaving any trace on other, less secure apps.
After noting Signal’s support of the protests occurring all over the world right now against police brutality, the company’s founder Moxie Marlinspike writes in a blog post that “We’ve also been working to figure out additional ways we can support everyone in the street right now. One immediate thing seems clear: 2020 is a pretty good year to cover your face.”
Fortunately there are perfectly good tools out there both to find faces in photographs and to blur imagery (presumably irreversibly, given Signal’s past attention to detail in these matters, but the company has not returned a request for comment). Put them together and boom, a new feature that lets you blur all the faces in a photo with a single tap.
Image Credits: Signal
This is helpful for the many users of Signal who use it to send sensitive information, including photos where someone might rather not be identifiable. Normally one would blur the face in another photo editor app, which is simple enough but not necessarily secure. Some editing apps, for instance, host computation-intensive processes on cloud infrastructure and may retain a copy of a photo being edited there — and who knows what their privacy or law enforcement policy may be?
If it’s sensitive at all, it’s better to keep everything on your phone and in apps you trust. And Signal is among the few apps trusted by the justifiably paranoid.
All face detection and blurring takes place on your phone, Marlinspike wrote. But he warned that the face detection isn’t 100% reliable, so be ready to manually draw or expand blur regions in case someone isn’t detected.
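The blurring step itself needs no special hardware and can run entirely on-device. As a rough sketch of the general technique (not Signal’s actual implementation, which the post doesn’t detail), pixelation replaces each block of pixels in a region with its average, which destroys facial detail irreversibly because the original values cannot be recovered from the averages:

```python
def pixelate_region(img, x0, y0, x1, y1, block=8):
    """Irreversibly pixelate the rectangle [x0, x1) x [y0, y1) of a
    grayscale image (a list of rows of ints), in place, by replacing
    every block x block cell with its average value."""
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            ys = range(by, min(by + block, y1))
            xs = range(bx, min(bx + block, x1))
            vals = [img[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    img[y][x] = avg
    return img

# Tiny 4x4 "image": pixelating the whole frame with 2x2 blocks collapses
# each quadrant to a single average value.
img = [[0, 10, 100, 110],
       [20, 30, 120, 130],
       [200, 210, 40, 50],
       [220, 230, 60, 70]]
pixelate_region(img, 0, 0, 4, 4, block=2)
for row in img:
    print(row)
```

In a full pipeline, a face detector supplies the bounding boxes and a routine like this (or a Gaussian blur) runs on each box; the manual brush described above covers any faces the detector misses.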
The new feature should appear in the latest versions of the app as soon as those are approved by Google and Apple.
Lastly Marlinspike wrote that the company is planning on “distributing versatile face coverings to the community free of charge.” The picture shows a neck gaiter like those sold for warmth and face protection. Something to look forward to then.


TikTok brings in outside experts to help it craft moderation and content policies

In October, TikTok tapped corporate law firm K&L Gates to advise the company on its moderation policies and other issues afflicting social media platforms. As part of those efforts, TikTok said it would form a new committee of experts to advise the business on topics like child safety, hate speech, misinformation, bullying and other potential problems. Today, TikTok is announcing the technology and safety experts who will be the company’s first committee members.
The committee, known as the TikTok Content Advisory Council, will be chaired by Dawn Nunziato, a professor at George Washington University Law School and co-director of the Global Internet Freedom Project. Nunziato specializes in free speech issues and content regulation — areas where TikTok has fallen short.
“A company willing to open its doors to outside experts to help shape upcoming policy shows organizational maturity and humility,” said Nunziato, of her joining. “I am working with TikTok because they’ve shown that they take content moderation seriously, are open to feedback and understand the importance of this area both for their community and for the future of healthy public discourse,” she added.
TikTok says it plans to grow the committee to around a dozen experts in time.
According to the company, other committee members include:
Rob Atkinson, Information Technology and Innovation Foundation, brings academic, private sector, and government experience as well as knowledge of technology policy that can advise our approach to innovation
Hany Farid, University of California, Berkeley Electrical Engineering & Computer Sciences and School of Information, is a renowned expert on digital image and video forensics, computer vision, deep fakes, and robust hashing
Mary Anne Franks, University of Miami Law School, focuses on the intersection of law and technology and will provide valuable insight into industry challenges including discrimination, safety, and online identity
Vicki Harrison, Stanford Psychiatry Center for Youth Mental Health and Wellbeing, is a social worker at the intersection of social media and mental health who understands child safety issues and holistic youth needs
Dawn Nunziato, chair, George Washington University Law School, is an internationally recognized expert in free speech and content regulation
David Ryan Polgar, All Tech Is Human, is a leading voice in tech ethics, digital citizenship, and navigating the complex challenge of aligning societal interests with technological priorities
Dan Schnur, USC Annenberg Center on Communication and UC Berkeley Institute of Governmental Studies, brings valuable experience and insight on political communications and voter information
Nunziato’s view of TikTok — of a company being open and willing to change — is a charitable one, it should be said.
The company is in dangerous territory here in the U.S., despite its popularity among Gen Z and millennial users. TikTok today is facing a national security review and a potential ban on all government workers’ phones. In addition, the Dept. of Defense suggested the app should be blocked on phones belonging to U.S. military personnel. Its 2017 acquisition of U.S.-based Musical.ly may even come under review.
Though known for its lighthearted content — like short videos of dances, comedy and various other creative endeavors — TikTok has also been accused of things like censoring the Hong Kong protests and more, which contributed to U.S. lawmakers’ fears that the Chinese-owned company may have to comply with “state intelligence work.” 
TikTok has also been accused of having censored content from unattractive, poor or disabled users, as well as videos from users identified as LGBTQ+. The company explained in December that these guidelines are no longer in use, calling them an early and misguided attempt to protect users from online bullying: TikTok had limited the reach of videos where such harassment could occur. But this suppression was done in the dark, unasked for by the “protected” parties — and it wasn’t until the German site NetzPolitik exposed the rules that anyone knew they had existed.
In light of the increased scrutiny of its platform and its ties to China, TikTok has been taking a number of steps in an attempt to change its perception. The company released new Community Guidelines and published its first Transparency Report a few months ago. It also hired a global General Counsel and expanded its Trust & Safety hubs in the U.S., Ireland and Singapore. And it just announced a Transparency Center open to outside experts who want to review its moderation practices.
TikTok’s new Advisory Council will meet with the company’s U.S. leadership starting at the end of the month, with an early focus on creating policies around misinformation and election interference.

“All of our actions, including the creation of this Council, help advance our focus on creating an entertaining, genuine experience for our community by staying true to why users uniquely love the TikTok platform. As our company grows, we are focused on reflection and learning as a part of company culture and committed to transparently sharing our progress with our users and stakeholders,” said TikTok’s U.S. general manager, Vanessa Pappas. “Our hope is that through thought-provoking conversations and candid feedback, we will find productive ways to support platform integrity, counter potential misuse, and protect the interests of all those who use our platform,” she added. 
