Security

GitHub survived the biggest DDoS attack ever recorded

Wired:

On Wednesday, at about 12:15 pm ET, 1.35 terabits per second of traffic hit the developer platform GitHub all at once. It was the most powerful distributed denial of service attack recorded to date—and it used an increasingly popular DDoS method, no botnet required.

GitHub briefly struggled with intermittent outages as a digital system assessed the situation. Within 10 minutes it had automatically called for help from its DDoS mitigation service, Akamai Prolexic. Prolexic took over as an intermediary, routing all the traffic coming into and out of GitHub, and sent the data through its scrubbing centers to weed out and block malicious packets. After eight minutes, attackers relented and the assault dropped off.

How the attack was pulled off:

Database caching systems [memcached servers] work to speed networks and websites, but they aren’t meant to be exposed on the public internet; anyone can query them, and they’ll likewise respond to anyone. About 100,000 memcached servers, mostly owned by businesses and other institutions, currently sit exposed online with no authentication protection, meaning an attacker can access them, and send them a special command packet that the server will respond to with a much larger reply.
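The leverage here is the point: the attacker sends a tiny spoofed UDP query and the exposed server answers the victim with a reply tens of thousands of times larger. A back-of-the-envelope sketch (the byte counts below are illustrative assumptions, not measurements from this attack):

```python
# Back-of-the-envelope memcached amplification math.
# These figures are illustrative; reported amplification factors
# for exposed memcached servers ran into the tens of thousands.
request_bytes = 15          # a tiny spoofed query over UDP
response_bytes = 750_000    # an exposed server can answer with hundreds of KB

amplification = response_bytes / request_bytes
print(f"Amplification factor: {amplification:,.0f}x")

# With that kind of leverage, the bandwidth the attacker needs to
# generate 1.35 Tbps of traffic aimed at the victim is comparatively tiny:
attack_tbps = 1.35
attacker_gbps = attack_tbps * 1e12 / amplification / 1e9
print(f"Attacker bandwidth needed: ~{attacker_gbps:.3f} Gbps")
```

Which is why no botnet is required: a single well-connected machine, spoofing the victim's IP address, can do the job.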

Interesting story. Leaves me wondering why the attackers relented. Did a human plan it to be this long? Was there some mechanism that measured the impact of the attack and stopped it when Prolexic stepped in? Was the time limit meant to avoid being traced?

Forbes: The feds can now (probably) unlock every iPhone model in existence

Forbes:

Cellebrite, a Petah Tikva, Israel-based vendor that’s become the U.S. government’s company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.

As the Forbes article points out, this prose is on the Cellebrite media datasheet:

Devices supported for Advanced Unlocking and Extraction Services include:

Apple iOS devices and operating systems, including iPhone, iPad, iPad mini, iPad Pro and iPod touch, running iOS 5 to iOS 11

Google Android devices, including Samsung Galaxy and Galaxy Note devices; and other popular devices from Alcatel, Google Nexus, HTC, Huawei, LG, Motorola, ZTE, and more.

If true, that Forbes headline seems a fair statement.

How iCloud protects your data

John Gruber dug into this article from the Hong Kong Free Press:

The US-based global tech giant Apple Inc. is set to hand over the operation of its iCloud data center in mainland China to a local corporation called Guizhou-Cloud Big Data (GCBD) by February 28, 2018.

You can read Gruber’s take here.

As a postscript, Gruber links to this Apple knowledge-base article, which lays out the encryption methods used to protect the various data types stored in iCloud.

I found all three of these definitely worth a look.

Bitcoin thieves threaten real life violence

Nathaniel Popper, New York Times:

In the beach resort of Phuket, Thailand, last month, the assailants pushed their victim, a young Russian man, into his apartment and kept him there, blindfolded, until he logged onto his computer and transferred about $100,000 worth of Bitcoin to an online wallet they controlled.

And:

A few weeks before that, the head of a Bitcoin exchange in Ukraine was taken hostage and only released after the company paid a ransom of $1 million in Bitcoin.

And:

In New York City, a man was held captive by a friend until he transferred over $1.8 million worth of Ether, a virtual currency second in value only to Bitcoin.

This has become a thing. Why? Because once the Bitcoins have been transferred, there’s no way to prove ownership, no way to get the Bitcoin back. When the victims call the police, the police shrug their shoulders. There’s simply nothing they can do.

Motherboard: Key iPhone source code gets posted online in ‘biggest leak in history’

Nope. Nope. Nope.

I hate headlines like this. Biggest leak in history? Come on.

Here’s where the reaction comes from:

Someone just posted what experts say is the source code for a core component of the iPhone’s operating system on GitHub, which could pave the way for hackers and security researchers to find vulnerabilities in iOS and make iPhone jailbreaks easier to achieve.

The GitHub code is labeled “iBoot,” which is the part of iOS that is responsible for ensuring a trusted boot of the operating system. In other words, it’s the program that loads iOS, the very first process that runs when you turn on your iPhone. It loads and verifies the kernel is properly signed by Apple and then executes it—it’s like the iPhone’s BIOS.

This is true. It’s also true that Apple filed a copyright takedown and GitHub removed the post. But that’s a side note. Important, but a side note.

Buried down in the Motherboard article is this nugget:

This source code first surfaced last year, posted by a Reddit user called “apple_internals” on the Jailbreak subreddit.

This has been known about for some time. It’s iOS 9 source code and, while it’s likely true that some of that source code remains in iOS 11, Apple has known about this for long enough that they’ve certainly made any necessary changes to limit their exposure. I’d suggest that this GitHub publication had more value to the original poster and to Motherboard than to anyone trying to hack the current version of iBoot.

And that said, I hope I’m right about this.

Apple health data is being used as evidence in a rape and murder investigation

Samantha Cole, Motherboard:

One of the most important witnesses to the rape and homicide of a 19-year-old-woman in Germany might be a stock app on the iPhone of her alleged murderer.

Hussein K., an Afghan refugee in Freiburg, has been on trial since September for allegedly raping and murdering a student in Freiburg, and disposing of her body in a river.

And:

He refused to give authorities the passcode to his iPhone, but investigators hired a Munich company (which one is not publicly known) to gain access to his device, according to German news outlet Welt. They searched through Apple’s Health app, which was added to all iPhones with the release of iOS 8 in 2014, and were able to gain more data about what he was doing that day. The app records how many steps he took and what kind of activity he was doing throughout that day.

The app recorded a portion of his activity as “climbing stairs,” which authorities were able to correlate with the time he would have dragged his victim down the river embankment, and then climbed back up. Freiburg police sent an investigator to the scene to replicate his movements, and sure enough, his Health app activity correlated with what was recorded on the defendant’s phone.

This is two stories. First and foremost, there’s the use of HealthKit data in a murder/rape trial. But underneath is the question of how the unnamed German firm was able to get into the phone.

Games on your phone (mostly Android, some iOS) that track what you watch on TV

Sapna Maheshwari, New York Times:

At first glance, the gaming apps — with names like “Pool 3D,” “Beer Pong: Trickshot” and “Real Bowling Strike 10 Pin” — seem innocuous. One called “Honey Quest” features Jumbo, an animated bear.

Yet these apps, once downloaded onto a smartphone, have the ability to keep tabs on the viewing habits of their users — some of whom may be children — even when the games aren’t being played.

Yesterday, we posted about a technique ad houses use to glean your identity using your browser’s password manager.

This is a similar data-farming trick, this time using your phone’s microphone to track your TV watching habits.

The apps use software from Alphonso, a start-up that collects TV-viewing data for advertisers. Using a smartphone’s microphone, Alphonso’s software can detail what people watch by identifying audio signals in TV ads and shows, sometimes even matching that information with the places people visit and the movies they see. The information can then be used to target ads more precisely and to try to analyze things like which ads prompted a person to go to a car dealership.

Most of this occurs in the Android universe, but some iOS games use Alphonso as well. I’m willing to bet that though the games ask permission to use the microphone, not one of them adds, “so we can eavesdrop and track your TV viewing habits”.

This is despicable. Apple should do something about this.

[Via DF]

UPDATE: Missed this nugget:

Mr. Chordia [Alphonso CEO] said that Alphonso has a deal with the music-listening app Shazam, which has microphone access on many phones. Alphonso is able to provide the snippets it picks up to Shazam, he said, which can use its own content-recognition technology to identify users and then sell that information to Alphonso.

Shazam, which Apple recently agreed to buy, declined to comment about Alphonso.

We’ve reached out to Apple for comment.

Browser password manager used to track you, even with tracking blocked

FreedomToTinker:

We show how third-party scripts exploit browsers’ built-in login managers (also called password managers) to retrieve and exfiltrate user identifiers without user awareness. To the best of our knowledge, our research is the first to show that login managers are being abused by third-party scripts for the purposes of web tracking.

To see this for yourself, fire up Safari and go to this demo page.

  • When the page loads, type in a fake email address and a fake password. Don’t use your real info.
  • Click the link at the bottom of the page.
  • Safari will offer to save your password for that site. Click Save.

The demo will then jump to a sniffer page which contains an invisible login form. Safari will helpfully populate the form, and this new demo page will display the sniffed results.

This approach is only possible when a third party has script access to the first-party domain. Thus, our third-party script is only able to recover the credentials you saved for this website (senglehardt.com). It is not possible for us to access credentials for other websites.

So far, your data is only visible to a script running on the same site where you saved your credentials. The real problem is with scripts that run on many sites:

We found two scripts using this technique to extract email addresses from login managers on the websites which embed them. These addresses are then hashed and sent to one or more third-party servers. These scripts were present on 1110 of the Alexa top 1 million sites. The process of detecting these scripts is described in our measurement methodology in the Appendix 1. We provide a brief analysis of each script in the sections below.

Bottom line, the scripts are saving hashed versions of surreptitiously harvested login info and comparing them to a saved database of other hashes. (Note that hashing is a one-way fingerprint, not encryption; it hides the raw address but still produces the same output for the same input.) If a script finds a match, it knows who you are.
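A minimal sketch of why hashing doesn’t stop the tracking (SHA-256 is an assumption here; the report found the scripts using various hash functions):

```python
import hashlib

def tracking_id(email: str) -> str:
    """Derive a stable cross-site identifier from a sniffed email address.

    Hashing hides the raw address, but the same email always produces
    the same digest, so the digest still works as a tracking key.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

# The same user sniffed on two unrelated sites yields the same identifier,
# which is all a third-party server needs to join the visits together.
id_on_site_a = tracking_id("alice@example.com")
id_on_site_b = tracking_id(" Alice@Example.com ")  # sloppier form entry
assert id_on_site_a == id_on_site_b
```

Unlike a cookie, an identifier derived this way survives clearing your browser data, which is exactly what makes it attractive to trackers.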

This is all a bit complicated, but my two cents: Apple should address this in some way to prevent this form of cross-site tracking.

xkcd: Phone security

Xkcd proposes some terrific options to set when your iPhone is stolen. Just a nibble:

If phone is stolen, do a fake factory reset. Then, in the background, automatically order food to phone’s location from every delivery place within 20 miles.

Just read it. I’m sure you can think of your own options. And if you’ve never read xkcd before, here’s a link to a completely random one.

Amazon wants a key to your house. I did it. I regretted it.

Geoffrey A. Fowler, Washington Post:

I gave Amazon.com a key to go into my house and drop off packages when I’m not around. After two weeks, it turns out letting strangers in has been the least-troubling part of the experience.

Once Amazon owned my door, I was the one locked into an all-Amazon world.

And:

Make no mistake, the $250 Amazon Key isn’t just about stopping thieves. It’s the most aggressive effort I’ve seen from a tech giant to connect your home to the Internet in a way that puts itself right at the center.

And:

The Key-compatible locks are made by Yale and Kwikset, yet don’t work with those brands’ own apps. They also can’t connect with a home-security system or smart-home gadgets that work with Apple and Google software.

And, of course, the lock can’t be accessed by businesses other than Amazon. No Walmart, no UPS, no local dog-walking company.

And:

Amazon is barely hiding its goal: It wants to be the operating system for your home.

First things first, note that this article appeared in The Washington Post. The Post is owned by Jeff Bezos. Which tells me that Bezos truly is allowing the Post to be the Post, and that the Post is not afraid to bite the hand that feeds.

That said, the issue here is the walled garden. Once Amazon controls the lock on your door, they control who has access to that lock, keeping out eventual home delivery by rivals like Walmart, and keeping rivals like Apple, via HomeKit, from offering door-unlocking services.

Very interesting.

HomeKit vulnerability allowed remote access to locks and more, fix rolling out

Zac Hall, 9to5Mac:

A HomeKit vulnerability in the current version of iOS 11.2 has been demonstrated to 9to5Mac that allows unauthorized control of accessories including smart locks and garage door openers. Our understanding is Apple has rolled out a server-side fix that now prevent unauthorized access from occurring while limiting some functionality, and an update to iOS 11.2 coming next week will restore that full functionality.

And this from Apple:

“The issue affecting HomeKit users running iOS 11.2 has been fixed. The fix temporarily disables remote access to shared users, which will be restored in a software update early next week.”

Props to Zac Hall for the scoop and the way he handled the whole issue.

Uber paid 20-year old $100,000 to destroy stolen customer data

Reuters:

Uber announced on Nov. 21 that the personal data of 57 million users, including 600,000 drivers in the United States, were stolen in a breach that occurred in October 2016, and that it paid the hacker $100,000 to destroy the information. But the company did not reveal any information about the hacker or how it paid him the money.

Uber made the payment last year through a program designed to reward security researchers who report flaws in a company’s software, these people said. Uber’s bug bounty service – as such a program is known in the industry – is hosted by a company called HackerOne, which offers its platform to a number of tech companies.

Crazy. Just crazy.

Update to High Sierra now live, official comment from Apple

An update to High Sierra has now gone live. It addresses the root password issue we first mentioned in this post.

“Security is a top priority for every Apple product, and regrettably we stumbled with this release of macOS”, said an Apple spokesperson in a statement to The Loop.

“When our security engineers became aware of the issue Tuesday afternoon, we immediately began working on an update that closes the security hole. This morning, as of 8 a.m., the update is available for download, and starting later today it will be automatically installed on all systems running the latest version (10.13.1) of macOS High Sierra.

We greatly regret this error and we apologize to all Mac users, both for releasing with this vulnerability and for the concern it has caused. Our customers deserve better. We are auditing our development processes to help prevent this from happening again.”

The download is now available via the Mac App Store.

Facebook’s new CAPTCHA: “Please upload a photo of yourself that clearly shows your face”

Nitasha Tiku, Wired:

Facebook may soon ask you to “upload a photo of yourself that clearly shows your face,” to prove you’re not a bot.

The company is using a new kind of captcha to verify whether a user is a real person. According to a screenshot of the identity test shared on Twitter on Tuesday and verified by Facebook, the prompt says: “Please upload a photo of yourself that clearly shows your face. We’ll check it and then permanently delete it from our servers.”

And:

In a statement to WIRED, a Facebook spokesperson said the photo test is intended to “help us catch suspicious activity at various points of interaction on the site, including creating an account, sending Friend requests, setting up ads payments, and creating or editing ads.”

This is somewhat reminiscent of Face ID, though presumably without the machine learning aspect, with zero 3D information (it’s a picture, after all) and, also presumably, with a much slower reaction time.

My two cents: I find it interesting that we have such a splintered approach to security. We’ve got security cams, passwords, fingerprints, iris scanning, and 3D facial mapping, all implemented with varying degrees of success by a wide variety of vendors.

Over time, there will be pressure for standards to emerge that allow for constant verification, with the obvious dystopian potential that goes along with constant surveillance. The tension is between the need to verify that you are you — to validate a transaction, to protect you from hackers and the like — and the desire to track you and mine your habits.

With each new security scheme you sign up for or opt into, it’s important to know exactly where that data goes and what it will ultimately be used for.

Side note, here’s the Wikipedia page for CAPTCHA. Interesting acronym.

Security hole in macOS High Sierra lets anyone gain root access to a logged in machine

There’s a security hole in macOS High Sierra and we’ve verified the issue.

First reported in this tweet:

https://twitter.com/lemiorhan/status/935578694541770752

Here’s how to reproduce it:

  • Log in to your Mac, as you normally would
  • Now launch System Preferences
  • Click the Users & Groups pane
  • Click the lock to make changes but do NOT enter your normal credentials
  • Instead, change the user name to root, leave the password field blank, but click in the password field (does not appear to work if you don’t click in the password field) and click Unlock
  • If you don’t get in, change the user name to root, leave password field blank (but click in it), click Unlock again

Eventually, you will get a second Unlock dialog. Repeat this procedure with root and empty password field. This time, when you click Unlock, the admin lock will unlock and you are in.

Note that this does require physical access to a machine that is already logged in. I have verified this on my machine and it does work.

While this is an issue, it would be far worse if the technique allowed you to log in to a machine (a stolen one, say), as opposed to gaining root access on a machine whose user is already logged in. Not nothing, but the sky is not falling.

We’ve reached out to Apple and will update this post the moment we hear back.

UPDATE: This just got a bit worse. This same technique will enable you to log in to any Mac whose login options are set to “Display login window as Name and password” instead of “Display login window as List of users”.

While you wait for Apple to respond, we suggest you do this:

  • Go to System Preferences / Users & Groups
  • Click the lock, login as your admin user
  • Click Login Options (bottom left)
  • Click List of users instead of Name and password

You can also follow up by entering a root password or, as others have suggested, disabling the root user. My suggestion would be to wait until Apple responds, then follow their suggested advice.

How criminals clear your stolen iPhone for resale

Charlie Osborne, writing for ZDNet, lays out one particular attack chain criminals use to clear your stolen iPhone so they can resell it.

Fascinating, and worth reading, just so you know what might be coming if someone ever gets their hands on your iOS device.

You probably don’t need to worry about someone hacking your iPhone X’s Face ID with a mask

Taylor Hatmaker, TechCrunch:

Touted as the iPhone X’s new flagship form of device security, Face ID is a natural target for hackers. Just a week after the device’s release, Vietnamese research team Bkav claims to have cracked Apple’s facial recognition system using a replica face mask that combines printed 2D images with three-dimensional features. The group has published a video demonstrating its proof of concept, but enough questions remain that no one really knows how legitimate this purported hack is.

I believe the term should be spoofed, not hacked. The video in the post shows Bkav using a homemade mask trying to spoof a person’s face registered using Face ID. Hacking would be breaking in and stealing credentials, or installing a back door, that sort of thing.

That said, something doesn’t sit right looking at that video. When I first saw it, my instinctive reaction was that it was fake. But even if the mask was successful in spoofing the user’s face, I just don’t see this as an issue.

More from Taylor’s post:

If you’re concerned that someone might want into your devices badly enough that they’d execute such an involved plan to steal your facial biometrics, well, you’ve probably got a lot of other things to worry about as well.

And:

Prior to the Bkav video, Wired worked with Cloudflare to see if Face ID could be hacked through masks that appear far more sophisticated than the ones the Bkav hack depicts. Remarkably, in spite of their fairly elaborate efforts — including “details like eyeholes designed to allow real eye movement” and “thousands of eyebrow hairs inserted into the mask intended to look more like real hair” — Wired and Cloudflare didn’t succeed.

If Bkav has the goods, I suspect we’ll hear more from them, perhaps a follow-on post with a more clearly defined demonstration. Or, perhaps, we’ll hear from Apple about some patch they made to Face ID in response to Bkav’s work. As is, color me skeptical.

Face ID on the Mac

Thoughts on the idea of Apple adding facial mapping and Face ID to your Mac. […]

ACLU raises privacy concerns over app developer access to facial expressions on iPhone X

Ben Lovejoy, 9to5Mac:

The American Civil Liberties Union (ACLU) has raised privacy concerns about developer access to the facial expressions of iPhone X users. In particular, they say that Apple allows developers to capture facial expression data and store it on their own servers.

When the iPhone X was launched, Apple was careful to stress that the 3D face recognition model used by Face ID was stored only on the phone itself. The data is never transferred to Apple servers. But the ACLU says that app developers are allowed to transmit and store some face data.

Interesting article. Lots of layers to this issue. There’s face tracking (think Animoji) and attention detection (are you actually watching your screen). How much of this data is hidden behind an API? In other words, does Apple simply tell a developer whether you are paying attention to the screen, or do they give you more specific data, like the current screen location on which you are currently focused?

This is a good read. And keep an eye out for more detail in the Rene Ritchie/iMore iPhone X review I’ll be posting a bit later this morning.

FaceID is brilliant because it’s subtraction instead of addition

Daniel Miessler:

Imagine a similar handheld device from a superior alien race. Assuming they needed such an interface or display at all, they would simply handle their device normally and it would still allow them to perform sensitive actions.

To an unfamiliar observer it might seem like no authentication took place, like one could just pick up any device and start taking sensitive actions on their behalf. But in reality all of that functionality had just been removed from the workflow and done automatically. It’s security made invisible and effortless.

That’s what FaceID is, and why it represents such an improvement: it adds security while removing friction.

I like the analogy here. Touch ID focuses authentication on a physical act on a physical mechanism on the phone. Face ID is invisible.

Forget the iPhone X, Apple’s best product is something you can’t buy

John Patrick Pullen, Time:

There’s this photo of my kids in the bath that, well, I’d rather not tell you about. I mean, it’s incredibly cute and I’d love to show it to you, but I’m also a private person, so it wouldn’t be right to go into details. But I will say this: though it’s one of my favorite possessions, this picture doesn’t physically exist.

And:

As precious as this image is, I don’t have it stored on a flash drive attached to my keychain, or in some other ultra-safe place. Instead, it’s housed on a server in some unknown probably dank and sunless location. That’s no casual decision. I’ve put considerable time and thought into how I store my photos in general, as well as how I back up my information overall. Despite all the bottomless storage features offered by tech giants like Google and Amazon, I default to keeping my most valuable data with Apple. Why I chose this matters, so let’s talk about it.

Spot on. Apple’s commitment to privacy is a critical discriminator. Not only for the reasons spelled out in this Time article, but as a foundation for protecting things like information flow between doctors and patients. Good read.

The iOS privacy loophole

Felix Krause:

Once you grant an app access to your camera, it can:

  • access both the front and the back camera
  • record you at any time the app is in the foreground
  • take pictures and videos without telling you
  • upload the pictures/videos it takes immediately
  • run real-time face recognition to detect facial features or expressions

Have you ever used a social media app while using the bathroom?

All without indicating that your phone is recording you and your surrounding, no LEDs, no light or any other kind of indication.

The point is that when you grant an app access to your camera, you grant complete access. There is no granularity, no access limitation for a single task.

Is this paranoia? Perhaps. But seems like this is worth some thought.

Selling your MacBook Pro with Touch Bar? Apple recommends this step

Zac Hall, 9to5Mac:

If you’re selling (or generously handing down) your MacBook Pro with Touch Bar, Apple recommends an extra step when erasing your data before parting ways with your machine. This step requires an obscure Terminal command that you wouldn’t assume and isn’t required on Macs without the Touch Bar.

Here’s the Apple Support document titled What to do before you sell or give away your Mac.

Check out step 6, “If you have a MacBook Pro with Touch Bar, clear its data”.

This raises the question: what specifically is stored in the Touch Bar that requires cleaning? Good to know that this step is necessary, but a bit of a mystery. Anyone know the specifics? Please do ping me.

Nice find, Zac.

UPDATE: And the answer is, this command removes your Touch ID data from your Mac, as demonstrated by Stephen Hackett, written up in this 512 Pixels post.

Why we can’t have nice things: WiFi is now broken

Dan Goodin, Ars Technica:

An air of unease set into the security circles on Sunday as they prepared for the disclosure of high-severity vulnerabilities in the Wi-Fi Protected Access II protocol that make it possible for attackers to eavesdrop Wi-Fi traffic passing between computers and access points.

The proof-of-concept exploit is called KRACK, short for Key Reinstallation Attacks. The research has been a closely guarded secret for weeks ahead of a coordinated disclosure that’s scheduled for 8 a.m. Monday, east coast time.

That reveal is scheduled for a few minutes from now. This is real “sky is falling” news, impacting the majority of Wi-Fi users — basically, anyone who uses WPA2 to protect their connections.

More from the article:

The vast majority of existing access points aren’t likely to be patched quickly, and some may not be patched at all. If initial reports are accurate that encryption bypass exploits are easy and reliable in the WPA2 protocol, it’s likely attackers will be able to eavesdrop on nearby Wi-Fi traffic as it passes between computers and access points. It might also mean it’s possible to forge Dynamic Host Configuration Protocol settings, opening the door to hacks involving users’ domain name service.

Take a few minutes to read this announcement page, which lays out all the detail on the attack. At the very least, scroll down to the Q&A section a bit more than halfway down the page.

The bad news is, this impacts pretty much everyone using WPA1 and WPA2 and you can’t fix this by, say, changing your password.
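The reason a password change doesn’t help: KRACK tricks a device into reinstalling a key it has already used, which resets the packet nonce, and nonce reuse with a stream-style cipher is fatal regardless of how strong the key is. A toy sketch of that failure (a simplification for illustration; WPA2 actually uses AES in CCMP mode, not a raw keystream like this):

```python
# Toy demonstration of why nonce reuse breaks confidentiality:
# if two packets are encrypted with the same keystream, XORing the two
# ciphertexts cancels the keystream entirely, handing the attacker the
# XOR of the two plaintexts -- no key recovery required.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)              # stands in for the cipher's output
p1 = b"GET /login HTTP/1.1 user=alice"
p2 = b"GET /login HTTP/1.1 user=mallo"

c1 = xor(p1, keystream)                 # packet 1, nonce N
c2 = xor(p2, keystream)                 # packet 2, same nonce N (the bug)

# The attacker never learns the keystream, and yet:
assert xor(c1, c2) == xor(p1, p2)
# Known plaintext in one packet now reveals the other directly:
assert xor(xor(c1, c2), p1) == p2
```

This is why the fix is to ensure a key is only ever installed once, rather than to rotate passwords.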

The good news:

Implementations can be patched in a backwards-compatible manner. This means a patched client can still communicate with an unpatched access point, and vice versa. In other words, a patched client or access points sends exactly the same handshake messages as before, and at exactly the same moments in time. However, the security updates will assure a key is only installed once, preventing our attacks. So again, update all your devices once security updates are available.

A nightmare, but not a total unfixable nightmare. But things are going to be sketchy for some time. Check for HTTPS on your URLs. If you are using HTTP, assume someone can read every part of your communication.

Researchers: Uber’s iOS app had secret permissions that allowed it to record your iPhone screen

Kate Conger, Gizmodo:

To improve functionality between Uber’s app and the Apple Watch, Apple allowed Uber to use a powerful tool that could record a user’s iPhone screen, even if Uber’s app was only running in the background, security researchers told Gizmodo. After the researchers discovered the tool, Uber said it is no longer in use and will be removed from the app.

My head is spinning. How was this allowed to happen in the first place, and why didn’t Apple monitor the tool and track and enforce its removal?

More:

The entitlement isn’t common and would require Apple’s explicit permission to use, the researchers explained. Will Strafach, a security researcher and CEO of Sudo Security Group, said he couldn’t find any other apps with the entitlement live on the App Store.

I’d love an official comment from Apple on this. Was this a one-time thing? Is this common practice?