Security

EFF: iOS 11’s misleading “off-ish” Bluetooth, Wi-Fi setting bad for user security

Electronic Frontier Foundation blog:

Turning off your Bluetooth and Wi-Fi radios when you’re not using them is good security practice (not to mention good for your battery usage). When you consider Bluetooth’s known vulnerabilities, it’s especially important to make sure your Bluetooth and Wi-Fi settings are doing what you want them to. The iPhone’s newest operating system, however, makes it harder for users to control these settings.

We’ve discussed the Control Center controls and icons in this Loop post.

In a nutshell, when you tap the Wi-Fi or Bluetooth icons in Control Center, you drop or restore the current connection without turning off the respective radio. And that's the EFF's complaint.

Instead, what actually happens in iOS 11 when you toggle your quick settings to “off” is that the phone will disconnect from Wi-Fi networks and some devices, but remain on for Apple services. Location Services is still enabled, Apple devices (like Apple Watch and Pencil) stay connected, and services such as Handoff and Instant Hotspot stay on.

All true.

Apple’s UI fails to even attempt to communicate these exceptions to its users.

A small point, but I disagree with this. Once you see the difference between the off icon state and the disconnected icon state, it's clear what's going on. There's also helper text, like "Disconnected from XXX", where XXX is your Wi-Fi network name.

The more important issue:

It gets even worse. When you toggle these settings in the Control Center to what is best described as "off-ish," they don't stay that way. The Wi-Fi will turn back full-on if you drive or walk to a new location. And both Wi-Fi and Bluetooth will turn back on at 5:00 AM. This is not clearly explained to users, nor left to them to choose, which makes security-aware users vulnerable as well.

The only way to turn off the Wi-Fi and Bluetooth radios is to enable Airplane Mode or navigate into Settings and go to the Wi-Fi and Bluetooth sections.

My two cents? Make the controls default to the safest possible behavior, then expose settings that allow me to go to a more relaxed, less secure state for a specific benefit (battery savings, better communications, etc.)

UPDATE: As pointed out by my unrelated name-sharer and Loop reader Jason Mark, Airplane Mode does not turn off the Wi-Fi or Bluetooth radios, contrary to the EFF's claim. An easy mistake, but worth clarifying. Give this a try on your iOS 11 device.

Apple releases Face ID security guide

A few bits from Apple’s Face ID Security white paper:

When Face ID detects and matches your face, iPhone X unlocks without asking for the device passcode. Face ID makes using a longer, more complex passcode far more practical because you don’t need to enter it as frequently.

If Face ID were able to eliminate the passcode completely, users could use long, impossible-to-memorize strings, just as they would with strong passwords combined with a password manager. But the fact that you have to memorize the passcode (you won't have to use it much, but you'll still encounter situations where you'll need it) limits the complexity. Not a complaint, just an observation.

Here’s when a passcode is still required:

You can always use your passcode instead of Face ID, and it's still required under the following circumstances:
  • The device has just been turned on or restarted.
  • The device hasn’t been unlocked for more than 48 hours.
  • The passcode hasn’t been used to unlock the device in the last 156 hours (six and a half days) and Face ID has not unlocked the device in the last 4 hours.
  • The device has received a remote lock command.
  • After five unsuccessful attempts to match a face.
  • After initiating power off/Emergency SOS by pressing and holding either volume button and the side button simultaneously for 2 seconds.
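The timing rules above can be sketched as a simple predicate. This is purely an illustrative model of what the white paper describes — the field names and structure are my own invention, not Apple's implementation:

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    # Hypothetical snapshot of unlock-related state; field names are invented.
    just_restarted: bool
    hours_since_last_unlock: float
    hours_since_passcode_unlock: float
    hours_since_face_id_unlock: float
    failed_face_matches: int
    remote_lock_received: bool
    power_off_or_sos_initiated: bool

def passcode_required(s: DeviceState) -> bool:
    """True if any of Apple's documented conditions forces the passcode."""
    return (
        s.just_restarted
        or s.hours_since_last_unlock > 48
        or (s.hours_since_passcode_unlock > 156      # 6.5 days without the passcode...
            and s.hours_since_face_id_unlock > 4)    # ...AND 4 hours without Face ID
        or s.remote_lock_received
        or s.failed_face_matches >= 5
        or s.power_off_or_sos_initiated
    )
```

Note that the 156-hour rule only bites when Face ID has also been idle for 4 hours; the two conditions are combined, not independent.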

And:

The TrueDepth camera automatically looks for your face when you wake iPhone X by raising it or tapping the screen, as well as when iPhone X attempts to authenticate you to display an incoming notification or when a supported app requests Face ID authentication. When a face is detected, Face ID confirms attention and intent to unlock by detecting that your eyes are open and directed at your device; for accessibility, this is disabled when VoiceOver is activated or can be disabled separately, if required.

This is what’s encrypted and saved in the iPhone X Secure Enclave:

  • The infrared images of your face captured during enrollment.
  • The mathematical representations of your face calculated during enrollment.
  • The mathematical representations of your face calculated during some unlock attempts if Face ID deems them useful to augment future matching.

There’s a lot more in the white paper, including some detail on Apple Pay and third-party access to Face ID.

High Sierra automatically checks firmware integrity each week

The Eclectic Light Company:

Upgrading to High Sierra brings a new and significant security feature: your Mac will automatically check its EFI firmware. In a series of tweets, Xeno Kovah, one of the three engineers responsible for the new tool, has outlined how this works.

The new utility eficheck, located in /usr/libexec/firmwarecheckers/eficheck, runs automatically once a week. It checks the Mac’s firmware against Apple’s database of what is known to be good. If it passes, you will see nothing of this, but if there are discrepancies, you will be invited to send a report to Apple.

And:

eficheck depends on a small local library of ‘known good’ data, which will be automatically and silently updated if you have security updates turned on in the App Store pane.

That checkbox is in the App Store pane in System Preferences and should be checked by default.

macOS High Sierra keychain vulnerability should not stop you from updating

Juli Clover, MacRumors:

macOS High Sierra, released to the public today, could be impacted by a major security flaw that could allow a hacker to steal the usernames and passwords of accounts stored in Keychain.

Here’s the tweet that brought this to light:

https://twitter.com/patrickwardle/status/912254053849079808

The timing of this reveal is terrible, as it coincides with the release of macOS High Sierra. I know a number of people who have held off updating for just this reason.

Don’t let this story stop you from updating:

  1. This exploit is said to affect earlier versions of macOS as well. If you are on Sierra and considering updating, you are already as vulnerable as you would be if you updated.

  2. Apple is said to be working on a fix, and Patrick Wardle has said he will not release details of the exploit until the patch is available.

Add to that:

For this vulnerability to work, a user needs to download malicious third-party code from an unknown source, something Apple actively discourages with warnings about apps downloaded outside of the Mac App Store or from non-trusted developers.

To be clear, do your research and make a full backup before you update. I’ve done my homework and, once I finish this morning’s Loop posts, will hit the return key and start my update. I will definitely update on Twitter as I go. Hopefully, the update will be trouble-free. Fingers are crossed.

Hackers use Find My iPhone to remotely lock Macs, demand ransom

Juli Clover, MacRumors:

Over the last day or two, several Mac users appear to have been locked out of their machines after hackers signed into their iCloud accounts and initiated a remote lock using Find My iPhone.

With access to an iCloud user’s username and password, Find My iPhone on iCloud.com can be used to “lock” a Mac with a passcode even with two-factor authentication turned on, and that’s what’s going on here.

This does appear to be a genuine hole in Apple’s security scheme, though iCloud itself was not hacked.

Seems like this is fixable. From the comments:

When you go to remote lock a device you enter a lock passcode and the device’s password or passcode. When that is sent to the Mac, iPhone, whatever, if the device password doesn’t match, it won’t lock the device. That way, even if a hacker guesses your Apple ID and password using hacked credentials, they still can’t lock the device without the Mac’s login.

Not sure if this is doable, since your Mac’s password is not stored in the cloud, but maybe the entered password could be encrypted, sent to the Mac, and the Mac could decrypt and compare.
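One common pattern that sidesteps the "password in the cloud" problem entirely is a challenge-response proof: the requester proves knowledge of the device password without ever sending it in recoverable form. A minimal sketch of the idea, with invented function names (a real implementation would HMAC a stored verifier rather than the raw password):

```python
import hashlib
import hmac
import os

def make_lock_proof(entered_password: str, challenge: bytes) -> bytes:
    # Requester side: bind the entered password to a fresh random challenge.
    # Only this digest travels over the network, never the password itself.
    return hmac.new(challenge, entered_password.encode(), hashlib.sha256).digest()

def verify_lock_proof(proof: bytes, challenge: bytes, local_password: str) -> bool:
    # Device side: recompute the expected proof from the locally stored
    # credential and compare in constant time; only lock if it matches.
    expected = hmac.new(challenge, local_password.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)

# The device issues a one-time challenge, so a captured proof can't be replayed.
challenge = os.urandom(16)
print(verify_lock_proof(make_lock_proof("hunter2", challenge), challenge, "hunter2"))  # True
print(verify_lock_proof(make_lock_proof("wrong", challenge), challenge, "hunter2"))    # False
```

With something like this, hacked iCloud credentials alone would not be enough to lock the Mac.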

Apple’s Craig Federighi answers some burning questions about Face ID

Matthew Panzarino, TechCrunch:

Face ID is easily the most hot-button topic to come out of Apple’s iPhone event this week, notch be damned. As people have parsed just how serious Apple is about it, questions have rightly begun to be raised about its effectiveness, security and creation.

To get some answers, I hopped on the phone with Apple’s SVP of Software Engineering, Craig Federighi. We went through a bunch of the common concerns in rapid-fire fashion, and I’ve also been asking around and listening to Apple folks who have been using the feature over long periods. Hopefully we can clear up some of the FUD about it.

And:

“Phil mentioned that we’d gathered a billion images and that we’d done data gathering around the globe to make sure that we had broad geographic and ethnic data sets. Both for testing and validation for great recognition rates,” says Federighi. “That wasn’t just something you could go pull off the internet.”

Especially given that the data needed to include a high-fidelity depth map of facial data. So, says Federighi, Apple went out and got consent from subjects to provide scans that were “quite exhaustive.” Those scans were taken from many angles and contain a lot of detail that was then used to train the Face ID system.

Imagine the process of deciding on a representative group of faces. A daunting problem.

“We do not gather customer data when you enroll in Face ID, it stays on your device, we do not send it to the cloud for training data,” he notes.

And these tidbits on when Face ID falls back to requiring a passcode:

  • If you haven’t used Face ID in 48 hours, or if you’ve just rebooted, it will ask for a passcode.
  • If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode — it tried to read the people setting the phones up on the podium.)
  • Developers do not have access to raw sensor data from the Face ID array. Instead, they’re given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.
  • You’ll also get a passcode request if you haven’t unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn’t unlocked it in 4 hours.

Great questions. Nice job, Matthew.

It’s about to get tougher for cops, border agents to get at your iPhone’s data

Cyrus Farivar, Ars Technica:

According to security experts who have reviewed early developer versions of the forthcoming iOS 11, law enforcement will soon have a harder time conducting digital forensic searches of iPhones and iPads.

And:

Prior to this latest version of the firmware, in order for an iOS device to be “trusted” by a computer that it was physically connected to, that device had to be unlocked first via Touch ID or passcode. Next, the device would prompt the user: “Trust This Computer?” Only then could the entire device’s data be extracted and imaged. Under iOS 11, this sequence has changed to also specifically require the passcode on the device after the “Trust This Computer?” prompt.

While the change may seem minor, the fact that the passcode will be specifically required as the final step before any data can be pulled off the phone means that law enforcement and border agents won’t have as much routine access to fully image a seized device.

Subtle change, interesting.

[H/T, The surreptitiously supercilious Not Jony Ive]

How a Twitter hack taught me to take online security seriously

Michael Steeber, 9to5Mac:

When I went to sleep last Monday night, I had no idea that I’d open my eyes to dozens of confusing notifications and my Twitter account taken over by a security hacker group. It caught me completely off guard, but it didn’t have to be that way.

Hopefully by relaying my story and some hard lessons I learned along the way, I can help you avoid the same situation as you manage the safety and security of your online accounts and data.

Great read. Some important lessons learned.

Official Equifax statement on massive hack, execs sell off stock before announcement

  1. Here’s the official Equifax post about the massive hack.

  2. Here’s the official site Equifax set up to see if your information was exposed. Beware of other sites masquerading as the real deal. I do not understand why they didn’t go with a subdomain, such as haveIbeenHacked.equifax.com or some such. More to the point, I don’t understand why you have to enroll in their service to see if you are affected, even if it is free.

  3. Three Equifax execs sold almost $2 million of stock after the breach, but before the announcement. Even assuming they were not aware of the breach when they sold their stock, they will still benefit from a situation of their own making.

Not crazy about the way this is playing out.

UPDATE: According to this tweet, if you sign up with Equifax to check to see if your information was compromised, you waive your rights to sue Equifax or to be part of a class action suit. Can this be correct? [H/T @varunorcv]

Hacking Siri

[VIDEO] FastCoDesign:

Chinese researchers have discovered a terrifying vulnerability in voice assistants from Apple, Google, Amazon, Microsoft, Samsung, and Huawei. It affects every iPhone and MacBook running Siri, any Galaxy phone, any PC running Windows 10, and even Amazon’s Alexa assistant.

Using a technique called the DolphinAttack, a team from Zhejiang University translated typical vocal commands into ultrasonic frequencies that are too high for the human ear to hear, but perfectly decipherable by the microphones and software powering our always-on voice assistants. This relatively simple translation process lets them take control of gadgets with just a few words uttered in frequencies none of us can hear.

First things first, this is not terrifying. But it is interesting.

You can watch a demo in the video embedded in the main Loop post. Not sure there’s a software fix to prevent this. Seems to me the audio-in processor would have to have access to the frequency content of the incoming audio, then filter out anything outside a specified audible range.
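To make the filtering idea concrete, here's a toy sketch (my own illustration, not how any shipping assistant actually processes audio): a one-pole low-pass filter with a voice-band cutoff passes an audible tone nearly untouched while sharply attenuating an ultrasonic one.

```python
import math

def lowpass(samples, fs, cutoff_hz):
    """One-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    a = dt / (rc + dt)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

fs = 48_000  # sample rate (Hz)
n = 4_800    # 0.1 s of audio
voice = [math.sin(2 * math.pi * 1_000 * i / fs) for i in range(n)]        # audible 1 kHz tone
ultrasonic = [math.sin(2 * math.pi * 21_000 * i / fs) for i in range(n)]  # inaudible 21 kHz tone

# Filtering at ~4 kHz keeps most of the voice-band energy...
print(rms(lowpass(voice, fs, 4_000)) / rms(voice))             # ≈ 0.96
# ...while knocking down the ultrasonic carrier.
print(rms(lowpass(ultrasonic, fs, 4_000)) / rms(ultrasonic))   # ≈ 0.21
```

A real fix would need a much steeper filter (a single pole only rolls off at 6 dB per octave), but the principle is the same: discard energy above the audible band before the recognizer ever sees it.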

Not sure this threat, which seems relatively minor, is worth the effort.

Also, DolphinAttack, cool name.

Here’s an actual Samsung Galaxy Note 8 facial recognition test

To give you a sense of the Samsung Galaxy Note 8 facial recognition, here’s a video from Mel Tajon showing it in action.

He takes a selfie on one phone, then points his Galaxy Note 8 at the selfie. Note that he doesn’t even need to frame the selfie particularly well and the Note 8 unlocks.

Nope.

UPDATE: Folks are saying this is the Note 8 in demo mode. Would love a verified source on this, but posting this here to give Samsung the benefit of the doubt. That said, take a read of the New York Times review, in which the Note 8 doesn’t fare much better.

[Click through to the main Loop for the tweet/video.]

CNBC: Thousands of ‘innocent’ Android apps watch videos and view ads behind your back

CNBC:

That cute cat wallpaper for your Android phone or free photo-editing software app you downloaded may be using your phone without your permission and running up fraudulent ad views, according to a recent report from online marketing firm eZanga.

EZanga used its Anura ad fraud protection software to look at one module from a software development kit (otherwise known as an SDK) that hides in apps, then activates to run advertisements and play videos while the user is not on their phone. While the person may be sleeping, the malware chews up bandwidth and battery life.

And:

A Google spokesperson said all apps submitted to Google Play are automatically scanned for potentially malicious code and spammy developer accounts before they are published. Google said it also recently introduced a proactive app review process, as well as Google Play Protect, which scans Android devices to let users know if they are downloading a malicious app. There is also Verify Apps, which warns about or blocks potentially harmful apps.

And:

Google Play did remove all the apps eZanga named in the study within a few weeks, Kahn said. However, when they looked after the study in early August for the same SDK module, they found 6,000 more apps online (not necessarily in the Google Play store) that contained a morphed version of the malware.

Sounds like there’s a hole in the review process. This is the number one thing that keeps me from buying an Android device.

Why your face will soon be the key to all your devices

Wall Street Journal:

Forget fiddling with passwords or even fingerprints; forget multiple layers of sign-in; forget credit cards and, eventually, even physical keys to our homes and cars. A handful of laptops and mobile devices can now read facial features, and the technique is about to get a boost from specialized hardware small enough to fit into our phones.

Using our faces to unlock things could soon become routine, rather than the purview of spies and superheroes.

And:

Depth-sensing technology, generally called “structured light,” sprays thousands of tiny infrared dots across a person’s face or any other target.

By reading distortions in this field of dots, the camera gathers superaccurate depth information. Since the phone’s camera can see infrared but humans can’t, such a system could allow the phone to unlock in complete darkness.
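The geometry behind this is ordinary triangulation, the same math stereo cameras use. A sketch with made-up numbers (the focal length, baseline, and disparity values are all hypothetical, not from any real sensor):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a projected dot from its apparent shift ('distortion').

    A dot that lands `disparity_px` pixels away from where a flat reference
    plane would put it sits at depth Z = f * B / d, where f is the focal
    length in pixels and B is the projector-to-camera baseline in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 1400 px focal length, 25 mm baseline.
# A dot displaced by 35 px reads as 1 meter away; by 70 px, half a meter.
print(depth_from_disparity(1400, 0.025, 35))  # 1.0
print(depth_from_disparity(1400, 0.025, 70))  # 0.5
```

Repeat that for thousands of dots and you get a dense depth map of the face.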

And:

Teaching our phones what our faces look like will be just like teaching them our fingerprints, says Sy Choudhury, a senior director at Qualcomm responsible for security and machine-intelligence products. An image of your face is captured, relevant features are extracted and the phone stores them for comparison with your face when you unlock the phone.

As with fingerprint recognition, the facial images are securely stored only on the device itself, not in the cloud. History — from Apple’s battles with domestic law enforcement over unlocking iPhones to Amazon’s insistence that the Alexa doesn’t upload anything until it hears its wake word — suggests companies will use this privacy as a selling point.

My fingerprints don’t change, but moisture, sweat, and dirt can make them unreadable to Touch ID. I wonder if a haircut, a beard trim, or a shift in makeup will have a similar impact on facial recognition.

Fascinating read.

Latest iOS beta offers quick way to force passcode reentry

When you restart your iPhone, you are forced to reenter your passcode to unlock your phone. If your phone is off, this prevents anyone with access to your phone from breaking in.

But with the latest beta (iOS 11 beta 6), Apple added this shortcut:

https://twitter.com/alt_kia/status/898067522234097664

In a nutshell, if you press the power button five times quickly, you are sent to the emergency call screen (as in previous iOS versions). But in the latest beta, Touch ID will no longer unlock your phone afterward, forcing you to reenter the passcode to regain access.

This is a smart add. You can make this move silently, even with the phone in your pocket.

Motherboard: Unpatchable hack that turns Amazon Echo into spying device

Louise Matsakis, Motherboard:

The Amazon Echo can be turned into a spying tool by exploiting a physical security vulnerability, according to Mark Barnes, a researcher at cybersecurity firm MWR InfoSecurity. His research shows how it’s possible to hack the 2015 and 2016 models of the smart speaker to listen in on users without any indication that they’ve been compromised.

The issue is unfixable via a software update, meaning millions of Echos sold in 2015 and 2016 will likely have this vulnerability through the end of their use.

Barnes executed the attack by removing the bottom of the smart speaker and exposing 18 “debug” pads, which he used to boot directly into the firmware with an external SD card. Once the hack is complete, the rubber base can be reattached, leaving behind no evidence of tampering.

With the malware installed, Barnes could remotely monitor the Echo’s “always listening” microphone, which is constantly paying attention for a “wake word.” (The most popular of these is “Alexa.”) Barnes took advantage of the same audio file that the device creates to wait for those keywords.

The way I read it, this does require physical access, but once the hack is installed, there’s no obvious way to detect its presence, and an update won’t get rid of the malware.

Feh.

iPhone bugs are too valuable to report to Apple

Motherboard:

In August 2016, Apple’s head of security Ivan Krstic stole the show at one of the biggest security conferences in the world with an unexpected announcement.

“I wanna share some news with you,” Krstic said at the Black Hat conference, before announcing that Apple was finally launching a bug bounty program to reward friendly hackers who report bugs to the company.

The crowd erupted in enthusiastic applause. But almost a year later, the long-awaited program appears to be struggling to take off, with no public evidence that hackers have claimed any bug bounties.

And at the core of it all:

The iPhone’s security is so tight that it’s hard to find any flaws at all, which leads to sky-high prices for bugs on the grey market.

The question is, are the bugs valuable enough for Apple to raise their bounties to compete with the grey market?

A cyberattack the world isn’t ready for

Nicole Perlroth, New York Times:

The strike on IDT, a conglomerate with headquarters in a nondescript gray building here with views of the Manhattan skyline 15 miles away, was similar to WannaCry in one way: Hackers locked up IDT data and demanded a ransom to unlock it.

The WannaCry attack made huge headlines. The IDT attack did not.

But the ransom demand was just a smoke screen for a far more invasive attack that stole employee credentials. With those credentials in hand, hackers could have run free through the company’s computer network, taking confidential information or destroying machines.

This is a huge issue. The premise is, there are many of these attacks and they are almost all undiscovered, allowing the attacker to build up a treasure trove of employee credentials. The attack was allegedly carried out using cyberweapons stolen from the NSA.

Scans for the two hacking tools used against IDT indicate that the company is not alone. In fact, tens of thousands of computer systems all over the world have been “backdoored” by the same N.S.A. weapons. Mr. Ben-Oni and other security researchers worry that many of those other infected computers are connected to transportation networks, hospitals, water treatment plants and other utilities.

Lots more to this story, including one person’s quest to hunt down the perpetrator. Terrific read.

More evidence in favor of Apple’s commitment to not adding a back door to modern versions of iOS, as well as a firm argument for Apple’s approach to OS distribution. A major part of the problem is the flood of old, unpatched flavors of Windows and Android out in the wild.

Android vs. iOS: Are iPhones really safer?

The article leads off with this:

In a new Apple ad, a thief breaks into “your phone” but struggles to get into an iPhone. Here’s how it plays out in the real world.

I was all set to read about how the ad was wrong, that Android phones were actually just as safe. But:

There are several reasons why iPhones are more secure than the various phones running Android software, according to Mike Johnson, who runs the security technologies graduate program at the University of Minnesota.

Side note: That’s no small-time opinion. The University of Minnesota has one of the best computer science programs in the US.

Moving on:

The old rule about PC viruses seems to be holding true with mobile phones, as well. Android phones make up more than 80% of the global smartphone market, and hackers are more likely to succeed if they write programs for these devices, just because of sheer numbers.

The Windows vs Mac logic. Certainly true.

Plus, he says, the process of “patching” security holes is easier on iOS devices. Apple’s iOS operating system only runs on iPhones, while Alphabet’s Android software runs on phones made by numerous manufacturers. It’s more complicated to deliver patches, or bug fixes, that work across so many device makers and carriers. Android can release a patch, but it won’t necessarily be available on all devices right away.

“Fragmentation is the enemy of security,” Johnson says.

And:

Last year, Wired magazine reported that one security firm was offering up to $1.5 million for the most serious iOS exploits and up to $200,000 for an Android one, a sign that iOS vulnerabilities are rarer.

Add to that Apple’s underlying review process, designed to restrict the use of private APIs and to control techniques that could end-run Apple’s security processes. Not perfect, but worlds better than the more wild-west Android ecosystem.

Oh Samsung

Chaos Computer Club blog:

Biometric authentication systems – again – don’t deliver on their security promise: The iris recognition system of the new Samsung Galaxy S8 was successfully defeated by hackers of the Chaos Computer Club (CCC). A video demonstrates how the simple technique works.

The video is embedded in the main Loop post. This seems incredibly easy to replicate. Did Samsung even try to break their own iris recognition system? Sigh. Oh, Samsung.

Twitter changes privacy policy, adds new tools. Go check “Your Twitter data”

Twitter updated their privacy policy. Some highlights from the official Twitter blog post:

Today, we’re announcing a suite of industry-leading tools to give you more access to your information and greater, more granular control over how it’s used. We’ve also updated our Privacy Policy to reflect the improvements that we’ve made to Twitter.

And:

We’re expanding Your Twitter Data to give you the most transparent access to your Twitter information to date, including demographic and interest data, and advertisers that have included you in their tailored audiences on Twitter. Each category of data will be clearly marked, and you will be able to view or modify this data directly.

There’s lots more to read in the blog post, but the changes to Your Twitter Data are worth exploring.

Take a few minutes to look around at all the data Twitter has collected on you. There’s a lot of odd data in my set. For example, here’s the list of languages Twitter has for me:

French, German, Slovenian, Indonesian, Basque, Dutch, Turkish, Spanish, Estonian, Portuguese, Tagalog

That’s certainly not representative. Not sure where these come from. Languages of people I’ve interacted with?

Also interesting is the list of Interests further down the page. This was somewhat representative of my interests, but not exact. I wonder how this list was built.

Another interesting collection is the “tailored audiences” section. From mine:

You are currently part of 6624 audiences from 1573 advertisers.

Not sure what that means, but if I cared to tailor my advertising experience, there’s a button to “Request advertiser list”. I might dig into that at some point.

Take some time to look over Your Twitter Data. Good to know what choices you have.

Microsoft: Lessons from last week’s cyberattack

Microsoft Blog, on the WannaCrypt ransomware attack:

The WannaCrypt exploits used in the attack were drawn from the exploits stolen from the National Security Agency, or NSA, in the United States. That theft was publicly reported earlier this year. A month prior, on March 14, Microsoft had released a security update to patch this vulnerability and protect our customers. While this protected newer Windows systems and computers that had enabled Windows Update to apply this latest update, many computers remained unpatched globally. As a result, hospitals, businesses, governments, and computers at homes were affected.

And:

This attack demonstrates the degree to which cybersecurity has become a shared responsibility between tech companies and customers. The fact that so many computers remained vulnerable two months after the release of a patch illustrates this aspect. As cybercriminals become more sophisticated, there is simply no way for customers to protect themselves against threats unless they update their systems.

Amen. This has long been a bugaboo shared by Windows and Android and to a far lesser extent by macOS and iOS. Getting your users to update to the latest OS is a non-trivial problem.

More from Microsoft:

This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern in 2017. We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world. Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen. And this most recent attack represents a completely unintended but disconcerting link between the two most serious forms of cybersecurity threats in the world today – nation-state action and organized criminal action.

This should be a wake-up call. But just as the OS installed base is hopelessly fractured, the decision-making mechanics behind these exploits are similarly fractured, mainly due to the need for secrecy. What are the chances the NSA, CIA, and Microsoft will collaborate on a solution?

[H/T John Kordyback]

Check who has access to your Google account

Yesterday, we posted about a widely spread, relatively sophisticated Google Docs phishing attack. Google has taken steps to disable the accounts behind the attack, but that is a bit of a whack-a-mole problem. Attacks like this are a part of life.

One thing you can do is periodically check out what apps and sites have access to your Google account by clicking on this link:

https://myaccount.google.com/security?pli=1#connectedapps

See anything there you don’t recognize? Click the Manage Apps link and revoke that sucker.

New AI copies anyone’s voice

Lyrebird:

Lyrebird will offer an API to copy the voice of anyone. It will need as little as one minute of audio recording of a speaker to compute a unique key defining her/his voice. This key will then allow it to generate anything in the corresponding voice. The API will be robust enough to learn from noisy recordings.

This is fascinating and scary. The technology is far from perfect, but I can definitely see them getting to “close enough to fool you” pretty quickly.

Oh Samsung

Motherboard:

Last month, the CIA got a lot of attention when WikiLeaks published internal documents purporting to show how the spy agency can monitor people through their Samsung smart TVs. There was a caveat to the hack, however—the hijack involved older models of Samsung TVs and required the CIA have physical access to a TV to install the malware via a USB stick.

But the window to this sort of hijacking is far wider than originally thought because a researcher in Israel has uncovered 40 unknown vulnerabilities, or zero-days, that would allow someone to remotely hack millions of newer Samsung smart TVs, smart watches, and mobile phones already on the market, as well as ones slated for future release, without needing physical access to them. The security holes are in an open-source operating system called Tizen that Samsung has been rolling out in its devices over the last few years.

Got any Samsung devices in your house? Might want to read the details here.

On the House vote to wipe away the FCC’s landmark Internet privacy protections

OK, so this is bad. But as always, read up on this and on what you can do to protect yourself. Here are a few pieces to start. Readers, please do add in your own suggestions (both habit and reading) in the comments, or send to me via Twitter.

The Washington Post:

In a party-line vote, House Republicans freed Internet service providers such as Verizon, AT&T and Comcast of protections approved just last year that had sought to limit what companies could do with information such as customer browsing habits, app usage history, location data and Social Security numbers. The rules also had required providers to strengthen safeguards for customer data against hackers and thieves.

From the left:

“Today’s vote means that Americans will never be safe online from having their most personal details stealthily scrutinized and sold to the highest bidder,” said Jeffrey Chester, executive director of the Center for Digital Democracy.

And from the right:

”[Consumer privacy] will be enhanced by removing the uncertainty and confusion these rules will create,” said Rep. Marsha Blackburn (R-Tenn.), who chairs the House subcommittee that oversees the FCC.

Privacy will be enhanced? Give me a break.

The New York Times:

The bill not only gives cable companies and wireless providers free rein to do what they like with your browsing history, shopping habits, your location and other information gleaned from your online activity, but it would also prevent the Federal Communications Commission from ever again establishing similar consumer privacy protections.

There’s so much more to this. Read up on what’s just happened, then consider what it means to you, consider changing some online habits. With that in mind, a bit more reading:

  • The Tor Project: Read about anonymity and how Tor works, and consider downloading Tor or a similar browser. At the very least, this will put one level of indirection between your internet travels and your IP address.

  • How to Go Invisible Online by Kevin Mitnick: This is a very understandable, detailed, practical guide. Though the focus is on email, it will help you understand how tracking works and how to insert encryption into the process.

  • VPNs are for most people, including you: What is a VPN? Why use one? Good explanations here.

I’m far from an expert on this stuff, so please do weigh in if there are better explanations, better resources to consider.

Laptop ban on planes came after plot to put explosives in iPad

The Guardian:

The US-UK ban on selected electronic devices from the passenger cabins of flights from some countries in north Africa and the Middle East was partly prompted by a previously undisclosed plot involving explosives hidden in a fake iPad, according to a security source.

When the threats were shoe bombs and underwear bombs, they did not ban shoes and underwear. This seems arbitrary at best.