Showing posts with label Privacy. Show all posts

February 18, 2022

This is why you should still be ad-blocking online

Having just pointed out how different Google's advertising-fuelled business is from Facebook's surveillance-fuelled shop, I suppose it's only fair to point out that being distinctly different from, and less evil than, Facebook, doesn't automatically make the crew at Google into paragons of virtue.

For example, take this report from Huffpost:

Dammit, Google, must you?

A while back, I was watching The WAN Show, a weekly tech-focused podcast on Linus Tech Tips, when Linus, a YouTuber who makes a significant chunk of his company's revenue from Google Adsense, opined that ad-blocking was tantamount to theft; if not outright piracy, it was at the very least privateering.

Linus was wrong. There's a false equivalency at work in his argument, in which ads served up by Google are essentially the same thing as the ads that you'd see on network television: a minor nuisance which is borne by the audience in exchange for otherwise-free programming. The problem is that online ads aren't at all the same as the TV ads of the long ago time; online ads are lousy with scams and grift, when they aren't outright installing malware on your system as your browser auto-executes them.

Do you remember cryptojacking? Because I do.

And then there's the creepy surveillance aspect of things; even Google, whose business model is still viable if the link between advertising and surveillance is broken, isn't yet a surveillance-free zone. There's a reason why the U.S. Congress is marking up legislation right now which would mandate a stop to the practice; a looming legal problem that Google is trying to get ahead of by making cross-app tracking more difficult, much like Apple has already done.

And even if online ads weren't dangerous to your security, invasive to your privacy, and occasionally outright-illegal scams which Google not only fails to detect, but profits from, online ads are intrusive to the online experience, to a truly obnoxious degree.

Do you remember when a U.S. Congress, who couldn't agree at the time to keep their own fucking lights on, came together to mandate a decibel cap for television ads? Because I do.

Do I like LTT's content? Yes, I do. Is their content so good that I'd be willing to give up my privacy, my security, my emotional well-being, and subject any number of desperate people to an endless (and apparently unstoppable) fire-hose of lies, scams, phishing attacks, misinformation, radicalization, and addiction? Yes, addiction; our current epidemic of opiate addicts is a direct consequence of Oxycontin advertisements which were pumped into people's homes, depicting an opiate painkiller as addiction-free, side-effect-free, and totally safe.

BTW, Purdue Pharma, who were responsible for that ad campaign? They're desperately trying to settle the resulting class-action wrongful-death lawsuit... so far, without success.

Online ads aren't a relatively-innocuous thing which we endure to get access to free content. They're often dangerous, frequently outright evil, and demand far too much in exchange for showing us a few minutes of a movie trailer on YouTube... which, I'll remind you, is already a fucking advertisement, and shouldn't need to also be supported by selling additional pre- and end-roll ads... or mid-roll ads, for that matter.

So, no, Linus, ad-blocking isn't piracy, or privateering, or theft of any description. It's self-defence. If Google want me to stop blocking the ads they're hosting and serving, then that ad stream needs to be independently certified as 100% clean, by people whose word we can trust on the subject. In other words, not by Google themselves, who have a vested material interest in shading the truth on this subject.

June 26, 2021

"Android apps, forced Microsoft accounts, telemetry, oh my!"

Given how curmudgeonly my immediate reaction was to this week's Windows 11 announcement, I was beginning to wonder if I'm just being far too cynical about all of this. Nobody else was making that much noise about the six-year-old telemetry and data collection that was bundled into Windows 10 (and later back-ported to Windows 7). The biggest substantive criticism of W11 seemed to revolve around its hardware requirements (especially TPM 2.0); the next-biggest criticism was about the removal of the ability to reposition the taskbar from the bottom of the screen to one of the sides.

Apparently, though, other people just needed a little time to catch up; for example, Jez Corden, at Windows Central:

In our heavily connected, heavily surveilled world, anxiety about government and big tech overreach is at a fever pitch. And Microsoft has increasingly fallen on the wrong side of this argument.

At the Windows 11 event yesterday, Microsoft had an opportunity to meet some of these concerns, founded or not. Yet, it chose not to. [...]

In Microsoft's Windows 11 blog post, the word "privacy" doesn't appear once in the copy, which doesn't exactly bode well for its messaging. Windows 11 will force users to use a Microsoft Account in its free Home Edition, which already speaks of a business model where your data is the monetization engine. Even if you're using the world's best VPN, it's not exactly going to protect your data from going directly to Microsoft if you're signed in. [...]

Microsoft is also enlisting another doubted tech giant, Amazon, to bring Android apps to Windows 11. Amazon is under heavy scrutiny already for the way it treats its workers among other things, but combining this with Android adds another layer of concern. Android is oft-painted as an insecure, privacy-apathetic platform. True or not, the prospect of an Amazon-fronted Android subsystem in Windows 11 compounds data fears.

February 22, 2019

The bare minimum, done under duress
Facebook's anemic new pro-privacy measures don't impress me much

In a week which started with the UK Parliament condemning Facebook as "digital gangsters," it appears that Zuck & Co. have decided that they have to do something to turn back the tide of negative PR, and have chosen to make a couple of changes that, frankly, should have been made months ago.

First, as reported by TechCrunch, they're finally going to shut down their spyware-disguised-as-VPN "service," Onavo:
Facebook has also ceased to recruit new users for the Facebook Research app that still runs on Android but was forced off of iOS by Apple after we reported on how it violated Apple’s Enterprise Certificate program for employee-only apps. Existing Facebook Research app studies will continue to run, though.
With the suspicions about tech giants and looming regulation leading to more intense scrutiny of privacy practices, Facebook has decided that giving users a utility like a VPN in exchange for quietly examining their app usage and mobile browsing data isn’t a wise strategy. Instead, it will focus on paid programs where users explicitly understand what privacy they’re giving up for direct financial compensation.
Second, as reported by TechZim, Facebook are also making changes to their app which will allow users to opt out of having Facebook collect their location data even when the app is not in use:
To address user concerns about the extent to which Facebook’s Android app can access location data, Facebook has now updated its location controls. The new privacy settings will enable Android users to opt out of location tracking when they aren’t actively using the app and have greater control over how much of their location data is saved by the social media giant. With a new option in place, Android users will now be able to decide whether or not they want Facebook to be aware of their location at all times.
Again, while both of these are good changes, they're also obvious changes which should have been implemented months ago. If they'd announced these changes immediately after these scandals broke, I'd have been impressed with the speed of their response, even if it took them a little while to actually patch the changes into their app; instead, I can only cynically assume that they've been keeping these in their back pocket, ready to deploy in a week where Facebook desperately needed some good PR.

January 31, 2019

Sheryl Sandberg's here to make it better worse

It looks like Facebook's creepy teen-data-collection app is not going away, mainly because Facebook can't help themselves. Sheryl Sandberg, who I once praised for having better communication skills than Mark Zuckerberg, only to be proven 100% wrong about that during the whole Definers Media business, has once again stepped forward to try to direct the narrative, and her defense of Facebook appears to be almost entirely composed of lies.

As reported by Gizmodo:
Chief operating officer Sheryl Sandberg’s defense? The teens “consented.”
“So I want to be clear what this is,” Sandberg told CNBC’s Julia Boorstin on Wednesday. “This is a Facebook Research app. It’s very clear to the people who participated. It’s completely opt-in. There is a rigorous consent flow and people are compensated. It’s a market research program.”
“Now, that said, we know we have work to do to make sure people’s data is protected,” Sandberg added, repeating a thoroughly unconvincing line that has been rolled out so many times amid Facebook’s constant scandals that it has barreled into self-satire territory. “It’s your information. You put it on Facebook, you need to know what is happening. In this case the people who chose to participate in this program did.”
“But we definitely have work to do and we’ve done it,” Sandberg said, just to hammer home that line.
Here's the problem, though: the teens that Facebook bribed into accepting this app on their phones almost certainly didn't know how comprehensive the data collection would be. They didn't know that Facebook was behind the app, either, since Facebook took pains to hide their involvement:
Facebook had users sideload the app and avoided submitting it through TestFlight, Apple’s beta testing system, which requires Apple review.
And Facebook didn't do anything to protect the privacy of these teens; Apple had already blocked the app before Facebook made a show of "voluntarily" taking it down.

U.S. lawmakers, naturally, are furious, as reported by The Verge:
Tuesday night, a TechCrunch investigation revealed that Facebook had been secretly paying teenagers to install a VPN that let the company see nearly everything they did on their phones. Today, lawmakers on both sides of the aisle are lashing out at the tech giant, raising new questions about how the company might fare in future privacy legislation.
“Wiretapping teens is not research, and it should never be permissible.” Sen. Richard Blumenthal (D-CT) said in a statement. “Instead of learning its lesson when it was caught spying on consumers using the supposedly ‘private’ Onavo VPN app, Facebook rebranded the intrusive app and circumvented Apple’s attempts to protect iPhone users.”
Blumenthal said that he would be sending letters to Apple and Google to probe them on their involvement by hosting the apps.
Sen. Josh Hawley (R-MO) tweeted, “Wait a minute. Facebook PAID teenagers to install a surveillance device on their phones without telling them it gave Facebook power to spy on them? Some kids as young as 13. Are you serious?” This is Hawley’s first year serving in the Senate, and he has already positioned himself as a strong conservative voice on tech. At his first Judiciary hearing in January, Hawley lambasted President Trump’s attorney general nominee with questions regarding his stance on regulating Silicon Valley companies.
Yes, folks, that's bipartisan agreement that something needs to be done about Facebook, in a country where it took the two major political parties over a month to agree that government was something that needed to exist, and be paid for.

It's not all doom and gloom for Facebook, though. Advertisers have apparently decided that they don't care how terrible Facebook's image is, leading to a 61% jump in earnings despite the firm's bad press, and Facebook managed to gain a few users over the quarter, too. The result? A surge in their share price, of course, meaning that the company's new, more combative media strategy is likely to be the tone we hear from them going forwards. And why not? It's working for them, at least in the near term. And if there's one thing on which you can rely, it's that bad corporate behaviour that gets rewarded with increased share prices and executive bonuses is guaranteed to continue.

All in all, it looks like this year in Facebook is going to be an even bumpier ride than last year, with #deleteFacebook having stalled, Facebook's soul-less advertiser clients having returned, and Facebook's increasingly defiant tone in the face of a continued litany of scandal having finally got the attention of U.S. lawmakers, who are already proposing legislation to put Facebook back in its place.

Buckle up, sunshine. It gets even rougher from here.

January 30, 2019

This week in Facebook

Facebook's headlines this week are all about the children, and how Zuckerberg & co. are knowingly exploiting them.

First up, this piece from TechCrunch:
Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.
Pro tip: If you're cloaking your involvement in a shady project because you know it's too shady to be publicly associated with... you should probably be rethinking the whole enterprise. Just saying.

Facebook's "Project Atlas" shenanigans should sound familiar: it wasn't that long ago that Facebook's Onavo app was removed from the iOS app store for violating Apple's terms of service. And the new app is pretty comprehensive, potentially allowing the collection of "photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed." And, while Facebook apparently pulled an about-face at 11:20pm PT (when TC's piece was updated), announcing that FB was removing the app from Apple phones, they apparently have no plans yet to do the same on Android phones.

Also, it should be noted that most jurisdictions don't allow 13-year-olds to sign legally binding contracts, which means that Facebook's use of just-barely-teens for this effort may be not-quite-legal. Which brings us to the second piece of Facebook's sketchy and dodgy teen-involving bullshit, as reported by arstechnica:
Two Democratic senators have asked Facebook CEO Mark Zuckerberg to explain why the social network apparently "manipulated children into spending their parents' money without permission" while playing games on Facebook.
"A new report from the Center for Investigative Reporting shows that your company had a policy of willful blindness toward credit card charges by children—internally referred to as 'friendly fraud'—in order to boost revenue at the expense of parents," US Sens. Edward Markey (D-Mass.) and Richard Blumenthal (D-Conn.) wrote in a letter to Zuckerberg today. "Notably, Facebook appears to have rejected a plan that would have effectively mitigated this risk and instead doubled down on maximizing revenue."
Because parents didn't know that children would be able to make purchases without additional verification, "many young users incurred several thousands of dollars in charges while playing games like Angry Birds, Petville, Wild Ones, and Barn Buddy," the senators' letter said.
What, did you think that Facebook had dodged responsibility for this one? Well, think again, Apple fan, because the Democratically-controlled U.S. House of Representatives aren't about to let this go, and their colleagues in the U.S. Senate look to also be keen to get in on the regulating-of-Facebook action. I told you that Facebook's troubles were just getting started.

And so, with two different Facebook-exploits-teens stories in the headlines, we can now head into Wednesday... and the rest of the week. That's right, folks, Facebook's week isn't even over yet. Winning!

January 26, 2019

This week in Facebook

After starting the new year with a few largely scandal-free weeks, Mark Zuckerberg apparently decided that he was bored, or something, because the Facebook shit resumed flying fast and thick, and Gizmodo had pretty good coverage of it all.

First up: Mark Zuckerberg's thirsty op-ed, in which he opined that people don't trust Facebook only because we don't understand them:
On Thursday, the Wall Street Journal published a 1,000-word screed by Mark Zuckerberg about the company’s data collecting practices titled “The Facts About Facebook.” In it, Zuckerberg makes noise about the company being about “people,” and insists—as he has been for the majority of his company’s 15-year history—that we should trust it. Zuckerberg appears to think the primary reason users have little faith in the company’s ability to responsibly or ethically handle their data is because of its targeted advertising practices, about which he writes: “This model can feel opaque, and we’re all distrustful of systems we don’t understand.” 
I guess the apology tour is over; Zuck is back to his normal, condescending self.

Gizmodo's Catie Keck goes on to list a few of the reasons why people who understand Facebook just fine also distrust Zuck & Co., starting with FB's lack of transparency, continuing on through Cambridge Analytica, and ending with their scraping and then sharing data about their users (and also about people who've never used Facebook themselves) with advertisers, and other low-lights:
In 2018, we learned that Facebook was data-sharing with other companies like Microsoft’s Bing, Spotify, Netflix, and others in exchange for more information about its users. There were also the revelations that Cambridge Analytica data-scraping was worse than we thought; that Facebook was sharing shadow contact information with advertisers; and that turning off Facebook location-sharing doesn’t stop it from tracking you. That’s obviously totally aside from the George Soros conspiracy theory fiasco; its mishandling of Myanmar genocide; and its standing as a hotbed for rampant misinformation.
As with his year-end Facebook post—which I’ll note here also largely ignored the tsunami of public relations problems the company faced last year—Zuckerberg appears to remain bafflingly optimistic about the function of his company. To be clear, this is the same founder of Facebook who once called users of his product “dumb fucks” for trusting him with their sensitive information.
Lots of links in the original article, if you missed some of those earlier "hits" when they happened.

So, not an auspicious beginning. Zuck wasn't done yet, though; not by a long shot.

January 24, 2019

Remember that Firefox is an option

I consume a fair bit of basically-free online content, and don't have anything against "paying" the creators of that content by having a little advertising accompany it, as long as those ads are not intrusive, or disruptive, or loaded with crypto-jacking (or other) malware. I only went nuclear on online ads because advertisers couldn't get their shit together.

So, when Google announced that their Chrome browser's selective ad-blocking functionality would be rolling out worldwide, I was cautiously optimistic. I was even considering switching back to Chrome from Firefox, just to see what sort of a web browsing experience I could have on Google's browser, now that I didn't have to be running multiple extensions in order to block the bad guys.

And then, Google had to go and break everybody else's ad-blockers. Because of course they did; Google sells advertising, and obviously they want you to stop blocking as many ads as possible. Which sucks; they're basically taking away consumer choice, just to line their own pockets. Even worse, though, Google aren't just breaking ad-blocking extensions; they're breaking a whole bunch of other stuff in the process.

As reported by ZDNet:
A planned update to one of the Google Chrome extensions APIs would kill much more than a few ad blockers, ZDNet has learned, including browser extensions for antivirus products, parental control enforcement, and various privacy-enhancing services.
[...]
The biggest of these categories would be extensions developed by antivirus makers and meant to prevent users from accessing malicious sites and for detecting malware before it's being downloaded.
Yikes.
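For context, the API at the heart of this is the blocking form of Chrome's webRequest, which lets an extension's own code inspect (and veto) every network request; the proposed replacement, declarativeNetRequest, only lets an extension hand Chrome a static rule list up front, which is why ad-blockers, antivirus extensions, and privacy tools all get caught in the blast radius. Here's a rough sketch, in plain JavaScript, of the difference; the blocklist hosts and rule are made up for illustration, and this is just the shape of the logic, not real extension code:

```javascript
// Hypothetical blocklist, for illustration only.
const blockedHosts = ["ads.example.com", "tracker.example.net"];

// Manifest V2 style: the extension registers a callback (in a real
// extension, via chrome.webRequest.onBeforeRequest with the "blocking"
// permission) that sees every request and can run arbitrary logic --
// heuristics, malware scanning, parental filters, whatever.
function onBeforeRequest(url) {
  const host = new URL(url).hostname;
  const blocked = blockedHosts.some(h => host === h || host.endsWith("." + h));
  return { cancel: blocked }; // same shape as webRequest's BlockingResponse
}

// Manifest V3 style: the extension can only pre-register static rules
// (via chrome.declarativeNetRequest); Chrome applies them itself, and
// the extension's code never sees the requests at all.
const declarativeRules = [
  { id: 1, action: { type: "block" }, condition: { urlFilter: "||ads.example.com^" } },
];

console.log(onBeforeRequest("https://ads.example.com/banner.js"));
console.log(onBeforeRequest("https://example.org/page"));
```

The declarative model is fine for a fixed blocklist, but anything that needs to compute a verdict per request (like an antivirus extension scanning a download) simply can't be expressed as a static rule, which is exactly the problem ZDNet is describing.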

January 13, 2019

So.... I guess that was CES?

Does anyone else find it weird that 2019's big Consumer Electronics Show wasted the entire week without showcasing anything for actual consumers?

I mean, sure, we got LG's rollup OLED TV, which looks sexy but costs US$8000, and which will need to be replaced in two years' time because of OLED's severe screen burn-in issues. Who can afford to spend $8K every two years on a roll-up gimmick TV? Who is this for?

We also got a plethora of 8K TVs, at a time when even 4K TVs aren't really a thing yet. I mean, it's great that the likes of LG are making 4K sets that are comparable in price to 1080p sets; if you're needing to replace your TV, and don't need a refresh rate higher than 60 Hz for any reason, then you can certainly go 4K because it won't cost extra so why not? But you still don't need a 4K TV for which there's almost no content available, and you definitely don't need an expensive 8K set for which there's even less content on the menu. 8K is nothing but costly, boasting high price points while delivering zero value to the consumer... which was basically the prevailing trend of CES2019.

Oh, yes, and then there's 5G... which, again, boasts a premium price while being completely useless to consumers since there are no 5G networks. And, no, AT&T's 5G E nonsense is not a 5G network, and does not count. Which brings us to CES2019's other prevailing trend, which was straight-up lies told to consumers about expensive products which are being marketed at them, without being in any way designed for them.

Worse yet, the one big discussion about technology that consumers actually care about was never mentioned by any of the big exhibitors.

December 20, 2018

Well beyond the realm of incompetence...

In case you were wondering... Facebook's day of bad news didn't only revolve around the consequences that they're now facing for their reckless disregard of their users' privacy. It also included new insight into that disregard for their users' privacy. As reported by The Guardian:
Facebook targets users with location-based adverts even if they block the company from accessing GPS on their phones, turn off location history in the app, hide their work location on their profile and never use the company’s “check in” feature, according to an investigation published this week.
There is no combination of settings that users can enable to prevent their location data from being used by advertisers to target them, according to the privacy researcher Aleksandra Korolova. “Taken together,” Korolova says, “Facebook creates an illusion of control rather than giving actual control over location-related ad targeting, which can lead to real harm.”
Facebook users can control to an extent how much information they give the company about their location. [...] But while users can decide to give more information to Facebook, Korolova revealed they cannot decide to stop the social network knowing where they are altogether nor can they stop it selling the ability to advertise based on that knowledge.
They say that you should hesitate to ascribe to malice that which can adequately be explained by incompetence, but there is no incompetence surrounding this latest revelation: Facebook themselves straight-up admit that they use "IP and other information such as check-ins and current city from your profile" to build these shadow profiles of users' location data, even after those users refused to grant Facebook permission to build a profile of their location data. This is clearly malicious. I've said it before, and I'll say it again: Google does not do this. Microsoft does not do this; Apple and Amazon do not do this. There is no "all sides" argument to be made by WIRED magazine, or any of Facebook's other Definers Media-fueled defenders.

Only Facebook is this shady. Facebook is the problem, here.

To say that this most likely contravenes multiple provisions of the GDPR would be something of an understatement; whether U.S. laws currently prohibit this sort of "shadow profiling" is anyone's guess, although I'm sure the new U.S. Congress will be looking into that question, among others. If you're waiting for government regulators to get a handle on the full breadth and depth of Facebook's scumminess, though... you should probably stop waiting, and just delete Facebook, already.

December 19, 2018

Fucking Facebook's terrible year isn't over yet

With two more weeks to go, Facebook's horribad year is still getting worse, as reported by Gizmodo:
According to a bombshell report in the New York Times on Tuesday, Facebook’s behind-the-scenes efforts to give select corporate partners access to user data have been far more expansive than previously reported, including allowing certain third-party companies access to user contact lists and access to users’ private messages.
Yes, that’s right, Facebook gave Netflix and Spotify the ability to read users’ messages, and other tech giants including Microsoft, Amazon, and Sony access to data on users’ friends, according to hundreds of internal documents obtained by the paper and interviews with dozens of “former employees of Facebook and its corporate partners.” 
Not only did Facebook allow 150 companies, including Microsoft, Netflix, Spotify, Amazon, and Yahoo, access to users’ private messages, they also allowed them unprecedented access to users’ personal data. According to BuzzFeed News:
Facebook allowed Microsoft’s search engine Bing to see the names of nearly all users’ friends without their consent, and allowed Spotify, Netflix, and the Royal Bank of Canada to read, write, and delete users’ private messages, and see participants on a thread.
Let that sink in for a second: these companies could not only see your messages, they could delete any of them which they didn't like, allowing them to censor Facebook users without their consent, and possibly even without them noticing. It's the nuclear option of damage-control PR. And that's not all they could do.
It also allowed Amazon to get users’ names and contact information through their friends, let Apple access users' Facebook contacts and calendars even if users had disabled data sharing, and let Yahoo view streams of friends’ posts “as recently as this summer,” despite publicly claiming it had stopped sharing such information a year ago, the report said. Collectively, applications made by these technology companies sought the data of hundreds of millions of people a month.
So, yes, in case you were wondering, Facebook's regard for your personal privacy, safety, and fundamental right to self-expression really is utterly non-existent, and the situation is far worse than we knew... with, doubtless, even worse revelations to come. Because this is just what we're learning in spite of Facebook's best efforts to keep all of this under wraps; what we'll learn next year, when the Democratic Party takes control of the U.S. Congress and its various investigative and oversight committees, is anyone's guess, but there's almost certainly more to learn here.

December 05, 2018

Fucking Facebook...

As someone who's never installed a Facebook app, I'd totally missed this when it first surfaced back in March, and I don't recall seeing it make headlines either. It should have. As reported by The Verge:
Yes, that's Facebook, bypassing Android's privacy controls to access data that they knew damn well they had no right to, without bothering to ask permission from anybody at all... because GREED.

October 09, 2018

Because misery loves company, I guess?

After a summer of shit for Facebook, and a week from hell for Microsoft, I guess Google must have been feeling left out, or something, because they've now shit the bed, too.

As reported by Futurism:
Remember Google+?
Me neither. But while we were blissfully ignorant of its continuing existence something predictable (and quite commonplace in 2018) happened: private user data leaked.
Here’s what happened. There was a bug that allowed hundreds of third party applications to access user’s personal data, according to a Google blog post. We’re talking user names, employers, job titles, gender, birth place and relationship status of at least half a million Google+ users, according to the Wall Street Journal.
As the Wall Street Journal points out, the bug has been around since 2015. Google says it only discovered and “immediately patched” it in March of this year — the same month Facebook’s Cambridge Analytica scandal started to blow up. In the same blog post, Google announced it will shut down Google+ entirely.
So why are we only hearing about this now, seven months later? Don’t Google users have a right to know if their personal data was vulnerable to hackers over the last three years? Internal memos obtained by the Wall Street Journal suggest Google was trying to avoid triggering “immediate regulatory interest.” In other words: avoid fines and penalties.
Where do we even start?
The fact that the breach happened at all is already bad, but failing to disclose that breach to affected users, specifically to dodge regulatory action? That's beyond sketchy. Do you remember when Google's code of conduct started with "don't be evil"? Yeah... apparently, neither do they.

October 05, 2018

In the midst of another huge data breach, Facebook adopts Comcast-style loss prevention strategy

I am surprised only that people are surprised. From Gizmodo:
Facebook is currently dealing with the fallout of a massive attack that compromised site security and allowed hackers to seize the access tokens of roughly 50 million accounts, potentially giving them full control of both the accounts and linked apps. It is still sorting out what user data might have been stolen. Amid all this, Facebook is also extending its grip on how long it can keep account deletion requests in hiatus from two weeks to a month, the Verge reported on Wednesday.
Here’s what that means. When a user tries to delete their Facebook, the site holds on to all of their data for a period of time in case they decide they want to come back. That used to be 14 days, and now it is conveniently a month, right around the same time users might be getting antsy that hackers were able to get past the site’s core security measures.
[...]
It’s not clear when the decision was made, or whether it predates September 25th, when the company says it became aware of the hack. (Gizmodo has reached out for comment, and we’ll update this post if we hear back.) Even if the updated data retention policies have nothing to do with the security incident, that still doubles the amount of time Facebook is able to hold user data after they decide they want out—essentially making it harder for users to manage their own privacy and security so that the company can try to retain them at a time growth is stalling.
Le sigh.

The depths of Facebook's asshole-ery really should not be at all surprising, at this point. Holding your delete request in a "hiatus" state at all is already bullshit; I can understand that it might take some time to effect the deletion, and that there might be other deletion requests in the queue ahead of a newly submitted one, but that was never why it took two weeks for Facebook to complete a deletion, something which they have now confirmed. This is a loss prevention strategy, plain and simple; it is Facebook simply not doing what you've clearly told them to do, simply because there's benefit to them in stalling as long as possible.

July 02, 2018

Facebook’s disclosures under scrutiny
by the FBI, SEC, FTC, and DOJ

When the extent of the Cambridge Analytica scandal was first breaking back in March, I wrote this:
There are people at Facebook who signed off on a business plan that involved collecting legally protected information about people with neither their knowledge nor their consent, and selling that data to third parties; people who then decided not to notify users when it was crystal clear that the whole shady business had gone very, very wrong. Those people will not just be facing lawsuits; those people will be facing jail time... in addition to the lawsuits.
Some readers (all two of you 😃) may have thought that I was being somewhat hyperbolic with that statement. And, in fairness, apart from a few relatively uneventful appearances before lawmakers in the U.S. and EU, Facebook was looking like they might have escaped the worst of the possible outcomes that they could have been facing. But appearances can deceive, and Facebook themselves are now confirming that they've been under investigation, by multiple U.S. federal agencies, since at least May.

As reported by the Washington Post:
The questioning from federal investigators centers on what Facebook knew three years ago and why the company didn’t reveal it at the time to its users or investors, as well as any discrepancies in more recent accounts, among other issues, according to these people. The Capitol Hill testimony of Facebook officials, including Chief Executive Mark Zuckerberg, also is being scrutinized as part of the probe, said people familiar with the federal inquiries.
Facebook confirmed that it had received questions from the federal agencies and said it was sharing information and cooperating in other ways. “We are cooperating with officials in the US, UK and beyond," said Facebook spokesman Matt Steinfeld.
This puts yesterday's revelations (from last Friday's midnight document dump) in a different light. Who wants to bet that Facebook's 747-page infodump will be mostly information that investigators already know? Who else thinks that they were trying to get out ahead of the narrative on investigative heat that's about to get way hotter, in addition to burying as many juicy details as possible in the Friday night news graveyard?

Who else thinks that they might not get away with either of those things, this time around?

July 01, 2018

About-Face(book)

I'm just going to jump straight to the lede, from CNBC:
Facebook has admitted that it gave dozens of companies access to its users’ data after saying it had restricted access to such data back in 2015, the latest wrinkle in a firestorm over how the social network manages user information.
In news first reported by The Wall Street Journal, Facebook handed a 747-page document to U.S. lawmakers released late Friday. In that cache of information, Facebook said it granted 61 companies like AOL, Nike, UPS and dating app Hinge a "one-time" six-month extension to comply with its policy changes on user data. In addition, there are at least five other firms that may have accessed limited data, due to access they were granted as part of a Facebook experiment, the company added.
In 2015, Facebook said it had cut off developer access to its users’ data and their friends.
What's that you say? Facebook said one thing about its treatment of users' data in 2015, only to be forced to admit in 2018 that their previous claims were actually bullshit? Quelle surprise!

Unlike their 450-page written response to Congressional questioning, Facebook dropped their latest coma-inducing door-stopper late on Friday, which is what you do when you're trying to bury a story; the idea is that most newsrooms have closed for the weekend, leaving only cable news channels, which mostly rely on print media outlets to do their actual reporting anyway. The Wall Street Journal, however, apparently had other ideas, and posted an extensive write-up:
Facebook provided the document to the Energy and Commerce Committee of the U.S. House of Representatives in response to hundreds of questions from the committee, which quizzed Facebook Chief Executive Mark Zuckerberg during testimony in April. The committee said on its website that it received the responses shortly before midnight on Friday; the deadline for the responses was the close of business Friday.
It is Facebook’s second attempt at answering Congress’s queries [...] lawmakers asked Mr. Zuckerberg whether Facebook was in violation of a settlement the company made in 2012 with the Federal Trade Commission, under which the company is required to give its users clear and prominent notice and obtain their express consent before sharing their information beyond their privacy settings. Facebook said in the document that it has not violated the FTC act.
Facebook indicated it has struggled to fully reconstruct what happened to its users’ information. “It is possible we have not been able to identify some extensions,” Facebook said about companies that had access to users’ friends’ information past the 2015 cutoff.
Q:  Did you, Facebook, violate your 2012 settlement with the FTC?

A: No, because we changed our policies three years after that, which is clearly good enough to keep us out of trouble.

Q: Are you sure?

A: OK, it's possible that we issued extensions to some of our favourite corporate customers after 2015. So, maybe. But probably not. [Three weeks later.] OK, yes. Yes, we did. Can we please bury this in a Friday night news dump?

[And... scene!]

Seriously, somebody needs to hold Facebook accountable for this shit.

June 29, 2018

Not nearly enough:
California's new privacy law looks like too little, too late

Let's start with the lede, from c|net:
In two votes Thursday, the state's Senate and Assembly both passed the bill unanimously, and Gov. Jerry Brown signed it within a matter of hours. The rush to pass the bill comes courtesy of an even stricter voter initiative that would've appeared on California ballots this November if lawmakers hadn't gotten the bill through by 5 p.m. PT Thursday. Thursday is the state's deadline for withdrawing a ballot measure for the November election.
Privacy advocates cheered the new law. Marc Rotenberg, executive director of the Electronic Privacy Information Center, said the law means privacy could become an issue that impacts the upcoming midterm elections.
"This is a milestone moment for privacy law in the United States," Rotenberg said in a statement. "The California Privacy Act sends a powerful message that people care about privacy and that lawmakers will act."
Sounds great, doesn't it? Two days ago, I'd have been cheering, too. But that was then, and this is now, and Exactis happened in between.

The Achilles' heel of California's new law is that people need to know when companies are collecting their data in order to be able to tell them to stop. That might deal with the likes of Facebook, but most of the people affected by Exactis' data breach had no idea that the company even existed, and no way to know that Exactis was building shadow profiles of as many as 60% of people living in the U.S. Yes, the law sends a message; unfortunately, that message is that U.S. lawmakers still don't fully comprehend the scope and severity of the problem.

Lawmakers, both in California and elsewhere, should make the practice of shadow profiling into a criminal offence, with significant penalties for those found to be building such profiles anyway, and additional penalties for failing to secure databases of peoples' personal and financial information. Stop putting the onus on consumers to police this, and make it the responsibility of the companies in question to stop invading peoples' privacy and weakening the security of their data.

June 28, 2018

Facebook's "shadow profiles" are not unique, and that's a huge problem

Facebook's practice of building shadow profiles, collecting enormous amounts of personal data about people who don't have, or never had, Facebook accounts, with neither their knowledge nor their consent, is hugely problematic. It's not just the ethical and privacy concerns of an enormous corporation building a detailed profile which can be used to target you for all manner of subtle (or less-than-subtle) influencing; there's also a security concern here, because the sort of information that accumulates in these shadow profiles can be used to facilitate harassment, intimidation, spear phishing attacks, identity theft, doxxing, swatting, and worse. Lives may literally depend on the ability of the profilers to keep their shadow profile databases secure.

Enter Exactis, a marketing firm that you've probably never heard of, but who you're going to learn a lot more about in the coming weeks. From WIRED:
"It seems like this is a database with pretty much every US citizen in it," says Troia, who is the founder of his own New York-based security company, Night Lion Security. Troia notes that almost every person he's searched for in the database, he's found. And when WIRED asked him to find records for a list of 10 specific people in the database, he very quickly found six of them. "I don’t know where the data is coming from, but it’s one of the most comprehensive collections I’ve ever seen," he says.
Thanks to the avarice and incompetence of Exactis, a huge swath of the U.S. population is about to learn just how problematic it is to have a gigantic trove of personal information, including yours, freely available online to literally anyone who wanted access. Much like Equifax's security failure, which leaked the SSNs and credit card information of 145 million-plus Americans, along with tens of millions of Brits, the true impact of Exactis' security failures will likely take years to fully manifest, but the cost to society of failing to regulate the practice of data profiling people without their knowledge and informed consent is already significant, and growing with each passing day.

The inevitable sequence of public outcry, Congressional hearings, and class action lawsuits should be getting underway shortly. We can hope that no violence or deaths follow as a result of this breach... but given recent history, I'm not holding out much hope of avoiding that grisly outcome.

Seriously, there needs to be a law against this shadow profiling shit.

June 15, 2018

This week in Facebook's fiasco

Now that E3 is over, we can return our attention where it belongs: Facebook, who still haven't really cleaned up their mess, and whose attempts to smooth it over with PR aren't going as well as they might have liked.

Let's start with two pieces from the Guardian:
Apple strikes blow to Facebook as it clamps down on data harvesting
Rules appear to target services like Onavo Protect, which claims to protect user data even as it feeds information to Facebook
Apple has updated its rules to restrict app developers’ ability to harvest data from mobile phones, which could be bad news for a Facebook-owned data security app called Onavo Protect.
Onavo ostensibly provides users with a free virtual private network (VPN) which, it claims, helps “keep you and your data safe when you browse and share information on the web”. What is not immediately obvious is that it feeds information about what other apps you are using and how much you are using them back to the social networking giant.
“The problem with Onavo is that it talks about being a VPN that keeps your data private, but behind the scenes it’s harvesting your data for Facebook,” said Ryan Dochuk, CEO of the paid-for VPN TunnelBear. “It goes against what people generally expect when they use a VPN.”
Onavo has been a Trojan horse for Facebook (in the classical sense, not as malware), allowing it to gather intelligence on the apps people use on tens of millions of devices outside its empire. This real-time market research highlights which apps are becoming popular and which are struggling. Such competitive intelligence can inform acquisition targets and negotiations as well as identify popular features it could copy in rival apps.
Just in case you needed a reminder that Cambridge Analytica were just the canaries in Facebook's coal mine, and that the root of the problem is Facebook itself. Yes, Facebook literally went undercover on users' devices, pretending to be a privacy-protecting VPN tool, and then mined those devices for additional data which was fed back to the parent company. It might not count as a trojan in the technical sense, since it doesn't let Facebook literally control users' systems, but that's only because Facebook wants the data that those users generate, not control of their devices. Which means that Facebook isn't just a risk to your privacy anymore: they're a risk to your security, too.

Which could be why the Guardian also ran this opinion piece today, from Emma Brockes.

May 24, 2018

What could possibly go wrong?

Today in Facebook, courtesy of David Bloom at Forbes:

Facebook Wants Your Nude Photos; What Could Possibly Go Wrong?

In a bid to wrap up the race for the Tin Ear of the Year Award before June 1, Facebook has begun asking its 2.2 billion users to discreetly share their indiscreet nude photos with the company. The plan, they say, is to train Facebook to block the images you don't ever want on Facebook, in cases such as revenge porn.
The company is partnering with several third-party groups – such as the Cyber Civil Rights Initiative and the National Network to End Domestic Violence – to distribute review forms to those who've had to deal with former sexual partners improperly posting their sensitive images.
Requesters are given a one-time upload link to send those images to Facebook, where they are reviewed by "a handful of specially trained members" of the company's burgeoning content-review team.
Those team members will create what's effectively a digital fingerprint of the images so that Facebook's systems can automatically recognize and block the images before they can be seen by anyone outside the company. The program is undergoing trials in the United States, United Kingdom and a couple of other territories.
This all sounds pretty good. Except, remember that this is the same company that has had such a bumpy time the past couple of years controlling what's happening to personal data on its site, and what's being shared with outside companies.
You may recall that little kerfuffle last year when it became clear that as many as 80 million people had their data improperly shared/exposed to third-party providers during the 2016 elections as part of relatively routine Facebook operations.
Does the expectation that people will send Facebook their nudes, specifically so that Facebook's crack team of privacy experts can examine them, strike anyone else as being even creepier than Facebook's usual level of creepy? I mean, really, just... ick.

There's also the problem that, even if it works, this will only help prevent revenge porn posts on Facebook and its subsidiaries. It won't help with revenge porn posts on, say, 4chan, or Reddit, "which previously had revenge porn subreddits and still has issues." Wouldn't it be better to get all of the major social media sites together to start an independent service that can create anonymized digital fingerprints of photos, which any social media site could use to filter out likely revenge porn posts and flag potential abusers?
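Facebook hasn't published the details of its matching system, but the standard technique for this kind of "digital fingerprint" is perceptual hashing (the approach behind tools like Microsoft's PhotoDNA): derive a compact signature from an image's visual structure, so that re-encoded or slightly altered copies still match, then compare signatures instead of the images themselves. Here's a minimal sketch of one such scheme, the "average hash"; this is an illustration of the general idea, not Facebook's actual (unpublished) algorithm, and the tiny 4x4 "images" stand in for real photos that would first be downscaled:

```python
def average_hash(pixels):
    """Compute a simple perceptual 'average hash' of a grayscale image.

    pixels: a 2D list of grayscale values (0-255), already downscaled
    (real systems first resize the image to something like 8x8 and
    convert to grayscale, which makes the hash robust to scaling and
    recompression). Returns an integer whose bits record which pixels
    are brighter than the image's mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means 'probably the same image'."""
    return bin(h1 ^ h2).count("1")

# Two nearly identical 4x4 'images': an original and a re-encoded copy
# with slight pixel noise, standing in for a downscaled photo.
original = [[200, 200,  10,  10],
            [200, 200,  10,  10],
            [ 10,  10, 200, 200],
            [ 10,  10, 200, 200]]
reencoded = [[198, 201,  12,   9],
             [199, 203,  11,  10],
             [  9,  12, 199, 201],
             [ 11,  10, 202, 198]]

h1 = average_hash(original)
h2 = average_hash(reencoded)
assert hamming_distance(h1, h2) == 0  # noise didn't change the fingerprint
```

The privacy-relevant point is that only the integer fingerprints need to be stored and shared, never the photos themselves; a clearinghouse holding nothing but hashes could let every participating site block matching uploads. (Of course, that's exactly the part of the pipeline Facebook's trial still requires humans to bootstrap, by reviewing the original image.)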

So, yeah, it's nice that Facebook is finally trying to do something about revenge porn on its service, but timing is everything, especially for something this sensitive, and I'm finding it hard to imagine a worse time for Facebook to launch something like this. Talk about tone deaf.

May 03, 2018

Cambridge Analytica finally killed by the scandal that they caused...

... and somehow, they didn't see it coming. From CBC News:
The British data analysis firm at the centre of Facebook's privacy scandal is declaring bankruptcy and shutting down.
London-based Cambridge Analytica blamed "unfairly negative media coverage" and said it has been "vilified" for actions it says are both legal and widely accepted as part of online advertising.
"The siege of media coverage has driven away virtually all of the company's customers and suppliers," the company said in a statement on Tuesday. "As a result, it has been determined that it is no longer viable to continue operating the business."
The company said it has filed papers to begin insolvency proceedings in the U.K. and will seek bankruptcy protection in a federal court in New York. Employees were told on Wednesday to turn in their computers, according to the Wall Street Journal.
Facebook said it will keep looking into data misuse by Cambridge Analytica even though the firm is closing down. And Jeff Chester of the Center for Digital Democracy, a digital advocacy group in Washington, said criticisms of Facebook's privacy practices won't go away just because Cambridge Analytica has.
"Cambridge Analytica's practices, although it crossed ethical boundaries, is really emblematic of how data-driven digital marketing occurs worldwide," Chester said.
"Rather than rejoicing that a bad actor has met its just reward, we should recognize that many more Cambridge Analytica-like companies are operating in the conjoined commercial and political marketplace."
Just a little reminder, in case you still needed it, that there's more where Cambridge Analytica came from, and Facebook's fiasco is far from over. I have to disagree with Jeff Chester on one point, though: I think that most of us can still remember that, while also rejoicing in Cambridge Analytica's demise.

The other Facebook histoire du jour? The Facebook engineer, and professional stalker, that they had to fire for abusing FB's user information database, of course.