Showing posts with label Regulation. Show all posts

February 18, 2019

"Digital gangsters"

Facebook's had a relatively quiet couple of weeks, with no major new scandals breaking and not much news on the investigation front. That period of calm appears to be drawing to a close, though, with the UK Parliament firing the starting gun on the race to end Facebook's current status quo, as reported by Gizmodo:
The UK Parliament’s Digital, Culture, Media and Sport Committee was spurred to launch an investigation of social media in 2017 following revelations regarding Russian election-meddling and later, the Cambridge Analytica scandal. The resulting 108-page report takes Facebook to task on numerous issues including violating its own privacy agreement with users and participating in anti-competitive practices. “Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the committee wrote.
[...]
One of the report’s more interesting details is that it claims the Information Commissioner’s Office (ICO) shared the names of three “senior managers” at Facebook with the committee who allegedly were aware of the Cambridge Analytica data breach prior to the 2015 date that Facebook has claimed it first learned about the incident. The managers’ names were not revealed in the report but the committee found it unconscionable that the issue wasn’t brought to Zuckerberg’s attention until 2018. “The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests,” the committee wrote.
That sounds... potentially actionable. I wonder if the names have been withheld because active criminal investigations are underway?

February 03, 2019

This is how Facebook's week ends

It's been something of a roller-coaster week for Zuck & co. With the revelations of their creepy teen-surveillance Facebook Research app and their complicity in teen credit card fraud book-ending their strong financials and resulting share price gain, it was a little tough to tell whether this week should go on the books as a win or a loss for FB.

Well, it turns out the week wasn't over yet, and the latest report resolves that question in style: this week is definitely an "L" for Facebook.

From Gizmodo:
Amid the constant scandals swirling around social media giant Facebook and its questionable handling of user data, at least six state attorneys general have launched their own investigations of the company, Bloomberg reported this week.
Two distinct groups have formed, according to Bloomberg’s report: Pennsylvania and Illinois have joined Connecticut in an investigation of “existing allegations,” though the report does not mention what those are. Officials in New York, New Jersey, and Massachusetts, “which were already known to be probing Facebook, are seeking to uncover any potential unknown violations,” a source told the news agency.
Oh, my. That doesn't sound good.

EA: The fun is about to end

In the process of writing my post about the Epic Games/Metro: Exodus mess, I came across this SeekingAlpha article on what the future holds for EA Games, and OMG is it ever a must-read.
EA needs to change its business model fundamentally. Its current model alienates players and makes EA more susceptible to competition. A significant source of profits, lootboxes, are being regulated away. Players are moving to mobile and free to play, where EA is weaker than its competitor Activision Blizzard. We believe that EA is currently heading towards another inflection point where players will start leaving en masse. EA could've already crossed the inflection point. Either way, things aren't looking good.
"Good short candidate" refers to the investment strategy of short-selling: selling stock that you've borrowed for just that purpose. It's basically a bet that the stock's value is about to drop; if it does, you pocket the difference between what the stock was worth when you sold it and what it was worth when you had to buy it back to "return" the shares you'd "borrowed."
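To make the mechanics concrete, here's a toy calculation with invented numbers (none of these figures relate to EA's actual stock):

```python
# Hypothetical short sale, with made-up numbers for illustration only.
shares = 100
sell_price = 95.00   # price when you sell the borrowed shares
buy_price = 80.00    # price when you buy them back to return them

# Profit if the bet pays off; a negative number here would be a loss,
# and losses on a short are theoretically unlimited if the stock rises.
profit = shares * (sell_price - buy_price)
print(profit)  # 1500.0
```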

The quoted passage, BTW, is the conclusion from fifteen pages of analysis, all of which are worth reading if you're at all interested in the video game business. Seriously, go read the whole thing, because it's fascinating in a way that stock analysis normally isn't.

The various section titles in the article should give you some idea of what to expect, though.

The "Star Wars" section features one of the most glorious EA Star Wars memes I've ever seen, too:

EA's Star Wars games in a nutshell.

Did I mention that this article was a great read? Seriously, go read the whole thing right now.

January 30, 2019

This week in Facebook

Facebook's headlines this week are all about the children, and how Zuckerberg & co. are knowingly exploiting them.

First up, this piece from TechCrunch:
Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.
Pro tip: If you're cloaking your involvement in a shady project because you know it's too shady to be publicly associated with... you should probably be rethinking the whole enterprise. Just saying.

Facebook's "Project Atlas" shenanigans should sound familiar: it wasn't that long ago that Facebook's Onavo app was removed from the iOS app store for violating Apple's terms of service. And the new app is pretty comprehensive, potentially allowing the collection of "photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed." And while Facebook apparently pulled an about-face at 11:20pm PT (when TC's piece was updated), announcing that FB was removing the app from Apple phones, they apparently have no plans yet to do the same on Android phones.

Also, it should be noted that most jurisdictions don't allow 13-year-olds to sign legally binding contracts, which means that Facebook's use of just-barely-teens for this effort may be not-quite-legal. Which brings us to the second piece of Facebook's sketchy, teen-involving bullshit, as reported by Ars Technica:
Two Democratic senators have asked Facebook CEO Mark Zuckerberg to explain why the social network apparently "manipulated children into spending their parents' money without permission" while playing games on Facebook.
"A new report from the Center for Investigative Reporting shows that your company had a policy of willful blindness toward credit card charges by children—internally referred to as 'friendly fraud'—in order to boost revenue at the expense of parents," US Sens. Edward Markey (D-Mass.) and Richard Blumenthal (D-Conn.) wrote in a letter to Zuckerberg today. "Notably, Facebook appears to have rejected a plan that would have effectively mitigated this risk and instead doubled down on maximizing revenue."
Because parents didn't know that children would be able to make purchases without additional verification, "many young users incurred several thousands of dollars in charges while playing games like Angry Birds, Petville, Wild Ones, and Barn Buddy," the senators' letter said.
What, did you think that Facebook had dodged responsibility for this one? Well, think again, because the Democratic-controlled U.S. House of Representatives aren't about to let this go, and their colleagues in the U.S. Senate look to be just as keen to get in on the regulating-of-Facebook action. I told you that Facebook's troubles were just getting started.

And so, with two different Facebook-exploits-teens stories in the headlines, we can now head into Wednesday... and the rest of the week. That's right, folks, Facebook's week isn't even over yet. Winning!

December 20, 2018

Facebook's very bad year gets even worse

It turns out that Facebook couldn't even make it through one more day before getting hit with more bad news. This time, though, it's not news of their incompetence, or their outright malice, that's wrecking their week; rather, it's news of actual consequences for Facebook. Finally.

As reported by The Washington Post:
[...]
The D.C. case threatens to develop into an even worse headache for Facebook. Racine told reporters that his office has “had discussions with a number of other states that are similarly interested in protecting the data and personal information of their consumers,” though he cautioned there is no formal agreement for them to proceed jointly. And the attorney general’s aides said they could add additional charges to their lawsuit as other details about Facebook’s privacy lapses become public.
Hello, again, Christopher Wylie! I'd honestly forgotten that he even existed. But I digress...

December 19, 2018

Fucking Facebook's terrible year isn't over yet

With two more weeks to go, Facebook's horribad year is still getting worse, as reported by Gizmodo:
According to a bombshell report in the New York Times on Tuesday, Facebook’s behind-the-scenes efforts to give select corporate partners access to user data have been far more expansive than previously reported, including allowing certain third-party companies access to user contact lists and access to users’ private messages.
Yes, that’s right, Facebook gave Netflix and Spotify the ability to read users’ messages, and other tech giants including Microsoft, Amazon, and Sony access to data on users’ friends, according to hundreds of internal documents obtained by the paper and interviews with dozens of “former employees of Facebook and its corporate partners.” 
Not only did Facebook allow 150 companies, including Microsoft, Netflix, Spotify, Amazon, and Yahoo, access to users’ private messages, they also allowed them unprecedented access to users’ personal data. According to BuzzFeed News:
Facebook allowed Microsoft’s search engine Bing to see the names of nearly all users’ friends without their consent, and allowed Spotify, Netflix, and the Royal Bank of Canada to read, write, and delete users’ private messages, and see participants on a thread.
Let that sink in for a second: these companies could not only see your messages, they could delete any of them which they didn't like, allowing them to censor Facebook users without their consent, and possibly even without them noticing. It's the nuclear option of damage-control PR. And that's not all they could do.
It also allowed Amazon to get users’ names and contact information through their friends, let Apple access users' Facebook contacts and calendars even if users had disabled data sharing, and let Yahoo view streams of friends’ posts “as recently as this summer,” despite publicly claiming it had stopped sharing such information a year ago, the report said. Collectively, applications made by these technology companies sought the data of hundreds of millions of people a month.
So, yes, in case you were wondering, Facebook's regard for your personal privacy, safety, and fundamental right to self-expression really is utterly non-existent, and the situation is far worse than we knew... with, doubtless, even worse revelations to come. Because this is just what we're learning in spite of Facebook's best efforts to keep all of this under wraps; what we'll learn next year, when the Democratic Party takes control of the U.S. Congress and its various investigative and oversight committees, is anyone's guess, but there's almost certainly more to learn here.

July 10, 2018

Today in Facebook

It's been pretty quiet on the Facebook front lately, with FB's shareholders even driving their share price up, apparently in the belief that the worst was over. Today, however, it's looking like that burst of optimism might have been premature.

As reported by the Globe and Mail:
Britain’s privacy commissioner plans to fine Facebook for violating data protection laws and bar Canada’s AggregateIQ from handling data belonging to British citizens as part of a sweeping investigation into how personal data have been used during election campaigns.
The country’s Information Commissioner’s Office has spent months investigating how political consultants at Cambridge Analytica obtained personal data from 87 million Facebook users and then used that information to target voters during elections in the United States and Britain. In a report on its findings to be released Wednesday, the agency said it is targeting Facebook with a £500,000 ($871,000) fine for failing to properly handle personal data and for failing to respond in a robust way when the company found out about the scale of the data harvesting.
The ICO’s findings represent the first action by a national regulator against Facebook over the Cambridge Analytica scandal.
The fine is the largest allowed by UK law, although The Reg describes it as "18 mins of profit" for Facebook, which is actually more than I expected considering how gigantic FB have become. One thing that's been pretty clear for some time, though, is that FB appear to have violated laws in any number of jurisdictions worldwide, including the U.S., the EU, Canada, and China, just for openers. This fine from the U.K. is likely only the first of many.
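For the curious, here's a rough back-of-the-envelope version of that "18 mins of profit" figure. The profit number and the rough GBP-to-USD conversion below are my own assumptions for illustration, not figures from The Reg's article:

```python
# Back-of-the-envelope check of the "fine = minutes of profit" claim.
# Assumptions (mine, not the article's): Facebook's quarterly net income
# was roughly $5 billion around this time, and 500,000 GBP converts to
# roughly $660,000 USD.
quarterly_profit_usd = 5.0e9
minutes_per_quarter = 91 * 24 * 60           # ~131,040 minutes in a quarter
profit_per_minute = quarterly_profit_usd / minutes_per_quarter

fine_usd = 660_000.0
minutes_of_profit = fine_usd / profit_per_minute
print(round(minutes_of_profit, 1))           # 17.3 -- same ballpark as "18 mins"
```

The exact answer moves with the assumed profit and exchange rate, but the order of magnitude doesn't: the maximum fine is minutes of earnings, not a deterrent.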

The fact that the largest penalty available to UK regulators amounts to chump change for the global tech giants they're ostensibly regulating provides a very clear demonstration of just how far behind the times our legal framework is. The world needs better tools for controlling these emerging mega-corps, and quickly.

June 28, 2018

Magical thinking, or, securing the Internet of Things

The news about Exactis' failure to secure their enormous trove of shadow profile data got me thinking about security in general: about the extent to which corporations, of whose existence we might be completely ignorant, are already harvesting all manner of highly personal information about you and me, not only without our informed consent, but without us even knowing when or how often it's happening. And that got me to thinking about the other data collection scheme that Big Data is so keen on lately: IoT, the so-called Internet of Things.

The idea that everything in your environment that incorporates a microchip would inevitably be connected to the Internet, and thus vulnerable to, and controllable by, any sufficiently sophisticated hacker, is something which has concerned me for some time now. I'm not convinced that it's possible to secure such a wide range of devices, from an equally wide range of manufacturers; and even if it were possible, I'm not convinced that the measures required to make it happen are desirable. At all.

I'm especially un-sold on the capacity for Artificial Intelligence to succeed at this task when human intelligence has repeatedly failed, or to compensate for the combination of ignorance, incompetence, apathy, and/or greed that will doubtless be a defining feature of IoT for a long time to come. First things first, though; let's start by describing the scope of the problem.

Facebook's "shadow profiles" are not unique, and that's a huge problem

Facebook's practice of building shadow profiles, collecting enormous amounts of personal data about people who don't have, or who never had, Facebook accounts, with neither their knowledge nor their consent, is hugely problematic. It's not just the ethical and privacy concerns, with an enormous corporation building a detailed profile which can be used to target you for all manner of subtle (or less-than-subtle) influencing; there's also a security concern here, because the sort of information that accumulates in these shadow profiles can be used to facilitate harassment, intimidation, or assault; spear-phishing attacks; identity theft; doxxing; swatting; and more. Lives may literally depend on the ability of the profilers to keep their shadow profile databases secure.

Enter Exactis, a marketing firm that you've probably never heard of, but who you're going to learn a lot more about in the coming weeks. From WIRED:
"It seems like this is a database with pretty much every US citizen in it," says Troia, who is the founder of his own New York-based security company, Night Lion Security. Troia notes that almost every person he's searched for in the database, he's found. And when WIRED asked him to find records for a list of 10 specific people in the database, he very quickly found six of them. "I don’t know where the data is coming from, but it’s one of the most comprehensive collections I’ve ever seen," he says.
Thanks to the avarice and incompetence of Exactis, a huge swath of the U.S. population is about to learn just how problematic it is to have a gigantic trove of personal information, including yours, freely available online to literally anyone who wanted access. Much like Equifax's security failure, which leaked the SSNs and credit card information of 145 million-plus Americans, along with tens of millions of Brits, the true impact of Exactis' security failures will likely take years to fully manifest, but the cost to society of failing to regulate the practice of data profiling people without their knowledge and informed consent is already significant, and growing with each passing day.

The inevitable sequence of public outcry, Congressional hearings, and class action lawsuits should be getting underway shortly. We can hope that no violence or deaths follow as a result of this breach... but given recent history, I'm not holding out much hope of avoiding that grisly outcome.

Seriously, there needs to be a law against this shadow profiling shit.

April 16, 2018

The "real" Facebook scandal starts to gain traction

It turns out that Facebook's "shadow profiles," wide-ranging data sets about users and non-users alike, might finally be getting the attention they deserve, rather than all of the attention being on the Cambridge Analytica angle... overseas, anyway. From the Sydney Morning Herald:
Lawmakers and privacy advocates immediately protested the practice, with many saying Facebook needed to develop a way for non-users to find out what the company knows about them.
Asked if people could opt out, Facebook added, "There are basic things you can do to limit the use of this information for advertising, like using browser or device settings to delete cookies. This would apply to other services beyond Facebook because, as mentioned, it is standard to how the internet works."
Facebook often installs cookies on non-users' browsers if they visit sites with Facebook "like" and "share" buttons, whether or not a person pushes a button. Facebook said it uses browsing data to create analytics reports, including about traffic to a site.
If you were wondering why one of FB's fourteen class action lawsuits was filed by a plaintiff who “does not have, and has never had, a Facebook account,” then wonder no longer, because this is why. Creepy AF... and possibly illegal, since people without FB accounts have never consented to having Facebook build a data profile of them. There's nothing obviously security-related about the practice, either; Facebook appear to have no legitimate need for this data, they just want it.

April 12, 2018

Slightly unexpected...

After Tuesday's underwhelming performance by U.S. Senators, I wasn't expecting Mark Zuckerberg's testimony before the House to go much differently. It turns out that I might have been a bit too pessimistic about that.

From the NY Times:
While Tuesday’s Senate hearing contained tough questions, the lawmakers were generally deferential to the executive. That was less the case in the House, where lawmakers repeatedly interrupted Mr. Zuckerberg and chided him for not answering questions to their satisfaction.
Lawmakers on both sides of the aisle on Wednesday pushed Mr. Zuckerberg on his company’s handling of user data. They were particularly focused on the platform’s privacy settings, which put the onus on users to protect their privacy.
[...]
Representative Greg Walden, Republican of Oregon and chair of the Energy and Commerce Committee, kicked off the hearing by declaring that “while Facebook has certainly grown, I worry it has not matured.”
Mr. Walden floated the prospect of regulation, saying that “I think it is time to ask whether Facebook may have moved too fast and broken too many things.”
Later in the hearing, Mr. Zuckerberg said regulation was “inevitable.” But he repeated that the right kind of regulation mattered and he pointed out that some regulation could only solidify the power of a large company like Facebook, which could hurt start-ups.
Facebook's CEO not only recognizing that new regulation is "inevitable," but asserting that new regulations should not be unfairly advantageous for FB? Unexpected.

April 10, 2018

The U.S. Senate put their kid gloves on for Zuckerberg

Yesterday, I was genuinely curious how Facebook CEO Mark Zuckerberg's testimony would go.

On the one hand, it was a pretty friendly panel, stuffed with elected officials whose election and reelection campaigns receive substantial donations from Facebook, and from FB-connected PACs. On the other hand, the winds of public opinion have been blowing very much against FB for the past two weeks, and Senators wanting to pander to voters couldn't have asked for an easier target. It was going to be either a day of fireworks, or a snooze-fest that produced virtually no new information of note, but which one?

Well... now we know. Snooze-fest, it is. As reported by HuffPost:
After initially apologizing and accepting responsibility for failing to protect user data, Facebook CEO Mark Zuckerberg declared support for some vague form of regulation as 44 senators questioned him during his first congressional testimony.
“My position is not that there should be no regulation,” Zuckerberg said. “I think the real question, as the internet becomes more important in people’s lives, is what is the right regulation?”
Under more direct questioning, the 33-year-old billionaire refused to endorse any specific regulatory proposal. He remained on the defensive, touting his company’s idealistic vision.
Sen. Ed Markey (D-Mass.) asked Zuckerberg if he would back legislation to mandate that digital platforms like Facebook obtain affirmative consent from users to collect their data for targeted advertising. Zuckerberg dodged: “In general, I think that principle is exactly right.”
When Sen. Maria Cantwell (D-Wash.) raised the possibility of the U.S. enacting data protection laws similar to the new rules about to go into effect in the European Union, he dodged again. “It’s certainly worth discussing,” he said.

April 05, 2018

Facebook admits that its tools were misused on a massive global scale

From The Washington Post:
[...]
Facebook said in a blog post Wednesday, “Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped.”
Yes... they're talking about doxxing and identity theft on a massive scale.

This "very useful" search functionality was, naturally, enabled by default and deliberately difficult to disable -- after all, how else were you going to find people on Facebook to expand your network of data nodes? Facebook would also have been aware of the body of research which "has consistently shown that users of online platforms rarely adjust default privacy settings and often fail to understand what information they are sharing," facts which I expect to feature prominently in several of the fourteen (and counting) class action lawsuits that have already been filed here.

Still, there's really no way around the simple realities here: 1) Facebook cannot and will not effectively police themselves; and 2) Facebook are unlikely to face new regulations in the U.S. anytime soon, unless Democrats manage to win veto-proof majorities in both the House and the Senate. That makes the question of whether Facebook will broadly implement privacy protections like those found in the GDPR an even more pressing one. It also means that meaningful change will have to come from outside the U.S.

Thankfully, that second thing seems to be happening.

April 03, 2018

Facebook takes a step backwards

Was it just this morning that I was slow-clapping for Facebook's upcoming app management and fact-checking features? Did I really say in that post that today had actually been better for Facebook than yesterday?

Well, it would seem that Zuck ain't havin' none o' that, because he's gone and stuck another of his feet squarely in his own mouth.

As reported by Reuters:
Facebook Inc Chief Executive Mark Zuckerberg said on Tuesday the social network had no immediate plans to apply a strict new European Union law on data privacy in its entirety to the rest of the world, as the company reels from a scandal over its handling of personal information of millions of its users.
Zuckerberg told Reuters in a phone interview that Facebook already complies with many parts of the law ahead of its implementation in May. He said the company wanted to extend privacy guarantees worldwide in spirit, but would make exceptions, which he declined to describe.
“We’re still nailing down details on this, but it should directionally be, in spirit, the whole thing,” said Zuckerberg. He did not elaborate.
His comments signal that U.S. Facebook users, many of them still angry over the company’s admission that political consultancy Cambridge Analytica got hold of Facebook data on 50 million members, may soon find themselves in a worse position than Europeans.
Seriously, I'm starting to think that he's a closet centipede, or something. Because he can't possibly be this much of an idiot.

Adopting the EU standards for user privacy across Facebook's entire operation, as the best practice available in this area, should be a no-brainer at this point. Like allowing users to easily delete their accounts, adopting EU standards globally would be exactly the kind of bold, pro-consumer move that would turn public opinion back in Facebook's favour, and stem the flow of exiting users. It might even save them money, since it would allow them to run a single platform in all markets, rather than a patchwork of platforms in various countries, each of which runs differently.

There's only one reason not to do this, really, and that's if FB expect to make more money from exploiting users in places (like the U.S.) with lax regulations. Zuckerberg had a clear opportunity to put Facebook's money where his own mouth is, and promise to do better for all of Facebook's users, rather than just the ones protected by a strong regulatory regime; instead, he's put his own foot in his mouth again, and ensured that the day's news cycle will end with a discussion of what they're not doing to protect their users, rather than being about the things that they are doing, which they announced earlier.

To describe this as moronic is to fail to do it justice. I'd ask who the fuck let this happen... except that it was Mark Zuckerberg who approved both the new features of this morning, and uttered the tone-deaf statement of this evening. He's the CEO of Facebook; the buck stops with him.

GG, Zuck. GG.

#FacebookIsTheProblem
#DeleteFacebook

UPDATED: APRIL 5th:

Apparently someone has explained to Zuckerberg just how badly this was playing in the media, because he's walked it back a bit. As per HuffPost:
Asked specifically if he’d be willing to implement new privacy policies in the U.S. similar to the strict new privacy laws rolling out in the European Union, Zuckerberg said he was comfortable with the idea but not in the same format.
When the EU law takes effect on May 25, Facebook will have to get users’ explicit consent to collect data and be much more upfront about how it uses that data. Zuckerberg said Facebook “intends to make the same controls and settings available everywhere, not just in Europe.” That’s subject to some flexibility, however ― a variation he attributed to a patchwork of global laws on the matter.
So Facebook will implement GDPR as the standard Facebook-wide... except that it will look different in different countries, depending on what's actually required by the laws in those countries. Which is being hailed as good news by people who've failed to realize that Zuckerberg's said that Facebook both will and won't adopt GDPR world-wide because it represents the best practices available for privacy. Zuck wants credit for saying that he'll provide the strongest possible privacy protection to users across the board, although he still wants the flexibility to implement something less strong than GDPR in markets where GDPR isn't the law of the land.

That's... how do you say... horseshit.

It's possible that the laws in some jurisdictions actually contradict the GDPR standards, of course, but rather than just say that, Zuck went vague. All he needed to say was that Facebook would implement GDPR as the strongest available standard for every market where its provisions weren't actually contradicted by other laws; and, further, that Facebook would lobby for GDPR to be adopted as the standard in jurisdictions where their users would be subject to lesser protections because GDPR provisions can't legally be implemented. What was needed was a clear, concise, and unambiguous statement of intent, here: a new dedication to their users' safety, security, and privacy that Facebook had previously not demonstrated.

Which leaves us exactly where we were; with Facebook planning to meet GDPR standards everywhere, except where they won't, and with no clarity about who will and won't be covered, or exactly why those left exposed won't be benefiting from the new practices. It's PR pablum, acknowledging that they need to do more on this issue, but without actually committing to doing anything more on this issue than they'd already be forced to do, in order to comply with EU laws. It's absolutely the bare minimum he could say, while managing to say nothing at all.

And, yet, it seems to be working. Everyone seems to be reporting this as if Zuckerberg had actually said what they wanted to hear, instead of hearing what he actually said. So, I can't exactly call it a fail; it's accomplishing what he wanted to accomplish. As failures of journalism go, that's pretty disheartening.

March 21, 2018

Facebook is the problem

I don't think my previous post quite made this clear, but there's a very simple reason why I've been posting about the Cambridge Analytica story here, on my tech blog, rather than over there, at my political blog. It's because the political angle of this never struck me as being the most important part of the story; because the problem here really isn't Cambridge Analytica, per se.

Yes, Steve Bannon was (and probably still is) a real piece of work, and the company to which he was attached did do some very bad things, but Cambridge Analytica didn't do anything that Facebook didn't allow them to do, at the time. Yes, CA scraped waaayyy more data from FB than Zuckerberg's crew expected, and clearly abused it, and then behaved in almost cartoonishly villainous ways, but the real problem is that FB had the data available to sell in the first place.

To get a real idea of how big, and bad, the problem is, consider the following hypothetical scenario:
  1. You "friend" or "follow" your doctor on Facebook. This is useful; it allows you to book appointments more easily, and keeps your doctor's contact info readily available if you need it...
  2. ... and you do need it, because you've just been diagnosed with something that's chronic, serious, and both difficult and expensive to treat. Your doctor mentions a few different medications that he might want you to try, and tells you who makes them, so you...
  3. ... follow those pharmaceutical companies online. After all, they make medications that you're now intensely interested in.
  4. Meanwhile, your doctor has reached out to some of their colleagues via a professional FB group. Your name is never mentioned, of course, just the basic fact that they have "a patient" with a difficult and unusual diagnosis, and they'd appreciate some advice.
  5. Facebook now know (a) your name, (b) your doctor's name, and (c) your interest in companies that make medications to treat (d) the condition that your doctor now also wants advice about, because it's a rare diagnosis and they've never seen an actual case before.
  6. ( a + b + c + d ) = details of your medical history, which you never divulged to anyone, but which Facebook now has in their database, access to which they now sell to...
  7. (e) anyone who might have a financial interest in knowing about the sudden increase in medical bills that you're about to incur. Have you applied for a mortgage recently? Or a job? Or extended medical insurance coverage? Would any or all of those companies maybe appreciate a solid cost-saving heads-up about your circumstance?
This may sound like a far-fetched hypothetical, but it's not. The data that Cambridge Analytica scraped from Facebook's database was of exactly this kind, and you'd better believe that they weren't the only firm to buy access to the data profile that Facebook has built of you, with neither your knowledge nor informed consent, and then sold to God knows who.
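The chain of inference above can be sketched in a few lines of code. To be clear, every name and record here is invented, and this is a toy model of the join logic, not Facebook's actual systems, but the point is how mechanical the trick is once the data sits in one database:

```python
# Toy illustration: individually innocuous signals, joined into a
# sensitive inference. All names and data below are invented.

# (a) + (b) + (c): the social graph links "you" to your doctor and to
# the drug makers you started following after your diagnosis.
follows = {
    "you": ["dr_patel", "acme_pharma", "zenith_biotech"],
}

# Which condition each (hypothetical) pharma company's products treat.
drug_makers = {
    "acme_pharma": "condition_x",
    "zenith_biotech": "condition_x",
}

# (d): topics your doctor has recently asked about in a professional group.
doctor_queries = {"dr_patel": ["condition_x"]}

def infer_conditions(user):
    """Cross-reference a user's followed drug makers against the topics
    their followed doctors are asking about; the intersection is the
    inferred medical condition the user never disclosed."""
    followed = follows.get(user, [])
    doctors = [f for f in followed if f in doctor_queries]
    conditions = {drug_makers[f] for f in followed if f in drug_makers}
    inferred = set()
    for doc in doctors:
        inferred |= conditions & set(doctor_queries[doc])
    return inferred

print(infer_conditions("you"))  # {'condition_x'}
```

A set intersection over two follow lists, and the database now "knows" a diagnosis that nobody ever typed into it.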

This is a problem because data, once sold, can't be un-sold; once Cambridge Analytica had scraped FB's data trove onto their own servers, there was nothing FB could do about it anymore. Do you know how many criminal organizations might have gained access to personal information about Facebook's users, and then re-sold it on the darknet? Because I don't, and neither do Facebook. The fact that they've just recently stopped/are about to stop doing these evil things doesn't begin to un-do all the previous evil they've already done... the effects of which their product's users (i.e. you) will now be living with for years to come, at the very least.

Facebook's fiasco

Did I ever mention that I'm not on Facebook? I did have a Facebook account at one point, but I wasn't using it, so I suspended it years ago, and I never told Facebook all that much about myself. And, oh boy, am I ever glad that I'm not heavily invested in the Facebook ecosystem, because OMG what a fucking mess.

Facebook themselves have been really quiet about the whole Cambridge Analytica situation, to such an extent that I keep seeing articles commenting on how weird their CEO's silence is at a time of such crisis for the company. That silence hasn't prevented the flood of "how to delete Facebook" articles, the start of the class action lawsuits (from their shareholders, natch, complaining that FB's mishandling of the matter amounts to negligence and is costing them money), and at least three official investigations, from the governments of the United Kingdom, Canada, and the United States. So much for their hopes that an "independent" (yet still internal) audit would be enough to keep the steadily building outrage to manageable levels.

Suddenly, the probably-inevitable failure of their VR adventure (along with everyone else's VR adventures) is looking like the least of Facebook's problems. Mark Zuckerberg has gone from being a rumoured Presidential hopeful just last year, to being a dead CEO walking at the company he himself founded, with CNBC calling for him to step aside and let Facebook COO Sheryl Sandberg take over. And an industry that was built on collecting, and then selling, their customers' private and personal information is suddenly facing the very real prospect that they'll find themselves regulated, and heavily, within the year.

And all I can say is, it's about damn time.

Seriously, the Big Brother nature of Facebook and Twitter creeps me all the way out. I mean, Google might want to collect as much information about you as possible, but they're not literally selling your private deets to companies outside Google, they're not leveraging your contact list to gather information about you without your knowledge or consent, and they're not doing this all behind a black-box wall of obscurity that allows you no visibility or control over the process at all.

My Google account settings have turned all of the data collection off, because Google lets me do that. Google lets you opt out. Facebook doesn't let you opt out, and will collect information about you that you didn't know they could access, all without even asking first. The fact that they're in the business of selling your information to others, and not just advertising services powered by that information, has always been all the way wrong, and crying out for regulation. And, as far as I can see, regulations really can't come soon enough.

And so, the last of the Wild West dot com boomers will be brought to heel, and we will spend the next decade (at least) grappling with the fallout from their recklessness, arrogance, and greed.

In the meantime, here's The Verge's guide to deleting Facebook.

February 27, 2018

California scraps rule requiring drivers in driverless cars

The self-driving autos of tomorrow are one step closer to being a reality of today. From the NYT:
The state’s Department of Motor Vehicles said Monday that it was eliminating a requirement for autonomous vehicles to have a person in the driver’s seat to take over in the event of an emergency. The new rule goes into effect on April 2.
California has given 50 companies a license to test self-driving vehicles in the state. The new rules also require companies to be able to operate the vehicle remotely — a bit like a flying military drone — and communicate with law enforcement and other drivers when something goes wrong.
The changes signal a step toward the wider deployment of autonomous vehicles. One of the main economic benefits praised by proponents of driverless vehicles is that they will not be limited by human boundaries and can do things like operate 24 hours in a row without a drop-off in alertness or attentiveness. Taking the human out of the front seat is an important psychological and logistical step before truly driverless cars can hit the road.
The requirement to have a human operator in a self-driving auto has been more a matter of politics and PR than science for a while now - all the available data showed that autonomous cars were safer when humans weren't able to override them, and all of the reported accidents involving autonomous autos have, so far anyway, been attributed to driver errors on the part of the humans in the other vehicles on the road. After all, with human driver error killing thousands of people each year in the U.S. alone, self-driving cars don't need to be perfect, they just need to be better than us, and the technology is already there.

At this point, it's worth remembering that California is typically more strict when it comes to automobile safety legislation than most other U.S. states. If CA is already on board with truly autonomous autos, then it really is just a matter of time until self-driving autos are on roads all over North America. And I, for one, will welcome our robot overlords.

I now return you to the Singularity, already in progress...

January 02, 2018

Microsoft moguls name privacy and surveillance as major issues needing attention in 2018

Apparently they're oblivious to the irony of taking a position like this one:
The past 12 months brought another important year in a decade filled with milestones relating to privacy and surveillance. And there is every reason to believe that 2018 will offer more of the same. Two specific topics rose to the top in 2017.
The first involves a sea change in privacy regulation, marked by the European Union’s General Data Protection Regulation. It moves beyond the European Data Protection Directive adopted in 1995, enough so that “GDPR” has become a well-known word across the tech sector. The new EU regulation takes effect on May 25, imposing added requirements on companies that have the personal information of European consumers, regardless of where the company is located. While many regulations tell companies what they cannot do, GDPR also tells firms what they must do. Among the changes, the regulation requires that companies ensure that European consumers can learn what information businesses have about them, change the information if it’s inaccurate, move the information to another provider if desired, and delete it if they “wish to be forgotten.” In effect it prescribes new business processes and even product features.
Gee... does that mean that Windows 10 users will be able to opt out of telemetry at some point in 2018, or have an option in the control panel to turn off Cortana without a fucking registry edit? Or is Microsoft planning to continue doing the absolute minimum required to avoid (more) regulatory action, while continuing to treat users' PCs and personal data like Microsoft's pseudo-feudal fiefdom? Place your bets!
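For reference, the registry edit in question is a single policy value (this targets the `AllowCortana` policy key, which, as far as I know, is the only supported off switch on non-Enterprise editions of Windows 10; run it from an elevated prompt, at your own risk):

```shell
:: Disable Cortana via the Windows Search policy key.
:: Takes effect after a restart or sign-out.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Windows Search" ^
    /v AllowCortana /t REG_DWORD /d 0 /f
```

That a search assistant needs a machine-wide policy key to turn off, rather than a checkbox in Settings, rather makes the point for me.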

Microsoft, naturally, quickly move on to talking about government surveillance, while blowing their own bugle about the handful of court cases they're currently litigating to prevent the U.S. Government from encroaching on their big data fiefdom, but you shouldn't be fooled into thinking that Microsoft have your best interests at heart, because they don't. This is all about protecting their interests; any benefit that you receive in the process is incidental.

November 22, 2017

Belgium rules loot boxes are gambling, and will seek to make them illegal. Hawaii may follow suit.


This story is, obviously, blowing up right now, but it looks like PC Gamer gets first post:
Last week, Belgium's Gaming Commission announced that it had launched an investigation into whether the loot boxes available for purchase in games like Overwatch and Star Wars Battlefront 2 constitute a form of gambling. Today, VTM News reported that the ruling is in, and the answer is yes.
The Google translation is a little sloppy, as usual, but the message is clear enough. "The mixing of money and addiction is gambling," the Gaming Commission declared. Belgium's Minister of Justice Koen Geens also weighed in, saying, "Mixing gambling and gaming, especially at a young age, is dangerous for the mental health of the child."
Geens, according to the report, wants to ban in-game purchases outright (correction: if you don't know exactly what you're purchasing), and not just in Belgium: He said the process will take time, "because we have to go to Europe. We will certainly try to ban it."
GamingBolt has another great post on this development:
Folks, we won. After Belgium confirmed last week that it would be investigating charges of unregulated gambling in popular video games such as Overwatch, thanks to the Star Wars Battlefront 2 controversy, they have come out with their decision- loot boxes are indeed gambling, they say, and they will move to have them banned in the European Union.
This is fantastic news for multiple reasons- if loot boxes are illegal in Europe, then publishers will have two options- either develop two versions of their games (one with loot boxes, one without), or forego a release in Europe (therefore, half the market for most western publishers) entirely. Therefore, unless publishers literally want to spend the money on balancing and QAing two progression paths for their games, they will have no chance but to remove loot boxes from their titles- if this regulation passes.
That "folks, we won" at the start of GamingBolt's post may turn out to be the most important part of this story. This really is a case of consumers banding together, and staying together, to generate sustained public pressure and bring about change, in a gaming community that was famously unable to do any of those things until five minutes ago. Gamers have finally realized that they have the power in this relationship, and can force the big AAA publishers to back down on issues that really matter to them, and this is unlikely to be the last time it happens, either.

November 17, 2017

Victory! Kinda...

It finally happened: after weeks of controversy, days of full-blown, outrage-driven consumer revolt, and yesterday's news that their bullshit business practices have prompted an investigation (with possible fines and/or straight-up banning of their product) in Belgium, a AAA videogame publisher has actually decided that the lure of filthy lucre just isn't worth it. For now, anyway.

As reported by Kotaku:
EA is temporarily pulling the microtransactions from Star Wars Battlefront II, a shocking move that comes after days of zealous fan anger and just hours before the official launch of the game.
“We hear you loud and clear, so we’re turning off all in-game purchases,” wrote Oskar Gabrielson, GM of Battlefront II developer DICE, in a blog post this evening. “We will now spend more time listening, adjusting, balancing and tuning. This means that the option to purchase crystals in the game is now offline, and all progression will be earned through gameplay. The ability to purchase crystals in-game will become available at a later date, only after we’ve made changes to the game. We’ll share more details as we work through this.”
You're reading that correctly -- they blinked. I guess CNN picking up the story was the final straw.

Even as a long-time Star Wars fan (I saw the original Star Wars in '77, back when it was still just called Star Wars, and long before Lucas' revisionist digital fuckery or those god-awful prequels), I was not interested in this game. I didn't play Star Wars Battlefront, because (a) I'm not a big FPS fan to start with, (b) I don't much care for MMOs, either, and (c) its total lack of a single-player story/campaign mode wasn't appealing at all, so the idea of buying a sequel to a game that I didn't care about wasn't something that I was ever going to consider. My objection to SWBF2's gacha wasn't motivated by any concern over how my personal experience with the game might be affected; I just hated the corporate greed and total bullshit on display on principle.

Time will tell whether EA's disastrous foray into making a mediocre full-priced game much worse by adding free-to-play monetization will have any effect on the broader videogame industry. With regulators now awake to just how shitty this stuff can become, and already investigating the game that started it all, Activision Blizzard's Overwatch, we may have already passed a tipping point: the AAA videogame industry actually backing away from an egregious bullshit practice because of its long-term costs, regardless of its short-term lucrativeness. That's a rare occurrence in any industry, not just in videogames.

So I say, Huzzah! Let us celebrate our temporary, partial victory over the forces of the most banal of greedy and evil corporate practices. The fact that gamers have finally rallied to prove that there is a point when enough is e-fucking-nuff, even in videogames, is a good thing.