Showing posts with label Ethics. Show all posts

June 02, 2020

This week in Facebook: It seems that "Criticism" was putting it mildly

It seems like just yesterday that I was blogging about Facebook's nascent culture, doesn't it? Probably because it was yesterday: specifically, yesterday morning.

By yesterday afternoon, the story had already evolved, as reported by The Huffington Post:
Facebook employees staged a “virtual walkout” Monday in protest of the social media company’s failure to address President Donald Trump’s use of its platform to spread incendiary content.
It’s unclear how many of the company’s 48,000 global employees are participating in the walkout by taking the day off. Many of Facebook’s employees were already working from home due to the COVID-19 pandemic.
A number of the virtual protesters said they planned to use their time to attend the physical demonstrations against police brutality around the country.
Wow. Just... wow.

Again, I have to emphasize just how much of a culture shift this represents. A year and a half ago, Facebook's rank and file were talking shop using burner phones to avoid having managers overhear, and complaining about the unfairness of Facebook's media coverage. Yesterday, they staged a walkout to protest Facebook itself.

June 01, 2020

This week in Facebook: Employees, including at least 3 senior managers, go public with criticism of Zuckerberg

This probably makes for some awkward water cooler conversations.

As reported by Reuters, via CBC News:
Facebook employees critical of CEO Mark Zuckerberg's decision not to act on U.S. President Donald Trump's inflammatory comments about protests across the United States went public on Twitter, praising the rival social media firm for acting and rebuking their own employer.
Many tech workers at companies including Facebook, Google and Amazon have actively pursued issues of social justice in recent years, urging their employers to take action and change policies.
Even so, the weekend criticism marked a rare case of high-level employees publicly taking their chief executive to task, with at least three of the seven critical posts seen by Reuters coming from people who identified themselves as senior managers.
This marks a shift in corporate culture for Facebook. When last we'd heard, Facebook employees were using burner phones to talk about the company because they feared what fallout might come if they spoke out openly... while simultaneously complaining about the unfairness of the media coverage of the company.

That was December of 2018; now, just a year and a half later, their senior managers are openly talking about Mark Zuckerberg being "wrong" about Facebook's ethical obligations, and Facebookers themselves are speaking out on Twitter. That's a significant culture shift of the kind that often never happens at any company of any size, absent some sort of company-wide restructuring.

Does this signify a shift in the company more broadly? Google's rank and file have successfully forced senior management to change course on ethically dubious initiatives; if Facebook's rank and file are going to embrace that sort of "ethical tech" mindset, then it could well lead to Facebook becoming a force for much less evil in the world.

If that happens, then I'll take it. Baby steps, people, baby steps.

November 30, 2018

"When sorrows come, they come not single spies, but in battalions."

For anyone who's been defending Facebook against the NY Times' Definers Media story by claiming that it was an isolated incident... it wasn't. Of course it wasn't. After everything we've learned about the depths of Facebook's rampant amorality over the course of this past year, why would you ever think it was?

As reported by TechCrunch:
Facebook is still reeling from the revelation that it hired an opposition research firm with close ties to the Republican party, but its relationship with Definers Public Affairs isn’t the company’s only recent contract work with deeply GOP-linked strategy firms.
[...]
According to sources familiar with the project, Facebook also contracted with Targeted Victory, described as “the GOP’s go-to technology consultant firm.” Targeted Victory worked with Facebook on the company’s Community Boost roadshow, a tour of U.S. cities meant to stimulate small business interest in Facebook as a business and ad platform. The ongoing Community Boost initiative, announced in late 2017, kicked off earlier this year with stops in cities like Topeka, Kansas, and Albuquerque, New Mexico.
Facebook also worked with Targeted Victory on the company’s ad transparency efforts. Over the last year, Facebook has attempted to ward off regulation from Congress over ad disclosure, even putting forth some self-regulatory efforts to appease legislators. Specifically, it has dedicated considerable lobbying resources to slow any progress from the Honest Ads Act, a piece of legislation that would force the company to retain copies of election ads, disclose spending and more. Targeted Victory, a digital strategy and marketing firm, is not a registered lobbyist for Facebook on any work relating to ad transparency.
Just as Cambridge Analytica were only the ones that got caught, rather than being the only ones mining Facebook's user data for fun, profit, and geopolitical sabotage, Facebook's ethically-challenged use of the anti-semitic, Soros-bashing Definers Media was only one of several such efforts by the firm. And this time, Sheryl Sandberg can't plead ignorance; she tried that with Definers Media, only to later admit that the decision really had crossed her desk after all. Fool me once, shame on you; fool me twice... can't get fooled again, is what I'm saying.

There is no such thing as a cockroach, folks. If you see one, the simple truth, on which you can absolutely rely, is that there are more of them hiding just out of sight. When you see a cockroach, you don't tell yourself that it's okay because you've only seen one of the pests; you call an exterminator, pronto. And it's clearly long past time to call in the social media equivalent of the Orkin Man, to deal with Facebook's sketchy, dodgy, and downright evil ways.

#FacebookIsTheProblem
#deleteFacebook

November 26, 2018

A new week in Facebook begins

Did you have a good Thanksgiving weekend? Because Facebook didn't.

Shall we start with their Black Friday news dump? Normally, dumping bad news on Friday helps to bury those ledes, as days can pass before major news outlets are able to properly cover them. Unfortunately for Facebook, though, news media organizations have adapted to this technique, a special favourite of the Trump administration, so they were primed and ready to cover whatever happened on Black Friday, including this story, as reported here by Slate:

November 16, 2018

Stop me if you've heard this one...

Under pressure over the NY Times' bombshell story detailing Facebook's own campaign of anti-Semitic disinformation, which they pursued in order to deflect criticism over the Cambridge Analytica scandal, Mark Zuckerberg offered a truly remarkable defense in response. In essence, he claimed:
  1. that everybody knew that Facebook had employed Definers Media (i.e. nothing to see here);
  2. that he himself didn't know that Facebook was employing Definers (i.e. it wasn't me);
  3. that an unnamed comms staffer had actually decided key details of Facebook's damage-control/PR strategy, apparently without anyone signing off on it (this, after testifying before Congress about how he "took full responsibility for" exactly this sort of decision-making at Facebook); and
  4. that Facebook had now cut ties with Definers, literally yesterday (i.e. now that we all know about their shady business, they'd like to be seen doing the right thing).
As reported by Gizmodo:
Today, Facebook set up a press conference addressing a bombshell report from The New York Times that alleged, among other things, that the company contracted a Republican opposition research firm called Definers to run interference on the company’s image, a job which reportedly included leaning on George Soros conspiracy theories.
On the call, Mark Zuckerberg claimed he only found out the group was working for Facebook yesterday—which would mean the CEO learned about his company’s dealings well after most reporters.
Facebook ended its relationship with Definers yesterday, following backlash from the public as well as from the president of the Open Society Foundations: one of the groups run by Soros, who has been a frequent target of anti-semitic conspiracy theories. In the wake of that abrupt dismissal, Facebook published a rebuttal which included the following statement:
Our relationship with Definers was well known by the media – not least because they have on several occasions sent out invitations to hundreds of journalists about important press calls on our behalf.
“Me personally, I didn’t know we were working with them,” Zuckerberg said during today’s Q&A. [...] Who would have known or approved of such a relationship? Zuckerberg, who previously stated that personnel matters are outside the purview of public disclosure, pinned the blame on “someone on our comms team.”
At this point, I can't help but wonder if anyone in Facebook's senior leadership has any idea what ethics even are. They've certainly behaved with reckless disregard for the truth, and utter contempt for the consequences of their decisions, with such consistency and for so long that I can no longer believe anything that they say without supporting documentation. Zuckerberg, personally, has done almost nothing but hide the truth and deflect criticism, all while espousing his own commitment to transparency, love of facts, and personal qualities of responsible leadership. The extent of the cynical hypocrisy on display here is simply breathtaking.

And I'm far from being the only person who's not buying it anymore.

June 28, 2018

Magical thinking, or, securing the Internet of Things

The news about Exactis' failure to secure their enormous trove of shadow profile data got me thinking about security in general: about the extent to which corporations, of whose existence we might be completely ignorant, are already harvesting all manner of highly personal information about you and me, not only without our informed consent, but without us even knowing when or how often it's happening. And that got me to thinking about the other data collection scheme that Big Data is so keen on lately: IoT, the so-called Internet of Things.

The idea that everything in your environment that incorporates a microchip would inevitably be connected to the Internet, and thus vulnerable to, and controllable by, any sufficiently sophisticated hacker, is something which has concerned me for some time now. I'm not convinced that it's possible to secure such a wide range of devices, from an equally wide range of manufacturers; and even if it were possible, I'm not convinced that the measures required to make it happen are desirable. At all.

I'm especially un-sold on the capacity for Artificial Intelligence to succeed at this task when human intelligence has repeatedly failed, or to compensate for the combination of ignorance, incompetence, apathy, and/or greed that will doubtless be a defining feature of IoT for a long time to come. First things first, though; let's start by describing the scope of the problem.

Facebook's "shadow profiles" are not unique, and that's a huge problem

Facebook's practice of building shadow profiles, collecting enormous amounts of personal data about people who don't have, or never had, Facebook accounts, with neither their knowledge nor their consent, is hugely problematic. It's not just the ethical and privacy concerns, with an enormous corporation building a detailed profile which can be used to target you for all manner of subtle (or less-than-subtle) influencing; there's also a security concern here, because the sort of information that accumulates in these shadow profiles can be used to facilitate harassment, intimidation, assault, spear-phishing attacks, identity theft, doxxing, swatting, and more. Lives may literally depend on the ability of the profilers to keep their shadow profile databases secure.

Enter Exactis, a marketing firm that you've probably never heard of, but who you're going to learn a lot more about in the coming weeks. From WIRED:
"It seems like this is a database with pretty much every US citizen in it," says Troia, who is the founder of his own New York-based security company, Night Lion Security. Troia notes that almost every person he's searched for in the database, he's found. And when WIRED asked him to find records for a list of 10 specific people in the database, he very quickly found six of them. "I don’t know where the data is coming from, but it’s one of the most comprehensive collections I’ve ever seen," he says.
Thanks to the avarice and incompetence of Exactis, a huge swath of the U.S. population is about to learn just how problematic it is to have a gigantic trove of personal information, including yours, freely available online to literally anyone who wants access. Much like Equifax's security failure, which leaked the SSNs and credit card information of 145 million-plus Americans, along with tens of millions of Brits, the true impact of Exactis' security failures will likely take years to truly manifest, but the cost to society of failing to regulate the practice of data-profiling people without their knowledge and informed consent is already significant, and growing with each passing day.

The inevitable sequence of public outcry, Congressional hearings, and class action lawsuits should be getting underway shortly. We can hope that no violence or deaths follow as a result of this breach... but given recent history, I'm not holding out much hope of avoiding that grisly outcome.

Seriously, there needs to be a law against this shadow profiling shit.

June 27, 2018

Reminder: VR is still not useful.
Also, tech journalism continues to be a bad joke.

Spotted today, on Tech Republic: "5 top use cases for AR/VR in business, and how you can get started."

Challenge accepted! Shall we keep score?
According to an Altimeter report by analyst Omar Akhtar, the combined market size for augmented reality (AR) and virtual reality (VR) is expected to grow exponentially from about $18 billion in 2018 to $215 billion in 2021. With this growing push toward immersive technology, many businesses are questioning how they can utilize it and how to begin implementing it into their strategies.
Analysts have been making equally aggressive growth forecasts for VR for the past two years; as yet, this forecast growth has not materialized, and there is no sign that it's going to suddenly start happening anytime soon. I've noticed that Altimeter are now rolling AR and VR together into this number, which is probably wise considering that VR is not a thing, but there's no evidence yet of AR being ready for prime time, either. Not a good start. F.
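Just how aggressive is that forecast? A quick back-of-the-envelope check of the implied compound annual growth rate, using only the Altimeter figures quoted above (the calculation is mine, not the report's), makes the point:

```python
# Implied compound annual growth rate (CAGR) of the forecast quoted above:
# a combined AR/VR market of ~$18B in 2018 growing to $215B in 2021.
start, end, years = 18.0, 215.0, 3  # figures in billions USD, 2018 -> 2021

# CAGR = (end / start) ^ (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")
```

That works out to roughly 129% per year: the market would need to more than double, every single year, for three straight years. Given that comparable forecasts have already failed to materialize, that number speaks for itself.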
Emily Olman, CEO of VR/AR at Hopscotch Interactive, said in the report that immersive technology implementation is a question of "when, not if."
"The sooner your company is able to understand the language [of AR/VR] and become fluent in what the possibilities are, the faster they are going to be able to react," Olman said in the report.
When people with a vested material interest in something keep predicting that it's just about to happen, for years on end, with no sign of it actually happening, you should be very suspicious. Someone whose job title includes "of VR/AR" definitely falls into this category, as do Altimeter themselves, whose actual business is "providing research and advisory on how to leverage disruptive technologies." I'd recommend that you take any of their recommendations with a healthy pinch of salt, if double handfuls of salt weren't actually needed here. F.
Here are the five use cases for immersive technology outlined in the report.
This is where things really start to go downhill.

June 26, 2018

And then there were four...

The conga line of tech companies whose employees have better ethics than their CEOs just got a little longer:
Protests from within the technology industry continue as hundreds of employees at Salesforce have requested that the company’s leadership reassess its work with U.S. Customs and Border Protection (CBP) following reports that authorities have separated thousands of children from their parents at the U.S–Mexico border.
In a letter addressed to Marc Benioff, Salesforce’s co-founder and CEO, the employees argue that, by providing CBP with a number of its products, the company is potentially abandoning its core values and making employees “complicit in the inhumane treatment of vulnerable people.”
“We are particularly concerned about the use of Service Cloud to manage border activities,” the letter, sent to Benioff on Monday, reads. “Given the inhumane separation of children from their parents currently taking place at the border, we believe that our core value of Equality is at stake and that Salesforce should re-examine our contractual relationship with CBP and speak out against its practices.”
Salesforce are obviously not giants like Amazon, Google, or Microsoft, but it may be just as significant that Salesforce employees are following in the ethically conscious footsteps of their colleagues from larger firms. The Silicon Valley ethos of disrupting now, and only worrying about the consequences later (if ever), may be shifting to one where the ethical considerations inherent in these tech companies' practices are one of their employees' foremost concerns.

If so, it's a long overdue development. Technology isn't just a collection of devices; it all impacts human beings at some point, and that human impact needs to be factored into the designs. All too often, though, it hasn't been, which is how Uber and Theranos happened. If we're starting to see the kind of sea change that will make the next Theranos less likely, then that can only be a good thing.

Also, the intersection of ethics, politics, and tech? It's not stopping anytime soon.

June 21, 2018

And then there were three...

Remember when I wrote that Microsoft weren't the only giant multinational whose normally-safe (and very lucrative) government contracts were becoming increasingly problematic in the Trump Era? I was thinking of Google's troubles with Project Maven. I was not expecting that Amazon's employees were literally days away from following in the footsteps of their Google and Microsoft counterparts... but here we are:
Following employee protests at Google and Microsoft over government contracts, workers at Amazon are circulating an internal letter to CEO Jeff Bezos, asking him to stop selling the company’s Rekognition facial recognition software to law enforcement and to boot the data-mining firm Palantir from its cloud services.
[...]
“Along with much of the world we watched in horror recently as U.S. authorities tore children away from their parents,” the letter states. “In the face of this immoral U.S. policy, and the U.S.’s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS.”
In May, an investigation by the American Civil Liberties Union revealed that Amazon had heavily marketed its Rekognition software to police departments and government agencies. The technology can recognize and track faces in real time, and the ACLU noted that such a powerful surveillance tool could easily be misused by law enforcement. Earlier this week, several Amazon shareholders called on the company to stop selling Rekognition to the police.
That backlash has now spread among employees as well.
“Our company should not be in the surveillance business; we should not be in the policing business; we should not be in the business of supporting those who monitor and oppress marginalized populations,” the employee letter states.
Now all that remains is for Facebook's employees to start sending open letters to Mark Zuckerberg about that company's egregiously awful business practices, and the Grand Slam of Tech Ethics will be complete. And it's about damn time, too.

Of course, it remains to be seen whether Microsoft or Amazon back down in the face of their internal revolts with the same alacrity that Google showed in walking away from their big-dollar Pentagon contract. There's no guarantee of it, in either case; Google was founded with an explicit "don't be evil" mission statement, which a significant percentage of their employees have clearly internalized, but the same may not be true of their gigantic tech rivals. Still, images of traumatized children can have a wonderfully clarifying effect on one's worldview, so one can hope.

One thing is becoming increasingly clear, though: tech companies, who had clearly hoped to remain above the political fray, are not going to be able to avoid recognizing and dealing with the political ramifications of their various businesses. Trump's malignant influence simply cannot be escaped, and wishing isn't going to make any of these issues go away.

June 19, 2018

Microsoft has even bigger problems
than the new XP

As Microsoft abandons their Windows 10-focused strategy in favour of one built on Azure, AI, and "the intelligent edge," they're trying desperately to put past failures behind them and focus on the future. As of this week, that's not going as smoothly as they might have hoped.

From Gizmodo:
Microsoft employees are putting pressure on their management to cancel a contract with U.S. Immigration and Customs Enforcement, part of a backlash against the agency’s policy of separating children from their families at the U.S. border.
In an open letter to Microsoft CEO Satya Nadella sent today, employees demanded that the company cancel its $19.4 million contract with ICE and instate a policy against working with clients who violate international human rights law. The text of the employee letter was first reported by the New York Times and confirmed by Gizmodo.
“We believe that Microsoft must take an ethical stand, and put children and families above profits,” the letter, signed by Microsoft employees, states. “We request that Microsoft cancel its contracts with ICE, and with other clients who directly enable ICE. As the people who build the technologies that Microsoft profits from, we refuse to be complicit. We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm.”
Yesterday, as word of the contract between ICE and Microsoft’s Azure cloud platform spread within Microsoft’s ranks, some employees were incensed—and considering quitting. Now, Gizmodo has learned, those outside the company are having second thoughts about working with a tech giant that’s a “proud” and willing collaborator with ICE.
Mat Marquis, a writer and developer, announced on Twitter that he was canceling his contract with Microsoft in protest against its ICE contract.
“It would be easy to think of coding as neutral—we solve puzzles,” Marquis told Gizmodo. “[...] It’s important, though, to consider the bigger picture for the things we help to build—how can it be misused, who am I supporting with it, who benefits from it and who bears the costs? I didn’t work with the Azure team; I would never have ended up there, considering my skillset. But the decision to work with an organization is a decision to help them achieve their goals, and Microsoft has shown that they’re willing to lend their name to ICE’s goals. I will not.”
Microsoft eventually responded to employees' concerns with some of the blandest PR pablum I've seen in quite some time, as if anxious to prove that they've learned nothing from their past mistakes, at least organizationally. Judging from the Microsoft employees' open letter, though, it would seem that consumers aren't the only people who are fed up with this shit, and ready to force some ethics on a giant multinational.

April 03, 2018

Today in Facebook: A better (and long overdue) effort.

Facebook's first efforts to offer an appearance of caring about the privacy and/or security of their users were pretty half-hearted, but their latest release looks like it might actually make a difference. As reported by TechCrunch:
Following the Cambridge Analytica scandal, users have flocked to their Facebook privacy settings to sever their connection to third-party apps that they no longer wanted to have access to their data. But deleting them all took forever because you had to remove them one by one. Now Facebook has released a new way to select as many apps as you want, then remove them in bulk. The feature has rolled out on mobile and desktop, and Facebook also offers the option to delete any posts those apps have made to your profile.
Facebook confirmed the launch to TechCrunch, pointing to its Newsroom and Developer News blog posts from the last few weeks that explained that “We already show people what apps their accounts are connected to and control what data they’ve permitted those apps to use. In the coming month, we’re going to make these choices more prominent and easier to manage.” Now we know what “easier” looks like. A Facebook spokesperson told us “we have more to do and will be sharing more when we can.” The updated interface was first spotted by Matt Navarra, who had previously called on Facebook to build a bulk removal option.
Facebook also says it will be automatically removing apps that users haven't accessed "in over three months;" the bulk-removal tool is just a more proactive option. And, to be clear, it is a good thing that Facebook are finally doing something like this. The problem is that this really just locks the barn door long after the horse was already stolen; the ability to finally manage these apps is good, but the damage those apps caused has already been done.

The timing of this is interesting, though. Mark Zuckerberg has been claiming that Facebook were working on some of their privacy issues over the past year, a claim of which I was pretty dismissive at the time, but this feature has come pretty quickly after the Cambridge Analytica scandal finally broke. Either it's a major piece of work on Facebook's part, in which case they really have been working on this for some time, behind the scenes; or it was actually quite easy to implement, in which case one wonders why they didn't do something of this sort months or years ago.

I don't think that's just idle speculation on my part, either; you can look forward to having those very questions asked both by Congress (during testimony, with or without subpoena support) and by the litigants of the various class action lawsuits that Facebook currently faces (probably during the discovery phase of those proceedings).

And those aren't the only changes coming to Facebook, either, at least for U.S. users.

January 05, 2018

VR's ethical concerns

OMG, am I ever happy to finally start seeing articles like this one, from VentureBeat:
Virtual reality (VR) has a great deal of potential for the betterment of society – whether it be inspiring social change or training surgeons for delicate medical procedures.
But as with all new technologies, we should also be aware of any potential ethical concerns that could emerge as social problems further down the line. Here I list just a few issues that should undoubtedly be considered before we forge ahead in optimism.
Preach, sister!

Now, some of the list are (IMHO) relatively minor things that should not be as highly ranked as they are (e.g. #6, "Unpalatable fantasies," or #9, "Appropriate roaming and re-creation," which both feel like first-world, corporate-boardroom concerns even in a world where VR porn already exists), and others (e.g. #10, "Privacy and data") are not unique to VR, but some of them are very much VR-specific and definitely concerns that I've written about before, including:

1) Sensory vulnerability

When we think of virtual reality, we automatically conjure images of clunky headsets covering the eyes — and often the ears — of users in order to create a fully immersive experience. There are also VR gloves and a growing range of other accessories and attachments. Though the resultant feel might be hyper-realistic, we should also be concerned for people using these in the home — especially alone. Having limited access to sense data leaves users vulnerable to accidents, home invasions, and any other misfortunes that can come of being totally distracted.
Remember when BestBuy closed down their Oculus Rift demo stations after discovering that their customers didn't really want to strap on a VR headset and leave themselves feeling horribly vulnerable in the middle of a public space? With "VR 2.0" apologists pushing portable VR as the natural next phase of the industry, this one has obvious relevance.

And then there's:

2) Social isolation

There’s a lot of debate around whether VR is socially isolating. On the one hand, the whole experience takes place within a single user’s field-of-vision, excluding others from physically participating alongside them. On the other hand, developers like Facebook have been busy inventing communal meeting places like Spaces, which help VR users meet and interact in a virtual social environment. Though, as argued, the latter could be helpfully utilized by the introverted and lonely (such as seniors), there’s also a danger that it could become the lazy and dismissive way of dealing with these issues.
There is also the question of whether forums like Spaces may even end up “detaching” users by leading them to neglect their real-world social connections. Studies have already demonstrated that our existing social media consumption is making many of us feel socially isolated, as well as guilty and depressed. There’s also plenty of evidence to show that real face-to-face interactions are a crucial factor in maintaining good mental health. Substituting them with VR without further study would be ill-advised.
With Colorado already running VR experiments on inmates, this sort of thing is an obvious concern. Granted, Colorado is starting small, testing the usefulness of VR in acclimating possibly-institutionalized prison inmates to the outside world prior to release, but the potential for "VR solitary," and the permanent neurological damage that can result, has to loom large over any prison-system application of this technology.

Which leads us to:

5) Psychiatric

There could also be more profound and dangerous psychological effects on some users (although clearly there are currently a lot of unknowns). Experts in neuroscience and the human mind have spoken of “depersonalization”, which can result in a user believing their own physical body is an avatar. There is also a pertinent worry that VR might be swift to expose psychiatric vulnerabilities in some users, and spark psychotic episodes. One investor has even warned that virtual reality gaming could cause real-life post-traumatic stress disorder.
Needless to say, we must identify the psychological risks and symptoms ahead of market saturation, if that is an inevitability.
I've said it before, and I'll say it again: there's a reason why new therapeutic devices are generally required to prove themselves both effective and safe before being approved for widespread use on patients. It's unlikely that a device as human-centric as VR will ever have an animal testing phase, but closely-controlled and -supervised, double-blind human trials should definitely be done before we start prescribing VR for mental illness. Yes, there's potential application here, but there's also a strong whiff of snake oil about it all, and no data yet to show that VR does more good than harm as a therapeutic tool.

The list also includes at least one item that I hadn't considered before:

8) Manipulation

Attempts at consumer manipulation via advertising trickery are not new, but up until now they’ve been 2-dimensional. As such, they’ve had to work hard to compete with our distracted focus. Phones ringing, babies crying, traffic, conversations, music, noisy neighbors, interesting reads, and all the rest. With VR, commercial advertisers will have access to our entire surrounding environment (which some psychologists argue has the power to control our behavior). This will ramp up revenue opportunities for developers, who now have (literally) whole new worlds of blank space upon which they can sell advertising.
Commentators are already warning that this could lead to new, covert tactics involving product placement, brand integration and subliminal advertising.
Again, this may be more of a first-world, corporate-boardroom concern at the moment; consumers are mostly kicking advertising's ass right now, and there's no data yet to suggest that VR ads will be any more effective, or that consumers won't quickly find (and adopt) VR ad-blocking even before VR manages to achieve widespread adoption. It's probably still worth keeping an eye on, though, and it's heartening to see articles in VentureBeat (which is aimed at potential VR investors, remember) that are exploring issues like this one.

And that's my take on the entire piece: a heartening dose of sober self-reflection for an industry that's seemingly built entirely out of hype. The entire article is worth a read, and well worth supporting just on principle, so go give them some clicks.