
January 13, 2019

So.... I guess that was CES?

Does anyone else find it weird that 2019's big Consumer Electronics Show wasted the entire week without showcasing anything for actual consumers?

I mean, sure, we got LG's roll-up OLED TV, which looks sexy but costs US$8,000, and which will need to be replaced in two years' time because of OLED's severe screen burn-in issues. Who can afford to spend $8K every two years on a roll-up gimmick TV? Who is this for?

We also got a plethora of 8K TVs, at a time when even 4K TVs aren't really a thing yet. I mean, it's great that the likes of LG are making 4K sets that are comparable in price to 1080p sets; if you need to replace your TV, and don't need a refresh rate higher than 60 Hz for any reason, then you can certainly go 4K, because it won't cost extra, so why not? But you still don't need a 4K TV for which there's almost no content available, and you definitely don't need an expensive 8K set for which there's even less content on the menu. 8K offers nothing but a high price point while delivering zero value to the consumer... which was basically the prevailing trend of CES 2019.

Oh, yes, and then there's 5G... which, again, boasts a premium price while being completely useless to consumers, since there are no 5G networks. And, no, AT&T's 5G E nonsense is not a 5G network, and does not count. Which brings us to CES 2019's other prevailing trend: straight-up lies told to consumers about expensive products that are being marketed at them without being in any way designed for them.

Worse yet, the one big technology topic that consumers actually care about was never even mentioned by any of the big exhibitors.

November 21, 2018

Is it time to cool our quantum jets?

I've previously described quantum computing as being in its "ENIAC" phase, an analogy that would put effective quantum computing several decades away, at best. After all, it took seven decades for binary computers to get from ENIAC to Android, and there was no reason to suspect that quantum computers would be any easier to develop than their binary predecessors.

According to a long article on IEEE.org, penned by Mikhail Dyakonov, a professional researcher in theoretical physics, that decades-away estimate might actually be too optimistic. Quantum computers, he argues, might not be merely difficult to develop, but impossible.
While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I’m skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?
My answer is simple. No, never.
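
For a sense of where a number like 10^300 comes from (my back-of-the-envelope arithmetic, not Dyakonov's): the state of an n-qubit machine is described by roughly 2^n continuously variable complex amplitudes, and useful machines are usually pegged at a thousand qubits or more:

```python
from math import log10

qubits = 1000                 # a commonly cited size for a useful machine
digits = qubits * log10(2)    # 2**n == 10**(n * log10(2))
print(f"2**{qubits} is about 10**{digits:.0f}")
# -> 2**1000 is about 10**301, i.e. Dyakonov's "more than 10^300" parameters
```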

July 30, 2018

Updating your OS: Windows 10 vs. Ubuntu
Or, what consumers really want.

Last week, when Microsoft announced that they were upgrading Windows 10 Update with AI to make it suck less, I wasn't sure what to think about it. I mean, it is an instance of Microsoft at least trying to fix something that Windows 10 users have been complaining about for two and a half years, but the actual announcement was somehow... underwhelming. I felt nothing; no joy, no satisfaction, no anger, no disappointment, just... nothing. And I didn't know why.

I experienced the same lack of feeling when Microsoft announced that they planned to win back consumers, having done so much over the last decade to lose those consumers in the first place. This should also have been good news, right? I mean, it seemed to indicate some awareness on Microsoft's part that it was their fault that consumers didn't care about Microsoft anymore. And yet... I felt nothing.

Maybe it was the name? As a PC gamer who'd spent the last year watching AAA gaming companies' attempts to turn everything they made into a "live service," Modern Life Services just sounded like so many horribly empty PR buzzwords. But I couldn't feel outraged about it. Once again, I was utterly unmoved, and couldn't put my finger on exactly why I was so unmoved.

And then I came across an interesting article on Forbes which brought it all into focus for me. Its author was writing about OS updates, as it turns out, but not just Windows 10 updates; you see:
Updates on both Windows and Ubuntu come in many forms. You have security updates, feature updates and software updates among others. If you’re someone who’s ever entertained the idea of ditching Windows for Linux, chances are Windows’ aggressive update behavior is a primary reason.
Microsoft’s system update policy has reached a point where it’s implementing artificial intelligence to guess when a user is away from their PC so that Windows can reboot and apply the latest updates. When I wrote about that, so many people said “hey, what about just letting the human in front of the PC make that choice?”
And just like that, it all made sense. Yeah, I said to myself, fuck yeah. That was my reaction, too.

Because that had been my reaction; I remember thinking almost exactly that, in passing, and I'm not even on Windows 10. If Microsoft are so big on winning back consumers, I asked myself, why don't they make Windows 10, their flagship product, the one Microsoft product that almost everyone uses, more user-friendly? Not by experimenting with over-complicated AI, which they'd be doing anyway because AI and "intelligent edge" are the focus of their post-Windows corporate strategy, but through the simple, pro-consumer expedient of giving control of their PCs back to Windows' users?


June 28, 2018

Magical thinking, or, securing the Internet of Things

The news about Exactis' failure to secure their enormous trove of shadow profile data got me thinking about security in general: about the extent to which corporations, of whose existence we might be completely ignorant, are already harvesting all manner of highly personal information about you and me, not only without our informed consent, but without us even knowing when or how often it's happening. And that got me thinking about the other data collection scheme that Big Data is so keen on lately: IoT, the so-called Internet of Things.

The idea that everything in your environment that incorporates a microchip would inevitably be connected to the Internet, and thus vulnerable to, and controllable by, any sufficiently sophisticated hacker, is something which has concerned me for some time now. I'm not convinced that it's possible to secure such a wide range of devices, from an equally wide range of manufacturers; and even if it were possible, I'm not convinced that the measures required to make it happen are desirable. At all.

I'm especially un-sold on the capacity for Artificial Intelligence to succeed at this task when human intelligence has repeatedly failed, or to compensate for the combination of ignorance, incompetence, apathy, and/or greed that will doubtless be a defining feature of IoT for a long time to come. First things first, though; let's start by describing the scope of the problem.

April 19, 2018

Machine Learning is a transformative technology... and its transformations won't all be good ones

This is both hilarious and terrifying. From Kotaku:
Last year, University of Washington researchers used the technology to take videos of things that former President Barack Obama had already said, then generate faked videos of him spitting out those lines verbatim in a machine-generated format. The research team stopped short of putting new words in Obama’s mouth, but Get Out director Jordan Peele and BuzzFeed have done just that in a PSA warning malicious actors could soon generate videos of anyone saying just about anything.
Using technology similar to the University of Washington study and Peele’s (fairly good!) imitation of Obama’s voice, here’s a clip of the former POTUS saying “So, for instance, they could have me say things like, I don’t know, Killmonger was right. Or uh, Ben Carson is in the sunken place. Or how about this, simply, President Trump is a total and complete dipshit.”
[...]
“We’ve covered counterfeit news websites that say the pope endorsed Trump that look kinda like real news, but because it’s text people have started to become more wary,” BuzzFeed CEO Jonah Peretti wrote. “And now we’re starting to see tech that allows people to put words into the mouths of public figures that look like they must be real because it’s video and video doesn’t lie.”
[...]
As colleague Adam Clark Smith noted before, there are countless potential uses of this technology that would qualify as mundane, like improving the image quality of video chat apps, or recreating mind-blowing facsimiles of historic speeches in high-definition video or holograms.
But machine-learning algorithms are improving rapidly, and as security researcher Greg Allen wrote at the time in Wired, it is likely only a matter of years before the audio component catches up and makes Peele’s Obama imitation unnecessary. Within a decade, some kinds of forensic analysis may even be unable to detect forged audio.
Here's the clip:


Machine Learning is only a baby step on the road to Artificial Intelligence, but it's already at least powerful enough to convincingly paste celebrities' faces onto porn actors' bodies, and the potential chaos that this almost certainly will cause in our public discourse is mind-blowing. It's still a little crude, with FakeApp Obama still lodged firmly inside the Uncanny Valley... but we're also clearly on the upslope that leads out of that valley, and not that far away from the day when even the video that you see on the Internet simply can't be trusted.

This is a lot of power to put into the hands of almost everyone on Earth, and if there's one thing that we know, it's that this power will be used for evil. FakeApp Obama is just a proof of concept; the genuinely malicious fake videos are coming, and you're going to need to be very alert to spot them. Especially since we're living in an era when the actual news of the day is... surreal, to put it lightly. Stay woke, bitches.

We now return you to the Singularity, already in progress.

April 02, 2018

Microsoft's AI focus announced immediately with Windows ML

Just in case you were thinking that Microsoft's recent restructuring, with its increased focus on AI, was some sort of half-baked reaction to Windows 10's failure to thrive, allow me to reiterate a point that Paul Thurrott made last week: Terry Myerson's ouster was premeditated, and Microsoft have been planning for this for months. As proof, I offer the latest announcement from Team Nadella: Windows ML, as reported by The Verge:
Microsoft is planning to include more artificial intelligence capabilities inside Windows 10 soon. The software giant is unveiling a new AI platform, Windows ML, for developers today, that will be available in the next major Windows 10 update available this spring. Microsoft’s new platform will enable all developers that create apps on Windows 10 to leverage existing pre-trained machine learning models in apps.
Windows ML will enable developers to create more powerful apps for consumers running Windows 10. Developers will be able to import existing learning models from different AI platforms and run them locally on PCs and devices running Windows 10, speeding up real-time analysis of local data like images or video, or even improving background tasks like indexing files for quick search inside apps.
[...]
Microsoft’s Windows machine learning model is designed to run across a number of different devices, including laptops, PCs, Internet of Things devices, servers, datacenters, and the HoloLens headset. AI processors, like Intel’s Movidius VPU, will also be supported, and Microsoft’s platform will optimize tasks for the hardware available.
[...]
Developers will be able to get an early look at the AI platform on Windows with Visual Studio Preview 15.7, and they’ll be able to use the Windows ML API in standard desktop apps and Universal Windows Apps across all editions of Windows 10 this year.
Yes, Microsoft really are serious about this AI business. Details are still sketchy, including the expected release date (something Microsoft is terrible with anyway) and specifics about what sorts of enhancements this will translate into for real-world applications, as opposed to purely theoretical speculation. But the idea that Microsoft is now working to evolve Machine Learning away from being the exclusive province of huge corporations is very good news. It means that ML might actually have a widespread impact on the daily lives of individual users in a way that they can actively, and knowingly, engage with, as opposed to being purely a black-box Big Data concern that seeks to manipulate them without their knowledge or consent. It's all potentially good stuff.

Unlike VR, Machine Learning is a "future tech now" which actually might live up to its hype, and making ML available to all developers, usable even in standard desktop (i.e. Win32, or Windows 7) apps, would be a genuinely egalitarian, pro-consumer development. It's also a sharp departure from the monopolistic, anti-consumer approach that Microsoft had taken to all things Windows starting with Windows 8. The devil is in the details, of course, and Microsoft have said things before that sounded pro-consumer, only to have the final result be.... rather less than that, shall we say. But if they follow through on making ML available to developers of any Windows app, then I can only approve.
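
For a sense of the workflow being promised here (load a pre-trained model, run it locally against your own data), here's a rough Python sketch using onnxruntime as a stand-in. To be clear, Windows ML's actual API is a WinRT affair and Microsoft haven't published samples yet, so the model file and shapes below are hypothetical:

```python
# Sketch of local inference on a pre-trained model, the pattern Windows ML
# promises. onnxruntime is a stand-in here, not Microsoft's Windows ML API;
# the model file name is hypothetical.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("squeezenet.onnx")  # hypothetical pre-trained model

# Ask the model what it expects, then feed it a dummy image-shaped tensor
# (dynamic dimensions are replaced with 1 for this demo).
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {inp.name: dummy})     # runs entirely on-device
print("predicted class index:", int(np.argmax(outputs[0])))
```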

So, here we are, one day into the new Microsoft era, and their very first announcement is.... good? I mean, we've been burned so many times over the last few years that I'm reluctant to get excited about anything this soon, but... if Windows ML really does do what they say it'll do, and really is as accessible as they're saying it will be, then this really is good news. So... good job, Microsoft. More like this, please.

January 15, 2018

Finally asking the question, then completely missing the point

This article from We Live Security asks a really good question... and then totally fucks up the answer:
Last year, CES 2017 heralded the age of ubiquitous Virtual Reality or VR as the cool kids call it, but now CES 2018 has come and gone and you probably still don’t own or use a VR system. So why not?
VR has been a slow burn. The problem has been to create realism to the extent that the brain can stop nagging you that you’re not in a real environment and just adapt, and learn whatever’s being presented.
Yes, that is the question. But a lack of realism is not the problem with VR.

A quick search of YouTube will turn up videos that show people falling over while wearing VR headsets after trying to climb on, or sit on, virtual objects. Yes, VR sickness is still a thing, caused by signals from your vestibular system that conflict with the visual information that your VR headset is providing, but that's not the same thing as saying that VR doesn't look realistic enough to fool your brain, because anyone who's tried a VR headset will tell you that simply isn't true.

So, higher resolution images aren't the silver bullet that VR is waiting for. And passages like this one are simply nonsense:
Even if you have high-definition displays, they don’t emulate moving through a real environment. This is because you overlay a flat image onto a surface, but when you “pass by” that object, your peripheral vision doesn’t detect the other side of the image being displayed on the other side of the object.
Again, I direct you to YouTube, where thousands of videos provide evidence rebutting this argument. And, for the record, I don't think that AI is the answer to VR's problems, either:
However, due to the strong strides and commoditization this year in AI cores that “know” or can infer more about your changing environment, the overall experience can seem far more real. AI has come a long way in recent years, especially around integrating it into other environments through hooking APIs and such.
Talk about whistling past the graveyard. Notice how there isn't a single sentence in there which discusses the current state of AI research in any detail, or makes any specific claim about how applying AI to VR would actually help. Instead, the claims are nearly identical to those that VR advocates have been making for the headsets themselves: vague, nearly limitless, and somehow due to arrive any day now, in spite of a total lack of anything resembling a relevant detail. AI is not magic; you can't simply wave an AI wand at your failing technology to make it magically relevant to consumers who are almost entirely uninterested.

VR is not catching on with consumers because VR's advocates do not have a value proposition that is good enough to sell the tech. It really doesn't matter how low the price goes; VR still isn't useful for anything that can't be done without VR, which means that it will continue to be too expensive at any asking price. VR does not suffer from a lack of realism, or a lack of AI, or a lack of content, or an excess of sticker shock. I mean, yes, all of those things are issues with VR, but they are not the issue.

No, the issue with VR is that it fails to provide sufficient value for money to be worth buying. And nobody connected with the VR industry seems to care to tackle that fundamental issue, thus ensuring that it goes unresolved.

December 28, 2017

Beyond VR?

If you're looking for more evidence that VR in its current incarnation has already failed, you need look no further than the fact that some of VR's proponents are already trying to rebrand it as something other than VR. Something more useful and less problematic, perhaps.

From Alphr:
We are at a frontier. Just ahead, almost within reach, are a series of technological developments that are finally growing out of their infancy and will change not just the way we think about technology, but the way we think about reality and existence itself.
These developments will form part of what is called extended reality, or XR. The term describes the entire spectrum of reality, from the virtual to the physical, from augmented reality to augmented virtuality, virtual reality and everything in between. But what it implies is a dramatic, potentially species-defining change in human experience.
To many people, this kind of talk will likely sound overly conceptual, but XR’s implications are highly tangible. Psychiatrists could treat a phobia using VR to simulate, with near-perfect precision, the physical and psychological environment required to induce the phobic response. At the Tribeca Film Festival, ‘Tree’ gave guests the opportunity to immerse themselves in a rainforest and take in the sights and smells of the Amazon while running their hands on the trunk of a centuries-old tree. These examples barely scratch the surface of what is possible. XR’s potential is nearly limitless and in 2018, it will arrive.
[...]
This arrival of XR represents the collapse of the virtuality/reality divide. Within the new XR framework, virtuality and reality are no longer opposites. Neither are digital and biological. XR implies a far more complex relationship between these things – one in which virtuality can make things real.
If you're thinking that this all sounds a lot like the case that VR's advocates and apologists were making for VR itself, not that long ago, then you're not alone. The promises of "nearly limitless" yet somehow still vague potential; the same tired old examples that still "barely scratch the surface of what is possible"; the assurance that it will all arrive next year, in exactly the same way that VR has been predicted to explode into mass adoption sometime in the next year ever since the Oculus Rift was released in 2016: this is exactly the same tired, old VR sales pitch that has utterly failed to captivate consumers for two years now, and counting.

What's new, though, is the deliberate attempt to shift the discussion away from the VR label, to a new term, "XR," which allegedly combines Virtual Reality, Augmented Reality, Microsoft Mixed Reality, and any other, similar technology, into a seamless spectrum that "represents the collapse of the virtuality/reality divide," with virtuality and reality ceasing to be opposites.

Of course, exactly why consumers are supposed to want this next year, when they didn't last year and don't this year, is not specified; neither is there any mention of a specific technological development or breakthrough which will make this happen (next year, remember), in precisely the way that all existing VR/AR/MR headsets have so far failed to achieve. There's still no mention of a specific use for "XR" which is quantitatively different from any existing experience, desirable for the average consumer, and which also requires "XR" technology in a way that simply isn't the case for existing VR technology.

That qualitative enhancements to existing experiences are simply not enough to shift large volumes of expensive VR headsets is plainly evident in VR's still-lacklustre sales numbers, and in the VR content developers who are retooling their offerings to work without the tech. Neither is there any reason to think that the "XR" technologies of literally tomorrow will be able to "simulate, with near-perfect precision," any sort of environment at all, when existing VR headsets can't, and when the PCs that drive them are not increasing significantly in processing power. Have I mentioned lately that Moore's Law isn't a thing anymore? And while VR hardware developers are making incremental improvements by iterating on the display technology, there are any number of other problems with VR that aren't directly related to the quality and feature sets of the displays.

Let's be clear: VR is not currently a thing. It wasn't a thing last year, it didn't become a thing this year, and absent divine intervention, it's not going to become a thing next year, either. AR might have more potential, as demonstrated by the likes of Pokemon Go, but it's still in a profoundly primitive state, and years away from enabling any "dramatic, potentially species-defining change in human experience." While machine learning and automation are definitely fuelling profound changes in our society and economy (the Singularity, already in progress), there's no reason to think that they're going to have any specific application to VR/AR any time soon. And Microsoft's "MR" headsets are just VR headsets with different branding... which is exactly what is being attempted in Alphr's article.

"XR" is not on the verge of taking off, any more than VR is on the verge of taking off, and the folks at Alphr are whistling past the graveyard. I stand by my prediction: VR will continue to not be a thing, and 2018 will be the year when tech media outlets finally start to admit it.

December 04, 2017

I, for one, will welcome our new computer overlords...

Still don't think that the Singularity is underway? Well, check out the next thing in AI, as reported by Science Alert:
In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that's capable of generating its own AIs.
More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a 'child' that outperformed all of its human-made counterparts.
The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
For this particular child AI, which the researchers called NASNet, the task was recognising objects - people, cars, traffic lights, handbags, backpacks, etc. - in a video in real-time.
AutoML would evaluate NASNet's performance and use that information to improve its child AI, repeating the process thousands of times.
When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call "two of the most respected large-scale academic data sets in computer vision," NASNet outperformed all other computer vision systems.
Being able to automate the task of teaching automation systems is obviously the next level here, with the potential to greatly accelerate the process of developing and deploying machine learning systems that outperform anything currently in use. That could rapidly and significantly improve all of our current machine learning systems and applications, including e.g. self-driving cars and other autonomous vehicles (or, Autos). To make the idea concrete, there's a toy sketch of the controller/child loop below.
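
This is a deliberately minimal illustration: plain REINFORCE over a few hypothetical architecture choices, with a synthetic score standing in for a trained child network's validation accuracy. It shows the general technique only; none of it is Google's actual AutoML code:

```python
import numpy as np

# Hypothetical search space for the "child" network's architecture.
CHOICES = {
    "filters": [16, 32, 64, 128],
    "kernel":  [1, 3, 5],
    "layers":  [2, 4, 6, 8],
}

def child_score(arch):
    # Synthetic stand-in for "train the child, measure validation accuracy".
    target = {"filters": 64, "kernel": 3, "layers": 6}
    return sum(v == target[k] for k, v in arch.items()) / len(target)

rng = np.random.default_rng(0)
# The "controller": one softmax distribution per architectural decision.
logits = {k: np.zeros(len(v)) for k, v in CHOICES.items()}

def sample_child():
    idx, arch = {}, {}
    for k, opts in CHOICES.items():
        p = np.exp(logits[k]); p /= p.sum()
        i = rng.choice(len(opts), p=p)
        idx[k], arch[k] = i, opts[i]
    return idx, arch

baseline, lr = 0.0, 0.5
for step in range(2000):
    idx, arch = sample_child()
    reward = child_score(arch)                 # "evaluate the child"
    baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline
    for k, i in idx.items():                   # REINFORCE: push the controller
        p = np.exp(logits[k]); p /= p.sum()    # toward choices that scored well
        grad = -p
        grad[i] += 1.0                         # grad of log p(choice i) w.r.t. logits
        logits[k] += lr * (reward - baseline) * grad

best = {k: CHOICES[k][int(np.argmax(logits[k]))] for k in CHOICES}
print("controller's preferred child architecture:", best)
```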

AutoML also adds an extra layer of opacity to the process. Google is already heavily dependent on effectively black-box algorithms, and already struggling to explain decisions made, in an emergent manner, by systems that are too complex to be easily understood by the humans who nominally designed them. This new technology will result in systems that we barely understand designing systems, on their own, that humans won't understand at all... and eventually we'll be trusting them to manage all the nuts and bolts of our institutions. This raises some pretty obvious concerns, and Science Alert's article doesn't overlook them:
Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what's to prevent the parent from passing down unwanted biases to its child?
What if AutoML creates systems so fast that society can't keep up? It's not very difficult to see how NASNet could be employed in automated surveillance systems in the near future, perhaps sooner than regulations could be put in place to control such systems.
It's at this point that I'd like to say that I, for one, will welcome our new computer overlords. I now return you to the Singularity, already in progress.

August 25, 2017

We now return you to the Singularity, already in progress...

If you're starting to feel as if modern life looks more and more like an episode of Black Mirror, then you're not alone. It seems like every other day I'm seeing stories like this one, from the CBC:
There was a time that oil companies ruled the globe, but "black gold" is no longer the world's most valuable resource — it's been surpassed by data.
The five most valuable companies in the world today — Apple, Amazon, Facebook, Microsoft and Google's parent company Alphabet — have commodified data and taken over their respective sectors.
"Data is clearly the new oil," says Jonathan Taplin, director emeritus of the USC Annenberg Innovation Lab and the author of Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy.
But with that domination comes responsibility — and jurisdictions are struggling with how to contain, regulate and protect all those ones and zeros.
For instance, Google holds an 81 per cent share of search, according to data metrics site Net Market Share.
By comparison, even at its height, Standard Oil only had a 79 per cent share of the American market before antitrust regulators stepped in, Taplin says.
[...]
Traditionally, this is where the antitrust regulators would step in, but in the data economy it's not so easy. What we're seeing for the first time is a clash between the concept of the nation state and these global, borderless corporations. A handful of tech giants now surpass the size and power of many governments.
For comparison sake, Facebook has almost two billion users, while Canada has a population of just over 36 million. Based on the companies' sheer scale alone, it is increasingly difficult for countries to enforce any kind of regulation, especially as the tech giants start pushing for rules that free them from local restrictions, says Open Media's Meghan Sali.
Machine learning and automation technologies could render as much as 40% of the population in the developed world not merely unemployed, but unemployable. Huge technology companies now control more access to information, and more money, than most governments. There's a growing realization that everyone around you has an internet-enabled video camera in their pocket, and could be surveilling you, without your knowledge or consent, all the time, and uploading the data to those same huge mega-corporations. It's not quite Neuromancer, but we're definitely living in a world that's very different from the world of even twenty years ago; a world that's still changing, since all of these technological trends are still in their early days.

And that doesn't even factor in the Earth's changing climate...

Welcome to the world of tomorrow, today; the Singularity, already underway.

There's a lot more to that CBC piece, by the way; it's definitely worth reading.

July 24, 2017

We now return you to the Singularity, already in progress...

When it comes to truly autonomous self-driving vehicles, or capital-"A" Autos, it's helpful to remember that this cost- and life-saving technology doesn't need to be perfect in order to see widespread adoption; as CGP Grey put it, they just need to be better than us. And here's the thing: they already are better than us. Waymo's Auto has been in only 14 crashes during testing, of which 13 were caused by the human drivers that the AI had to share the road with; only one was caused by the Auto's software.

With a safety record like that, there was never any question that Autos would find their way onto our roads. The only question was, "When?" How long would it take the general public to accept the presence of self-driving vehicles on our roads? How long would public unease with these new AI drivers prompt risk-averse politicians to drag their feet on letting automakers bring these autonomous Autos to market?

Well, we now know some of the answers to those questions. The general public may still be working their way around to a general acceptance of this new tech, but it seems that the politicians are all done with the foot-dragging, as of last Wednesday.

From Reuters:
A U.S. House panel on Wednesday approved a sweeping proposal by voice vote to allow automakers to deploy up to 100,000 self-driving vehicles without meeting existing auto safety standards and bar states from imposing driverless car rules.
Representative Robert Latta, a Republican who heads the Energy and Commerce Committee subcommittee overseeing consumer protection, said he would continue to consider changes before the full committee votes on the measure, expected next week. The full U.S. House of Representatives will not take up the bill until it reconvenes in September after the summer recess.
[...]
Democrats praised the bipartisan proposal but said they want more changes before the full committee takes it up, including potentially adding other auto safety measures.
[...]
Separately, Republican Senator John Thune, who is working with Democrats, said Wednesday he hopes to release a draft self-driving car reform bill before the end of July. 
AI does not need to be super-intelligent to completely reshape the way we do everything; automation technologies are already good enough to replace forty to fifty percent of the work force, and the impact that will have on society is incalculable. Self-driving Autos alone could replace cab drivers (Uber and Lyft are both working on this) and truck drivers (long-haul transportation, in particular, would benefit) long before consumers get around to replacing all of their personal transports with a shiny, new Auto. The economic impact of that can't be overstated; it's not just the drivers' jobs that would be replaced, but also the related businesses, like truck stops, that would suddenly find themselves obsolete, and their employees looking for increasingly rare employment.

If you were thinking that this change was decades away, then think again. The benefits of Autos, in increased safety and productivity, are simply too compelling to ignore, and the only voices that might be inclined to fight against them (i.e. labour unions) just aren't influential enough, anymore, to be able to turn the tide. This is happening. Self-driving cabs and long-haul trucks will be plying city streets and highways across the continental U.S. in a matter of just a few years, with other countries doubtless following close behind, and industrialized society will never be the same again.

This is the Singularity, happening in slow motion. AI may still be well short of the super-intelligent mark, or even human-equivalent general intelligence, but it turns out that those things are not necessary for the Singularity to occur. There's a reason why governments across the industrialized world are already looking into things like Universal Basic Income as a way of caring for the basic needs of a populace who will find themselves not merely unemployed, but unemployable, within our lifetimes, through absolutely no fault of their own.

What will society look like when 40% of the population has nothing but time on its hands? What happens when simply having a job becomes a weird sort of status symbol, instead of being the baseline assumption that drives our perceptions of ourselves, and of each other? I don't know; I don't think anybody knows. But the U.S. Congress, in a rare display of bipartisan agreement, has just decided that everybody, both within the U.S. and without, is about to find out.

We now return you to the Singularity, already in progress.

March 28, 2017

Welcome to the Singularity

The Singularity: wherein developments in artificial intelligence reshape our civilization in profound and irreversible ways. It sounds like science fiction, but we live in a world where science fiction becomes science fact every single day, and the Singularity isn't some far-future possibility anymore. Increasingly, it's the world we live in right now. And people are starting to notice.

From the NY Times:
Who is winning the race for jobs between robots and humans? Last year, two leading economists described a future in which humans come out ahead. But now they’ve declared a different winner: the robots.
The industry most affected by automation is manufacturing. For every robot per thousand workers, up to six workers lost their jobs and wages fell by as much as three-fourths of a percent, according to a new paper by the economists, Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University. It appears to be the first study to quantify large, direct, negative effects of robots.
The paper is all the more significant because the researchers, whose work is highly regarded in their field, had been more sanguine about the effect of technology on jobs. In a paper last year, they said it was likely that increased automation would create new, better jobs, so employment and wages would eventually return to their previous levels. Just as cranes replaced dockworkers but created related jobs for engineers and financiers, the theory goes, new technology has created new jobs for software developers and data analysts.
But that paper was a conceptual exercise. The new one uses real-world data — and suggests a more pessimistic future. The researchers said they were surprised to see very little employment increase in other occupations to offset the job losses in manufacturing.
CGP Grey posted an excellent video on this same subject back in 2014, so this isn't news to everybody, but with former optimists coming around to the more pessimistic (and more realistic) view, look for this to become a more common narrative thread going forward. I know that the common wisdom in the last U.S. election was that Dems didn't spend enough time talking about how they were going to bring back blue-collar jobs, but reality is what persists in being true regardless of what you want to believe, and the reality is that Obama was right: the economic force that's eliminating factory jobs really is automation, and there isn't much that anyone can do about that now, except prepare for the coming paradigm shift.

Imagine a society where everyone gets a guaranteed minimum income, and hustles on the side for extra cash. And then stop imagining, because it's already happening, with pilot projects in Finland and Ontario. This is our new normal: unemployment statistics are, and have always been, more fiction than fact, "full employment" is something that we'll never see again in our lifetimes, and there may be no other way to ensure that we can continue having an economy, when 40% of workers' jobs have been replaced by automation.

Welcome to the Singularity.