Showing posts with label Singularity.

November 21, 2018

Is it time to cool our quantum jets?

I've previously described quantum computing as being in its "ENIAC" phase, an analogy that would put effective quantum computing several decades away, at best. After all, it took seven decades for binary computers to get from ENIAC to Android, and there was no reason to suspect that quantum computers would be any easier to develop than their binary predecessors.

According to a long article on IEEE.org, penned by Mikhail Dyakonov, who does research in theoretical physics professionally, that decades-away estimate might actually be too optimistic. Quantum computers, he argues, might be not merely difficult to develop, but impossible.
While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I’m skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?
My answer is simple. No, never.
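To put Dyakonov's 10^300 into perspective: the state of an n-qubit machine is described by 2^n complex amplitudes, each of them a continuously variable parameter, so the count explodes exponentially with qubit count. A back-of-the-envelope sketch in Python (the 1,000-qubit scale is my illustrative choice for a "useful" machine):

```python
# Back-of-the-envelope: the quantum state of n qubits is described
# by 2**n complex amplitudes, each a continuously variable parameter.
n_qubits = 1000  # illustrative scale for a "useful" quantum computer

amplitudes = 2 ** n_qubits  # Python handles the exact integer
print(f"{n_qubits} qubits -> 2^{n_qubits} amplitudes")
print(f"...which is about 10^{len(str(amplitudes)) - 1}")
# -> about 10^301; for comparison, the observable universe holds
#    only ~10^80 subatomic particles
```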

October 20, 2018

I welcome our new robot overlords...

There's been some debate between rival economists about the potential impact of new automation technologies. On one side, there are people who see the combination of machine learning and automation as being capable of replacing most or all of the people who do simple or repetitive tasks, or even more complex tasks like driving. CGP Grey's video, Humans Need Not Apply, gives a solid and unsettling summation of this viewpoint.

On the other side, of course, are those who have been clinging to the very convictions that Grey's video spends fifteen minutes dismantling: that automating grunt work will free up humans to do other, more intellectual work, improving life for displaced workers once they've all been retrained and reentered the workforce as higher-skilled, better-credentialed high-tech workers.

Never mind that we have no way to retrain anywhere near 10% of our work force, let alone 25%, 40%, or more, or any clear idea what new jobs we'd be retraining them to do; never mind that the rise of high-tech industries hasn't actually succeeded in doing this at any point in the last thirty years. We should just stop worrying about a future where those workers who are still employed are also largely disposable, and where everyone else gets to take on enormous debt loads in the form of college loans, often as middle-aged students who'll have little hope of paying off those debts. Once these technologies start rolling out in the real world, the argument goes, we'll have a better idea of the impact they'll have on the employment picture, and we should all just chill out until that happens.

Enter Uniqlo, who have just rolled out this technology in the real world, as reported by Quartz:

June 28, 2018

Magical thinking, or, securing the Internet of Things

The news about Exactis' failure to secure their enormous trove of shadow profile data got me thinking about security in general: about the extent to which corporations, of whose existence we might be completely ignorant, are already harvesting all manner of highly personal information about you and me, not only without our informed consent, but without us even knowing when or how often it's happening. And that got me to thinking about the other data collection scheme that Big Data is so keen on lately: IoT, the so-called Internet of Things.

The idea that everything in your environment that incorporates a microchip would inevitably be connected to the Internet, and thus vulnerable to, and controllable by, any sufficiently sophisticated hacker, is something which has concerned me for some time now. I'm not convinced that it's possible to secure such a wide range of devices, from an equally wide range of manufacturers; and even if it were possible, I'm not convinced that the measures required to make it happen are desirable. At all.

I'm especially un-sold on the capacity for Artificial Intelligence to succeed at this task when human intelligence has repeatedly failed, or to compensate for the combination of ignorance, incompetence, apathy, and/or greed that will doubtless be a defining feature of IoT for a long time to come. First things first, though; let's start by describing the scope of the problem.

April 23, 2018

Fighting fire with fire

Last week, I posted about this Kotaku article, which discussed the use of Machine Learning algorithms to enable feats of video face-swapping which are increasingly difficult to spot. The video example, in which Jordan Peele masquerades as Barack Obama to warn us about the dangers of the coming "Fucked up dystopia," while exhorting us all to "stay woke, bitches," was both hilarious and disturbing:


If that video gave you both belly laughs and nightmares, then MIT Technology Review has an antidote for you.... kinda.
The ability to take one person’s face or expression and superimpose it onto a video of another person has recently become possible. [...] This phenomenon has significant implications. At the very least, it has the potential to undermine the reputation of people who are victims of this kind of forgery. It poses problems for biometric ID systems. And it threatens to undermine public trust in videos of any kind.
[...]
Enter Andreas Rossler at the Technical University of Munich in Germany and colleagues, who have developed a deep-learning system that can automatically spot face-swap videos. The new technique could help identify forged videos as they are posted to the web.
But the work also has a sting in the tail. The same deep-learning technique that can spot face-swap videos can also be used to improve the quality of face swaps in the first place—and that could make them harder to detect.
Artificial Intelligence: making your life both better and worse since 2018.

The fact that the same techniques that make DeepFakes easier to detect also make them easier to create in the first place poses an awkward conundrum. I'm inclined to think that arming individuals with the power to spot fakes more easily is a good thing, but it's not exactly a no-brainer. Do you arm individuals with the tools they need to tell real videos from clever ML-powered forgeries, knowing that some of those individuals will use those same tools to create more clever ML-powered forgeries? Would withholding this power from rule-abiding individuals help prevent the DeepFake apocalypse, or just leave them helpless to protect themselves against society's black hats and bad actors, who will almost certainly be disseminating these tools on darknets anyway?
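For the technically curious, the detection side is conceptually just binary classification on video frames: train a deep network on footage that's known to be real or forged, then have it label new frames. Here's a minimal sketch of that idea in PyTorch; this is my simplification for illustration, not Rossler's actual FaceForensics pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal sketch: treat forgery detection as per-frame binary
# classification (real vs. face-swapped). Illustrative only.
model = models.resnet18(pretrained=True)       # generic image backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = real, 1 = fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    """frames: (batch, 3, 224, 224) tensors; labels: 0=real, 1=fake."""
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The sting in the tail follows directly from this setup: the exact same classifier can be used as a training signal by the forger, who then iterates until the fakes stop getting flagged.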

And this is still just Machine Learning, and not the full-blown Artificial Intelligence that it may well lead to. Count on it: things will only get wilder from here.

We now return you to the Singularity, already in progress....

April 19, 2018

Machine Learning is a transformative technology... and its transformations won't all be good ones

This is both hilarious and terrifying. From Kotaku:
Last year, University of Washington researchers used the technology to take videos of things that former President Barack Obama had already said, then generate faked videos of him spitting out those lines verbatim in a machine-generated format. The research team stopped short of putting new words in Obama’s mouth, but Get Out director Jordan Peele and BuzzFeed have done just that in a PSA warning malicious actors could soon generate videos of anyone saying just about anything.
Using technology similar to the University of Washington study and Peele’s (fairly good!) imitation of Obama’s voice, here’s a clip of the former POTUS saying “So, for instance, they could have me say things like, I don’t know, Killmonger was right. Or uh, Ben Carson is in the sunken place. Or how about this, simply, President Trump is a total and complete dipshit.”
[...]
“We’ve covered counterfeit news websites that say the pope endorsed Trump that look kinda like real news, but because it’s text people have started to become more wary,” BuzzFeed CEO Jonah Peretti wrote. “And now we’re starting to see tech that allows people to put words into the mouths of public figures that look like they must be real because it’s video and video doesn’t lie.”
[...]
As colleague Adam Clark Smith noted before, there are countless potential uses of this technology that would qualify as mundane, like improving the image quality of video chat apps, or recreating mind-blowing facsimiles of historic speeches in high-definition video or holograms.
But machine-learning algorithms are improving rapidly, and as security researcher Greg Allen wrote at the time in Wired, it is likely only a matter of years before the audio component catches up and makes Peele’s Obama imitation unnecessary. Within a decade, some kinds of forensic analysis may even be unable to detect forged audio.
Here's the clip:


Machine Learning is only a baby step on the road to Artificial Intelligence, but it's already at least powerful enough to convincingly swap celebrities’ faces with those of porn actors, and the potential chaos that this almost certainly will cause in our public discourse is mind-blowing. It's still a little crude, with FakeApp Obama still lodged firmly inside the Uncanny Valley... but we're also clearly on the upslope that leads out of that valley, and not that far away from the day when even the video that you see on the Internet simply can't be trusted.

This is a lot of power to put into the hands of almost everyone on Earth, and if there's one thing that we know, it's that this power will be used for evil. FakeApp Obama is just a proof of concept; the genuinely malicious fake videos are coming, and you're going to need to be very alert to spot them. Especially since we're living in an era when the actual news of the day is... surreal, to put it lightly. Stay woke, bitches.

We now return you to the Singularity, already in progress.

December 28, 2017

Beyond VR?

If you're looking for more evidence that VR in its current incarnation has already failed, you need look no further than the fact that some of VR's proponents are already trying to rebrand it as something other than VR. Something more useful and less problematic, perhaps.

From Alphr:
We are at a frontier. Just ahead, almost within reach, are a series of technological developments that are finally growing out of their infancy and will change not just the way we think about technology, but the way we think about reality and existence itself.
These developments will form part of what is called extended reality, or XR. The term describes the entire spectrum of reality, from the virtual to the physical, from augmented reality to augmented virtuality, virtual reality and everything in between. But what it implies is a dramatic, potentially species-defining change in human experience.
To many people, this kind of talk will likely sound overly conceptual, but XR’s implications are highly tangible. Psychiatrists could treat a phobia using VR to simulate, with near-perfect precision, the physical and psychological environment required to induce the phobic response. At the Tribeca Film Festival, ‘Tree’ gave guests the opportunity to immerse themselves in a rainforest and take in the sights and smells of the Amazon while running their hands on the trunk of a centuries-old tree. These examples barely scratch the surface of what is possible. XR’s potential is nearly limitless and in 2018, it will arrive.
[...]
This arrival of XR represents the collapse of the virtuality/reality divide. Within the new XR framework, virtuality and reality are no longer opposites. Neither are digital and biological. XR implies a far more complex relationship between these things – one in which virtuality can make things real.
If you're thinking that this all sounds a lot like the case that VR's advocates and apologists were making for VR itself, not that long ago, then you're not alone. From the promises of "nearly limitless" and yet somehow still vague potential, with the same tired old examples that still "barely scratch the surface of what is possible," to the promise that it will all arrive next year, in exactly the same way that VR has been predicted to explode into mass adoption sometime in the next year ever since Oculus Rift was released in 2016, this is exactly the same tired, old, VR sales pitch that has utterly failed to captivate consumers for two years now, and counting.

What's new, though, is the deliberate attempt to shift the discussion away from the VR label, to a new term, "XR," which allegedly combines Virtual Reality, Augmented Reality, Microsoft Mixed Reality, and any other, similar technology, into a seamless spectrum that "represents the collapse of the virtuality/reality divide," with virtuality and reality ceasing to be opposites.

Of course, exactly why consumers are supposed to want this next year, when they didn't last year and don't this year, is not specified; neither is there any mention of a specific technological development or breakthrough which will make this happen (next year, remember), in precisely the way that all existing VR/AR/MR headsets have so far failed to achieve. There's still no mention of a specific use for "XR" which is quantitatively different from any existing experience, desirable for the average consumer, and which also requires "XR" technology in a way that simply isn't the case for existing VR technology.

That qualitative enhancements to existing experiences are simply not enough to shift large volumes of expensive VR headsets is plainly evident in VR's still-lacklustre sales numbers, and in the VR content developers who are retooling VR offerings to work without the tech. Neither is there any reason to think that the "XR" technologies of literally tomorrow will be able to "simulate, with near-perfect precision," any sort of environment at all, when existing VR headsets can't, and when the PCs that drive them are not increasing significantly in processing power. Have I mentioned lately that Moore's Law isn't a thing anymore? And while VR hardware developers are making incremental improvements by iterating on the display technology, there are any number of other problems with VR that aren't directly related to the quality and feature sets of the displays.

Let's be clear: VR is not currently a thing. It wasn't a thing last year, it didn't become a thing this year, and absent divine intervention, it's not going to become a thing next year, either. AR might have more potential, as demonstrated by the likes of Pokemon Go, but it's still in a profoundly primitive state, and years away from enabling any "dramatic, potentially species-defining change in human experience." While machine learning and automation are definitely fuelling profound changes in our society and economy (the Singularity, already in progress), there's no reason to think that they're going to have any specific application to VR/AR any time soon. And Microsoft's "MR" headsets are just VR headsets with different branding... which is exactly what is being attempted in Alphr's article.

"XR" is not on the verge of taking off, any more than VR is on the verge of taking off, and the folks at Alphr are whistling past the graveyard. I stand by my prediction: VR will continue to not be a thing, and 2018 will be the year when tech media outlets finally start to admit it.

December 04, 2017

I, for one, will welcome our new computer overlords...

Still don't think that the Singularity is underway? Well, check out the next thing in AI, as reported by Science Alert:
In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that's capable of generating its own AIs.
More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a 'child' that outperformed all of its human-made counterparts.
The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
For this particular child AI, which the researchers called NASNet, the task was recognising objects - people, cars, traffic lights, handbags, backpacks, etc. - in a video in real-time.
AutoML would evaluate NASNet's performance and use that information to improve its child AI, repeating the process thousands of times.
When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call "two of the most respected large-scale academic data sets in computer vision," NASNet outperformed all other computer vision systems.
Being able to automate the task of designing and training machine learning systems is obviously the next level here, and it has the potential to greatly accelerate the process of developing and deploying machine learning systems that outperform anything currently in use. That could rapidly and significantly improve all of our current machine learning systems and applications, of course, including e.g. self-driving cars and other autonomous vehicles (or, Autos).
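The controller/child loop is easier to grasp in code. Here's a heavily simplified toy version of the idea; the search space and the evaluate() stand-in are my illustrations, not Google's actual AutoML internals:

```python
import random

# Toy sketch of the AutoML loop: a "controller" samples child
# architectures, scores them, and shifts its sampling distribution
# toward choices that earned higher reward. Illustrative only.
SEARCH_SPACE = {"layers": [2, 4, 8], "width": [32, 64, 128]}

def evaluate(arch):
    # Stand-in for "train the child network and return its
    # validation accuracy" -- the expensive step in real NAS.
    return random.random()

def controller_search(iterations=1000, lr=0.1):
    # One preference weight per (hyperparameter, choice) pair.
    prefs = {k: [1.0] * len(v) for k, v in SEARCH_SPACE.items()}
    best_arch, best_reward = None, float("-inf")
    for _ in range(iterations):
        # Sample a child architecture in proportion to preferences.
        idx = {k: random.choices(range(len(v)), weights=prefs[k])[0]
               for k, v in SEARCH_SPACE.items()}
        arch = {k: SEARCH_SPACE[k][i] for k, i in idx.items()}
        reward = evaluate(arch)
        # Reinforce the choices that produced this reward.
        for k, i in idx.items():
            prefs[k][i] += lr * reward
        if reward > best_reward:
            best_arch, best_reward = arch, reward
    return best_arch

print(controller_search())
```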

It also adds an extra layer of opacity to the process. Google is already heavily dependent on effectively black-box algorithms, and is struggling to explain decisions made in an emergent manner by systems that are already too complex to be easily understood by the humans who nominally designed them. This new technology will result in systems that we barely understand designing, on their own, systems that humans won't understand at all... and eventually we'll be trusting them to manage all the nuts and bolts of our institutions. This raises some pretty obvious concerns, and Science Alert's article doesn't overlook them:
Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what's to prevent the parent from passing down unwanted biases to its child?
What if AutoML creates systems so fast that society can't keep up? It's not very difficult to see how NASNet could be employed in automated surveillance systems in the near future, perhaps sooner than regulations could be put in place to control such systems.
It's at this point that I'd like to say that I, for one, will welcome our new computer overlords. I now return you to the Singularity, already in progress.

November 30, 2017

375 million jobs may be automated by 2030

This according to a new McKinsey Global Institute report, as covered by CNN Tech:
The McKinsey Global Institute cautions that as many as 375 million workers will need to switch occupational categories by 2030 due to automation.
The work most at risk of automation includes physical jobs in predictable environments, such as operating machinery or preparing fast food. Data collection and processing is also in the crosshairs, with implications for mortgage origination, paralegals, accounting and back-office processing.
To remain viable, workers must embrace retraining in different fields. But governments and companies will need to help smooth what could be a rocky transition.
"The model where people go to school for the first 20 years of life and work for the next 40 or 50 years is broken," Susan Lund, a partner for the McKinsey Global Institute and co-author of the report, told CNN Tech. "We're going to have to think about learning and training throughout the course of your career."
The authors believe we may see a massive transition on a scale not seen since the early 1900s, when workers shifted from farms to factories. The report also cited the potential need for an effort on the same scale as the Marshall Plan, when the United States spent billions to rebuild Western Europe after World War II. 
This is noteworthy mainly because outfits like CNN had been, until now, entirely focused on giving airtime and mindshare to economists whose entire message was that new technologies would comfortably replace any and all jobs lost to other new technologies. This report is the first I've seen from the CNNs of the world that actually looked at the scale of the problem, and the cost of retraining the numbers of workers who will be displaced by existing automation technologies.

And that is the problem here: we're not talking about the impact of possible future technologies. We're talking about automation technologies that exist now, that are being deployed now, that are displacing workers already, and that are going to displace hundreds of millions of workers, worldwide, in the next decade. These people will not merely be unemployed; they'll be unemployable, through no fault of their own, trained and experienced in jobs that simply won't exist anymore. This is exactly the scenario that "alarmists" have been trying to warn policymakers about, and that has some lawmakers running Universal Basic Income pilot programs.

The good news? Universal Basic Income is looking like it might be an effective way of tackling the issue. As reported by The Independent:
Support for a basic income has grown in recent years, fuelled in part by fears about the impact that new technology will have on jobs. As machines and robots are able to complete an increasing number of tasks, attention has turned to how people will live when there are not enough jobs to go round.
Ontario’s Premier, Kathleen Wynne, said this was a major factor in the decision to trial a basic income in the province.
She said: "I see it on a daily basis. I go into a factory and the floor plant manager can tell me where there were 20 people and there is one machine. We need to understand what it might look like if there is, in fact, the labour disruption that some economists are predicting."
Ontario officials have found that many people are reluctant to sign up to the scheme, fearing there is a catch or that they will be left without money once the pilot finishes.
Many of those who are receiving payments, however, say their lives have already been changed for the better.
[...]
Finland is also trialling a basic income, as is the state of Hawaii, Oakland in California and the Dutch city of Utrecht.
And for those skeptics in the United States, there's this report from Futurism:
In recent months, everyone from Elon Musk to Sir Richard Branson has come out in favor of universal basic income (UBI), a system in which every person receives a regular payment simply for being alive. Now, a study carried out by the Roosevelt Institute has concluded that implementing a UBI in the U.S. could have a positive effect on the nation’s economy.
The study looked at three separate proposals: a “basic income” of $1,000 per month given to every adult, a “base income” of $500 per month given to every adult, and a “child allowance” of $250 per month for every child. The researchers concluded that the larger the sum, the more significant the positive economic impact.
They projected that the $1,000 basic income would grow the economy by 12.56 percent over the course of eight years, after which point its effect would diminish. That would translate to an increase in the country’s gross domestic product of $2.48 trillion.
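For what it's worth, those two quoted figures are consistent with each other:

```python
# Quick consistency check on the Roosevelt Institute numbers above.
growth_rate = 0.1256    # 12.56% GDP growth over eight years
gdp_increase = 2.48e12  # $2.48 trillion

implied_baseline = gdp_increase / growth_rate
print(f"Implied baseline GDP: ${implied_baseline / 1e12:.1f} trillion")
# -> about $19.7 trillion, in line with US GDP around 2017
```
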
So the question becomes, as developing machine intelligence technology displaces hundreds of millions of human workers, will the world's governments have the political will to actually support them by implementing a "dole" system that looks like the best possible solution? In countries like the United States, where "socialism" is currently a worse political insult than "fascist," and where income disparity is about to get a lot worse as Republicans push through a huge tax break for their wealthy donors at the expense of lower-income Americans and more than a trillion dollars of additional debt, will Universal Basic Income have any chance of becoming a thing in time to matter? And what happens to the global economy if they can't?

And with that, I return you to the Singularity, already in progress.

September 01, 2017

Should robots pay income tax?

I have good news, and bad news. The bad news is that existing automation technologies are already good enough to steal 5% of all jobs in the U.S. by 2021; a Canadian study puts the figure at 42% within two decades. The good news is that people are actually thinking about this inevitable issue, and how to tackle it.

From CBC News:
Jane Kim, a municipal politician in San Francisco, launched a campaign this week called the Jobs of the Future Fund to study how a statewide income tax on job-stealing machines might work.
Assuming automation is inevitable, Kim proposes that proceeds from the tax bankroll new opportunities (for those of us who aren't made up of chips and data) through job retraining and investments in education.
Since robots can't actually pay taxes on their own (for now), a company that employs robots might pay the government a tax in accordance with how much money each robot has generated, or based on the profits that come from the labour savings of an automated workforce.
The idea of a robot tax was first introduced earlier this year by Bill Gates, who said in an interview with Quartz: "Right now if a human worker does $50,000 worth of work in a factory that income is taxed. If a robot comes in to do the same thing, you'd think we'd tax the robot at a similar level."
[...]
But the concept has its detractors. Critics argue that taxing robots would disincentivize companies from adopting them and could impede innovation.
Taxing robots is a particularly bad idea in an era of low productivity growth, according to Robert Seamans, an associate professor of management at New York University.
"The existing empirical evidence suggests that robots boost productivity growth, so a tax on robots would limit that productivity," he says.
Gates, who is a philanthropist these days, argues that slowing down the adoption of automation might not be such a bad idea. It would give us more time to be thoughtful in how we approach the shifting economy, and to avoid the social crisis that could arise if we're not prepared for widespread job displacement.
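Gates's version of the idea reduces to very simple arithmetic: tax the automated work at roughly the rate the displaced wage would have borne. A hypothetical sketch (the 25 percent effective rate is my placeholder; neither article specifies one):

```python
def robot_tax(value_of_work, effective_tax_rate=0.25):
    """Tax automated work at the rate a human wage would have borne.
    The 25% rate is a placeholder for illustration only."""
    return value_of_work * effective_tax_rate

# Gates's example: a robot doing $50,000 worth of factory work.
print(robot_tax(50_000))  # -> 12500.0, comparable to the income
                          #    tax a human worker might have paid
```
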
I think I'm with Gates on this one. Universal Basic Income is already being tested in Canada and Finland, but U.B.I. needs to be funded; if we can implement a funding model which simultaneously slows the pace at which workers get replaced, then we might have a model which can show how to manage the transition from our present, in which workers struggle to find jobs, to our future, in which the very term "worker" is obsolete. Make no mistake, though - that future is coming.

We now return you to the Singularity, already in progress.

August 25, 2017

We now return you to the Singularity, already in progress...

If you're starting to feel as if modern life looks more and more like an episode of Black Mirror, then you're not alone. It seems like every other day I'm seeing stories like this one, from the CBC:
There was a time that oil companies ruled the globe, but "black gold" is no longer the world's most valuable resource — it's been surpassed by data.
The five most valuable companies in the world today — Apple, Amazon, Facebook, Microsoft and Google's parent company Alphabet — have commodified data and taken over their respective sectors.
"Data is clearly the new oil," says Jonathan Taplin, director emeritus of the USC Annenberg Innovation Lab and the author of Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy.
But with that domination comes responsibility — and jurisdictions are struggling with how to contain, regulate and protect all those ones and zeros.
For instance, Google holds an 81 per cent share of search, according to data metrics site Net Market Share.
By comparison, even at its height, Standard Oil only had a 79 per cent share of the American market before antitrust regulators stepped in, Taplin says.
[...]
Traditionally, this is where the antitrust regulators would step in, but in the data economy it's not so easy. What we're seeing for the first time is a clash between the concept of the nation state and these global, borderless corporations. A handful of tech giants now surpass the size and power of many governments.
For comparison's sake, Facebook has almost two billion users, while Canada has a population of just over 36 million. Based on the companies' sheer scale alone, it is increasingly difficult for countries to enforce any kind of regulation, especially as the tech giants start pushing for rules that free them from local restrictions, says Open Media's Meghan Sali.
Machine learning and automation technologies could render as much as 40% of the population in the developed world not merely unemployed, but unemployable. Huge technology companies now control more access to information, and more money, than most governments. There's a growing realization that everyone around you has an internet-enabled video camera in their pocket, and could be surveilling you with neither your knowledge nor your consent, all the time, and uploading the data to those same huge mega-corporations. It's not quite Neuromancer, but we're definitely living in a world that's very different from the world of even twenty years ago; a world that's still changing, since all of these technological trends are still in their early days.

And that doesn't even factor in the Earth's changing climate...

Welcome to the world of tomorrow, today; the Singularity, already underway.

There's a lot more to that CBC piece, by the way; it's definitely worth reading.

July 24, 2017

We now return you to the Singularity, already in progress...

When it comes to truly autonomous self-driving vehicles, or capital-"A" Autos, it's helpful to remember that this cost- and life-saving technology doesn't need to be perfect in order to see widespread adoption; as CGP Grey put it, they just need to be better than us. And here's the thing; they already are better than us. Waymo's Auto has been in only 14 crashes during testing, of which 13 were caused by the human drivers that the AI had to share the road with; only one was caused by the Auto's software.

With a safety record like that, there was never any question that Autos would find their way onto our roads. The only question was, "When?" How long would it take the general public to accept the presence of self-driving vehicles on our roads? How long would public unease with these new AI drivers prompt risk-averse politicians to drag their feet on giving approval for automakers to bring these autonomous autos to market?

Well, we now know some of the answers to those questions. The general public may still be working their way around to a general acceptance of this new tech, but it seems that the politicians are all done with the foot-dragging, as of last Wednesday.

From Reuters:
A U.S. House panel on Wednesday approved a sweeping proposal by voice vote to allow automakers to deploy up to 100,000 self-driving vehicles without meeting existing auto safety standards and bar states from imposing driverless car rules.
Representative Robert Latta, a Republican who heads the Energy and Commerce Committee subcommittee overseeing consumer protection, said he would continue to consider changes before the full committee votes on the measure, expected next week. The full U.S. House of Representatives will not take up the bill until it reconvenes in September after the summer recess.
[...]
Democrats praised the bipartisan proposal but said they want more changes before the full committee takes it up, including potentially adding other auto safety measures.
[...]
Separately, Republican Senator John Thune, who is working with Democrats, said Wednesday he hopes to release a draft self-driving car reform bill before the end of July. 
AI does not need to be super-intelligent to completely reshape the way we do everything; automation technologies are already good enough to replace forty to fifty percent of the work force, and the impact that will have on society is incalculable. Self-driving Autos alone can replace cab drivers (Uber and Lyft are both working on this) and truck drivers (long-haul transportation, in particular, would benefit) long before consumers get around to replacing all of their personal transports with a shiny, new, auto. The economic impact of that can't be overstated; it's not just the drivers' jobs that would be replaced, but also the related businesses, like truck stops, that would suddenly find themselves obsolete, and their employees looking for increasingly rare employment.

If you were thinking that this change was decades away, then think again. The benefits of Autos, in increased safety and productivity, are simply too compelling to ignore, and the only voices that might be inclined to fight against it (i.e. labour unions) just aren't influential enough, anymore, to be able to turn the tide. This is happening. Self-driving cabs and long-haul trucks will be plying city streets and highways across the continental U.S. in a matter of just a few years, with other countries doubtless following close behind, and industrialized society will never be the same again.

This is the Singularity, happening in slow motion. AI may still be well short of the super-intelligent mark, or even human-equivalent general intelligence, but it turns out that those things are not necessary for the Singularity to occur. There's a reason why governments across the industrialized world are already looking into things like Universal Basic Income as a way of caring for the basic needs of a populace who will find themselves not merely unemployed, but unemployable, within our lifetimes, through absolutely no fault of their own.

What will society look like when 40% of the population have nothing but time on their hands? What happens when simply having a job becomes a weird sort of status symbol, instead of being simply the baseline assumption that drives our perceptions of ourselves, and of each other? I don't know; I don't think anybody knows. But the U.S. Congress, in a rare display of bipartisan agreement, has just decided that everybody, both within the U.S. and without, is about to find out.

We now return you to the Singularity, already in progress.

July 09, 2017

Quantum computing's ENIAC

One of the earliest electronic general-purpose computers ever made, the Electronic Numerical Integrator and Computer (or, ENIAC) was a monster of a machine. Brought online in 1946, ENIAC eventually grew to include 17,468 vacuum tubes, 7200 crystal diodes, 1500 relays, 70,000 resistors, 10,000 capacitors and approximately 5,000,000 hand-soldered joints. It weighed over 27 tonnes, consumed 150 kW of electricity, ran programs from punch cards (a huge leap forward from earlier models of computer), cost 6.1 million inflation-adjusted US dollars, and had roughly the computational power of a scientific calculator.

It's now 71 years later, and today's personal computing technology bears very little resemblance to ENIAC. A modern smartphone can weigh as little as 155 g (5.47 oz, or about 0.00057% of ENIAC's weight), yet packs 1300 times ENIAC's computing power into that tiny package. Compared to today's computing technology, ENIAC is a dinosaur, and it would be virtually useless for any modern computing application, but ENIAC is also the start of the modern computer era; something like ENIAC had to exist first in order for our modern information age to be possible.
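The weight comparison, at least, is trivial to verify:

```python
# Comparing a modern smartphone to ENIAC, using the figures above.
eniac_weight_g = 27_000_000  # a bit over 27 tonnes, in grams
phone_weight_g = 155

ratio = phone_weight_g / eniac_weight_g
print(f"A phone weighs about {ratio:.5%} of ENIAC")  # -> ~0.00057%
# ...while packing roughly 1300x ENIAC's computing power
```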

Knowing this, I have to admit that it gave me something of a frisson to see what the current state of quantum computing looks like, courtesy of Linus Tech Tips:


Voilà! The ENIAC of our age.

March 28, 2017

Welcome to the Singularity

The Singularity: wherein developments in artificial intelligence reshape our civilization in profound and irreversible ways. It sounds like science fiction, but we live in a world where science fiction becomes science fact every single day, and the singularity isn't some far-future possibility anymore. Increasingly, it's the world we live in right now. And people are starting to notice.

From the NY Times:
Who is winning the race for jobs between robots and humans? Last year, two leading economists described a future in which humans come out ahead. But now they’ve declared a different winner: the robots.
The industry most affected by automation is manufacturing. For every robot per thousand workers, up to six workers lost their jobs and wages fell by as much as three-fourths of a percent, according to a new paper by the economists, Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University. It appears to be the first study to quantify large, direct, negative effects of robots.
The paper is all the more significant because the researchers, whose work is highly regarded in their field, had been more sanguine about the effect of technology on jobs. In a paper last year, they said it was likely that increased automation would create new, better jobs, so employment and wages would eventually return to their previous levels. Just as cranes replaced dockworkers but created related jobs for engineers and financiers, the theory goes, new technology has created new jobs for software developers and data analysts.
But that paper was a conceptual exercise. The new one uses real-world data — and suggests a more pessimistic future. The researchers said they were surprised to see very little employment increase in other occupations to offset the job losses in manufacturing.
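To make the quoted coefficient concrete: one robot per thousand workers costing up to six jobs works out, in effect, to as many as six jobs lost per industrial robot. A rough sketch (the robot count is my ballpark illustration, not a figure from the paper):

```python
def projected_job_losses(n_robots, losses_per_robot=6.0):
    """Upper-bound estimate from the Acemoglu/Restrepo coefficient:
    up to six workers displaced per robot."""
    return n_robots * losses_per_robot

# Illustrative input only: on the order of 200,000 industrial robots
# installed in the US (my ballpark, not a figure from the study).
print(f"{projected_job_losses(200_000):,.0f} jobs at risk")  # 1,200,000
```
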
CGP Grey posted an excellent video on this same subject back in 2014, so this isn't news to everybody, but with former optimists coming around to the more pessimistic, realistic view, look for this to become a more common narrative thread going forward. I know that common wisdom in the last U.S. election was that Dems didn't spend enough time talking about how they were going to bring back blue-collar jobs, but reality is what persists in being true, regardless of what you want to believe, and the reality is that Obama was right: the economic force that's eliminating factory jobs really is automation, and there really isn't much that anyone can do about that now, except prepare for the coming paradigm shift.

Imagine a society where everyone gets a guaranteed minimum income, and hustles on the side for extra cash. And then stop imagining, because it's already happening, with pilot projects in Finland and Ontario. This is our new normal: unemployment statistics are, and have always been, more fiction than fact, "full employment" is something that we'll never see again in our lifetimes, and there may be no other way to ensure that we can continue having an economy, when 40% of workers' jobs have been replaced by automation.

Welcome to the Singularity.

September 28, 2016

Speaking of transformative technology...

Remember when I was saying that Autos (i.e. fully autonomous vehicles) weren't going to be limited to self-driving cars, and that the ability to remove drivers entirely was going to have a huge impact on the future shapes they would take?

Well, you can change all the tenses in that statement from future to present.


Yes, that's a massive, self-driving mining dump truck, and it's just the first piece in the creation of totally automated mining sites:


We are one step closer to being able to strip-mine the Earth on autopilot, with mine sites like this that won't need anywhere near as many humans to operate as mines do today. And this is just the beginning of the sort of full-scale, next-generation automation technology that's in the works.

BTW, if you want to see another, smaller-scale example of the possible shape of future autonomous vehicles... Nissan's got you covered.


Welcome to the actual Singularity, already in progress. #nohype

September 14, 2016

This is what a transformative technology looks like

From TechCrunch:
Beginning today, a select group of Pittsburgh Uber users will get a surprise the next time they request a pickup: the option to ride in a self driving car.
The announcement comes a year-and-a-half after Uber hired dozens of researchers from Carnegie Mellon University’s robotics center to develop the technology.
Uber gave a few members of the press a sneak peek Tuesday when a fleet of 14 Ford Fusions equipped with radar, cameras and other sensing equipment pulled up to Uber’s Advanced Technologies Campus (ATC) northeast of downtown Pittsburgh.
During my 45-minute ride across the city, it became clear that this is not a bid at launching the first fully formed autonomous cars. Instead, this is a research exercise. Uber wants to learn and refine how self driving cars act in the real world. That includes how the cars react to passengers — and how passengers react to them.
“How do drivers in cars next to us react to us? How do passengers who get into the backseat who are experiencing our hardware and software fully experience it for the first time, and what does that really mean?” said Raffi Krikorian, director of Uber ATC.
If they are anything like me, they will respond with fascination followed by boredom.
Driver error kills thousands of people every year in the U.S. alone, so self-driving cars don't have to be perfect in order to make our roads much, much safer -- they just have to be better than us. And here's the thing: they're already better than us. The hurdles to getting self-driving autos on the road aren't technological -- the technology already exists, and it already works well enough to be an enormous improvement over the status quo.

No, the real hurdles to adoption of this technology are cultural -- matters of public perception, and the influence that perception can have on the politicians who will be called on to modify existing laws in order to allow fully autonomous vehicles to be rolled out in large numbers.