The idea of the technological singularity, where artificial intelligence overtakes human intelligence, leading to runaway technological growth with unknown implications for human society, is well-established, although how likely it is remains controversial.

There are numerous concerns about the implications of increasingly autonomous computer and robot systems with artificial intelligence. A very important one relates to autonomous weapon systems, or killer robots, which not only operate without a physical human pilot or driver, but use AI algorithms to make their own decisions about who to target, and when. In the short term, there are all sorts of moral and legal concerns - who is to be held responsible if an algorithm kills an innocent person? In the longer term, there is the potential for killer robots to turn against their makers, take over the world and destroy humanity. Such a risk may be far in the future, but it seems to me far from implausible: once you start building algorithms that 'work', but in ways that human programmers do not fully understand, there must be a risk that they will develop in ways completely contrary to the intentions of the programmers. The Campaign to Stop Killer Robots campaigns on just this issue. It seems to me that, as AI becomes a reality, something akin to Asimov's Laws of Robotics becomes a no-brainer.

The other big potential danger that is often talked about, and which is the main subject of this post, is that of mass unemployment as robots replace more and more human jobs. This has long been the case for blue-collar manufacturing jobs, of course, but now the middle classes are beginning to sit up and take notice, with cases like the recent decision by the Japanese insurance company Fukoku Mutual Life to replace 34 employees who assess insurance claims with the IBM Watson AI system. The Nomura Research Institute estimated in 2015 that half of all jobs in Japan could be replaced by robots by 2035.

Up to now, advances in technology have certainly caused significant sectoral employment problems among workers with particular skills that are no longer needed; the tendency of a Capitalist economy has been to shrug its shoulders at the fate of these obsolete workers and leave them to rot on the dole, if they're lucky. Sometimes, under more social-democratically oriented governments, there may be some effort at retraining, reskilling, industrial and regional policy, etc., to provide new opportunities to such workers. So far, however, fears that advancing technology would lead to permanent and growing mass unemployment have proved unfounded; new technologies make some occupations redundant or less needed, but create new ones, and expand the production possibility frontier so that the great majority of workers can still be employed one way or another, while producing more and more output. Not that this is unproblematic, for all sorts of social, economic and environmental reasons, but the majority of humanity has not been thrown on the scrapheap, and indeed extreme poverty continues to diminish.

Perhaps, then, fears of economic doom due to AI are misplaced? In fact, I think it may be worse than most people think.

Starting with economic fundamentals, production (in the economy as it has been up to now) requires a combination of labour and capital. (The latter, in a broad sense, may include land.) Labour is paid a wage; capital receives a rate of return, in the form of profits, interest or rent.
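
To make that concrete, here is the textbook version in my own notation (a standard Cobb-Douglas sketch, not something taken from any particular source): write output as

$$ Y = F(K, L) = K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1. $$

Under competitive factor markets labour is paid its marginal product, $w = \partial Y/\partial L = (1-\alpha)\,Y/L$, and capital earns $r = \partial Y/\partial K = \alpha\,Y/K$, so labour takes the share $(1-\alpha)$ of output and capital the share $\alpha$. The only detail that matters for what follows is that $F$ needs both inputs: set $L = 0$ and output, and hence the return on capital, is zero.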

But capital, and the owners of capital, need labour - need the rest of us, the great majority of us who depend on our labour for our livelihoods* - in two ways: first, as a means of production; you need some combination of people, land and machines; and second, as a market for the goods and services produced by that labour.

This is crucial. Capital does not reproduce itself, does not get a rate of return by some intrinsic magical property, but because there is demand for the goods and services capital helps produce. It is true that the rich themselves form an important market, but that is not enough to sustain the great majority of owners of capital. The owners of Starbucks and McDonald's could not become rich just by selling to the rich. Even landlords, say, can only earn rent if their tenants are able to pay it, which means the tenants need employment (or government transfers).

But if AI becomes sufficiently advanced, this could cease to be true. If capital can create more capital without labour input - that is, if robots can build robots that can in turn do all (or almost all) the necessary work - then those who own capital (robots and the technology that drives them) not only no longer need labour for production, they no longer need to mass-produce products and services to be sold to the majority of the population. Their capital can provide them with all the necessities and luxuries they desire, and continue to reproduce itself to greater and greater levels of sophistication. No doubt a small number of very lucky humans would be needed to help maintain things (and they would quickly join the ranks of the rich, the robot-owners), but the great majority of the population would become completely surplus to the requirements of the elite.
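
To see the mechanics of that switch, here is a deliberately crude toy model (all the numbers and assumptions below are invented purely for illustration, not estimates from anywhere): workers and robots are treated as perfect substitutes for tasks, the cost of robot labour falls steadily, and robots build more robots. The wage humans can command is capped by the cost of the robot alternative; once that falls below a subsistence wage, firms stop hiring humans altogether, yet output and the robot stock keep growing.

```python
# Toy model of self-reproducing capital. All assumptions are invented for
# illustration: workers and robots are perfect substitutes for tasks,
# robot costs fall at a fixed rate, and robots build more robots.

SUBSISTENCE_WAGE = 1.0   # below this wage, humans cannot compete and live
COST_DECLINE = 0.92      # robot task cost falls 8% per period (assumption)
ROBOT_BUILD_RATE = 0.10  # each robot builds 0.1 new robots per period
WORKERS = 100            # available human workers

robot_cost = 5.0         # initial cost of having a robot do one task
robots = 10.0            # initial robot stock

for t in range(40):
    # Competition from robots caps the wage at the robot task cost.
    wage_ceiling = robot_cost
    humans_employed = WORKERS if wage_ceiling >= SUBSISTENCE_WAGE else 0

    # Tasks done this period: every robot works, plus any humans still hired.
    output = robots + humans_employed

    # Robots reproduce and get cheaper regardless of human employment.
    robots *= 1 + ROBOT_BUILD_RATE
    robot_cost *= COST_DECLINE

    if t % 5 == 0 or humans_employed == 0:
        print(f"t={t:2d}  wage_ceiling={wage_ceiling:5.2f}  "
              f"output={output:7.1f}  humans_employed={humans_employed}")
    if humans_employed == 0:
        print("Labour is now surplus to requirements; output keeps growing anyway.")
        break
```

The sketch only makes the qualitative point: up to some date labour is still employed because it is still competitive; after that date it is needed neither for production nor, since the robot-owners can consume the output themselves, as a market.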

This is a truly terrifying prospect. Would the rest of humanity even be allowed to survive? Perhaps the elite 1% or so would allow the rest of us to continue to eke out an existence as best we could on whatever portions of the earth they decided they had no use for, and without access to the technologies that allow the elite their luxurious lifestyle. They would certainly want to sequester for themselves all the key natural resources they need to keep this new economy running. They would protect themselves, of course, not only with high fences but with robot armies. They would probably see a need to periodically 'cull the herd' of the roaming barbarians outside their protected zones, lest we threaten their system in some way. I suspect they would quite quickly come to see the rest of us as less than human. Maybe some of them would extend 'charity' to a few of us.

Perhaps in such a scenario, a robot rebellion would not be the ultimate fear, but our only hope.

Is there a flaw in my reasoning? There may well be, I hope there is, and please do point it out if so. Or is the point at which capital becomes self-reproducing so far in the future that it is not a serious concern for now, especially in the face of other civilization-threatening challenges? Perhaps the Future of Humanity Institute has already analyzed this question, although I did not see anything obviously relevant on a cursory look at their website.

But if my line of reasoning is correct, then Socialism becomes all the more urgent - that is, the socialization of the means of production, of the technologies that would enable self-contained labour-free production. If capital is all that is needed for production, then we must all own the capital.

The choice will be between fully automated luxury space communism, or the end of humanity as we know it.



*We must also include those of us who do not own capital, but who are unable to work due to unemployment, sickness or disability, or old age. Those of us in this position either depend on our own past labour (savings, pensions), or on a social transfer system that relies on labour income from a large proportion of the population.
