Are we now living in an Automation Age?

Historians have grouped epochs of human technological advancement into various "Technology Ages", characterized by the predominant types of tools and weapons humans used. These include the following (some are debatable, but most are widely accepted and cited; I list them here only to frame my question):

  • The Stone Age (~3.4M BCE to ~3300 BCE)
  • The Neolithic / Agricultural Age (~12,500 BCE to ~3300 BCE)
  • The Bronze Age (~3300 BCE to ~1200 BCE)
  • The Iron Age (~1200 BCE to ~200 BCE)
  • The Greek / Roman Age (~200 BCE to ~400 CE)
  • The "Dark" Age / Early Middle Ages (~500 CE to ~1000 CE)
  • The Middle Ages (~500 CE to ~1500 CE)
  • The Renaissance Age aka Age of Discovery (~1500 to ~1800)
  • The Industrial Age (~1800 to July 16, 1945)
  • The Atomic Age (1945 to 1958?)
  • The Jet Age (Late 1940s or 1958? to ???)
  • The Space Age (1957 to 1970s?)
  • The Information Age aka Computer Age, Digital Age, Internet Age (1970s? to ????)

Question: Are we still in the Information Age? Or, with robotics, data analytics, machine learning, deep learning, neural networks, artificial intelligence, drones and autonomous vehicles, are we turning the corner into a new Automation Age, as this Harvard Business Review article and this Futurism article suggest?

If not a new Automation Age, are we perhaps in some other Age now? Obviously, Ages can overlap, as has been the case with some of those listed above; we could well still be in the Information Age, a second Space Age (with the commercialization of space starting to take off, pun intended), and a new Automation Age all at the same time.

I'm looking for some "official" "Age-Keeper" decree or announcement (who gets to identify and name all the "Ages", by the way?), but I haven't yet found anything I would consider authoritative on what Age we may now be in. By authoritative I mean an article in a recognized peer-reviewed journal, or a doctoral thesis. Is this something we may only learn in hindsight?

Further Research
Additional research on this topic has uncovered the following articles all referencing the current Age of Automation:

  • The Key to Surviving in the Age of Automation - Forbes, Feb 2, 2017
  • Entering the Automation Age: How Software Can Help Young Tech Companies Scale - Forbes, May 14, 2018
  • Human Capabilities Will Trump Robots in the Age of Automation - Silicon Republic, April 9, 2018
  • Study: The U.S. isn't doing enough to prepare students for the Automation Age - The Hechinger Report, April 25, 2018
  • The Age of Automation: Artificial Intelligence, Robotics and the Future of Low-Skilled Work - The RSA (Royal Society for the encouragement of Arts, Manufactures and Commerce), Sept 17, 2017

While none of these are scientific journals (though the last, the RSA, may at least have some standing as a quasi-official organization?), they all independently refer to a current Age of Automation, in addition to the previously cited HBR and Futurism articles. While lacking academic accreditation, does this growing body of work start to form some sort of industry consensus regarding a current Age of Automation?

Previous Unrelated Question on Modern / Late Modern / Post Modern terminology refers to cultural periods, not technological ages (nor to geological ages for that matter). The answer to the Modern / Late Modern / Post-Modern question in no way answers this question.

The very concept of an "Age of X" is flawed. If one hears phrases like "Stone Age" or "Dark Ages", roughly the only two "ages" a historian would use seriously, one instantly attaches negative connotations to the timeframe under discussion.

That is not to say that "ages" are not a well established, popular and even classic concept:

The Ages of Man are the stages of human existence on the Earth according to Greek mythology and its subsequent Roman interpretation.

Very much like in Hesiod's ages of man:

  • Golden Age - The Golden Age is the only age that falls within the rule of Cronus. Created by the immortals who live on Olympus, these humans were said to live among the gods, and freely mingled with them. Peace and harmony prevailed during this age. Humans did not have to work to feed themselves, for the earth provided food in abundance. They lived to a very old age but with a youthful appearance and eventually died peacefully. Their spirits live on as "guardians". Plato in Cratylus (397e) recounts the golden race of men who came first. He clarifies that Hesiod did not mean men literally made of gold, but good and noble. He describes these men as daemons upon the earth. Since δαίμονες (daimones) is derived from δαήμονες (daēmones, meaning knowing or wise), they are beneficent, preventing ills, and guardians of mortals.
  • Silver Age - The Silver Age and every age that follows fall within the rule of Cronus's successor and son, Zeus. Men in the Silver age lived for one hundred years under the dominion of their mothers. They lived only a short time as grown adults, and spent that time in strife with one another. During this Age men refused to worship the gods and Zeus destroyed them for their impiety. After death, humans of this age became "blessed spirits" of the underworld.
  • Bronze Age - Men of the Bronze Age were hardened and tough, as war was their purpose and passion. Zeus created these humans out of the ash tree. Their armor was forged of bronze, as were their homes, and tools. The men of this Age were undone by their own violent ways and left no named spirits; instead, they dwell in the "dank house of Hades". This Age came to an end with the flood of Deucalion.
  • Heroic Age - The Heroic Age is the one age that does not correspond with any metal. It is also the only age that improves upon the age it follows. It was the heroes of this Age who fought at Thebes and Troy. This race of humans died and went to Elysium.
  • Iron Age - Hesiod finds himself in the Iron Age. During this age humans live an existence of toil and misery. Children dishonor their parents, brother fights with brother and the social contract between guest and host (xenia) is forgotten. During this age might makes right, and bad men use lies to be thought good. At the height of this age, humans no longer feel shame or indignation at wrongdoing; babies will be born with gray hair and the gods will have completely forsaken humanity: "there will be no help against evil."

But that does not make it useful for historical discussion. Not because of the mythological content, but because it is Hesiod's attempt to construct a usable past, or, as we would call it, to write history. In our contemporary understanding we cannot use the same terms to describe the same periods of time. Yet the theme of a nostalgic past and of decay in the present and future resonates very well today. That makes it more a matter of a certain world view, philosophy or simply politics. From Hesiod's description, we surely still live in a heroic age, and are discontent with it.

In contemporary historiography (owing to the nature of the question, I hesitate to call it modern), we need a different approach if we want to keep this on the level of scientific investigation.

That is called periodization:

Periodization is the process or study of categorizing the past into discrete, quantified, named blocks of time in order to facilitate the study and analysis of history. This results in descriptive abstractions that provide convenient terms for periods of time with relatively stable characteristics. However, determining the precise beginning and ending of any "period" is often arbitrary, and has itself changed over the course of history.

To the extent that history is continuous and ungeneralizable, all systems of periodization are more or less arbitrary. Yet without named periods, however clumsy or imprecise, past time would be nothing more than scattered events without a framework to help us understand them. Nations, cultures, families, and even individuals, each with their different remembered histories, are constantly engaged in imposing overlapping, often unsystematized, schemes of temporal periodization; periodizing labels are continually challenged and redefined, but once established, a period "brand" is so convenient that many are very hard to shake off.

That raises the problem of arbitrariness: does it make sense to speak of medieval China? Does it make sense to apply concepts that fit Western/European history very well to Africa? Sometimes it does; most of the time it doesn't.

To really come to terms with the question on a general level, we might take some inspiration from art history. There, certain styles are useful for periodization: this is proto-geometric pottery, this is Gothic architecture, and this is Impressionist painting. When were these styles named as such? Seldom while they were "in"; at the earliest, when they were on the way out. Therefore: something only becomes a style once you no longer have it.

See for example this currently unresolved History SE question: What did they call Gothic and Baroque architecture before the modern terms came into use? The style originated in the high middle ages in France, and we now quite clearly recognise churches and other buildings from that time as Gothic. Contemporaries apparently didn't see it that way:

The Gothic style began to be described as outdated, ugly and even barbaric. The term "Gothic" was first used as a pejorative description. Giorgio Vasari used the term "barbarous German style" in his 1550 Lives of the Artists to describe what is now considered the Gothic style.
WP: Gothic architecture

In other words: we ourselves cannot see when a period, or age if you will, starts, when it ends, or what defines it.

The "hellenistic period":

covers the period of Mediterranean history between the death of Alexander the Great in 323 BC and the emergence of the Roman Empire as signified by the Battle of Actium in 31 BC

and yet:

The word originated from the German term hellenistisch, from Ancient Greek Ἑλληνιστής (Hellēnistḗs, "one who uses the Greek language"), from Ἑλλάς (Hellás, "Greece"); as if "Hellenist" + "ic".

"Hellenistic" is a modern word and a 19th-century concept; the idea of a Hellenistic period did not exist in Ancient Greece.

This is the job of future historians. And they will argue about it with the most joyous and bitter disagreements.

We might still define 'an age' for us and to us, believing to be living in it. That this classification will survive even the next generation is highly unlikely.

As I say elsewhere:

many interrelated issues seems critical now, but we will know which were truly important only when this period ends

First, the Atomic, Jet and Space "ages" all continue simultaneously today: we use nuclear energy and weapons (as a deterrent), fly in jets and actively use (near) space. In other words, they are not mutually exclusive but describe different prominent technologies.

Similarly, the Industrial Age was originally called the Age of Steam and then renamed the Age of Electricity, but, in retrospect, "Industrial Age" seems a better choice.

Second, the current period seems to be called the "Post-industrial" age.

The Information age is merely an aspect of the Post-industrial society. The moniker might be superseded by an "age of AI" in a few years - or it might not.

The Anthropocene is a proposed designation for our current epoch, characterised by significant human impact upon the Earth's geology and ecology; it includes, but is not limited to, anthropogenic climate change.

The term has not yet been officially recognised by the International Union of Geological Sciences nor by the International Commission on Stratigraphy, though it appears to be going through a process of ratification: it was presented as a recommendation to the International Geological Congress in August 2016 for official incorporation into the Geological Time Scale.

Scientists in the Soviet Union used the term in the 1960s to refer to the Quaternary, the most recent geological period. It was then picked up by the ecologist Eugene Stoermer in the 1980s and widely popularised by the atmospheric chemist Paul Crutzen in the 2000s.

The assembly line has long been considered one of the greatest innovations of the 20th century. It has shaped the industrial world so strongly that businesses that did not adopt the practice soon became extinct, and it was one of the key factors that helped integrate the automobile into American society.

The Early Assembly Line Concept

Prior to the Industrial Revolution, manufactured goods were usually made by hand, with individual workers developing expertise in one portion of a product. Each expert crafted his own part of the item with simple tools; after every component was finished, the parts were brought together to complete the final product.

As early as the 12th century, workers in the Venetian Arsenal produced ships by moving them down a canal where they were fitted with new parts at each stop. During its most successful time, the Venetian Arsenal could complete one ship each day.

Eli Whitney and Interchangeable Parts

With the start of the Industrial Revolution, machines began to perform work that once required human hands, and factories sprang up to replace small craft shops. This change was made possible by the concept of interchangeable parts, an innovation popularized in the United States by Eli Whitney.

The concept of interchangeable parts first took hold in the firearms industry, when French gunsmith Honoré LeBlanc promoted the idea of using standardized gun parts. Before this, firearms were made individually by hand, so each weapon was unique and could not easily be repaired if broken. Fellow gunsmiths, realizing the threat LeBlanc's idea posed to their custom creations, resisted it, and the concept failed to catch on. Another European craftsman had similar ideas: naval engineer Samuel Bentham, from England, used uniform parts in the production of wooden pulleys for ships.

It wasn't until Eli Whitney introduced the idea in the United States that the practice took off. He was able to use a large unskilled workforce and standardized equipment to produce large numbers of identical gun parts at low cost and within a short amount of time. It also made repair and parts replacement far more practical.

Ransom Olds

Ransom Olds created and patented the assembly line in 1901. Switching to this process allowed his car manufacturing company to increase output by 500 percent in one year, and the Curved Dash model could be produced at the exceptionally high rate of 20 units per day.

The Oldsmobile brand could then offer a vehicle with a low price, simple assembly and stylish features. Its car was the first to be produced in large quantities. Olds' assembly line method was the first used in the automotive industry and served as the model from which Henry Ford created his own.

Henry Ford improved upon the assembly line concept by using the moving platforms of a conveyor system. In this system the chassis of the vehicle was towed by a rope that moved it from station to station in order to allow workers to assemble each part.

Using this method, a Model T could be produced every ninety minutes, amounting to nearly two million units in one of the company's best years. Though often credited as the father of the assembly line, Ford would more appropriately be called the father of automotive mass production.

Mass Production and the Robotic Age

Throughout the 1950s and 1960s, engineers around the world experimented with robotics as a means of industrial development. General Motors installed its own robotic arm to assist its assembly line in 1961. In 1969, Stanford engineer Victor Scheinman created the Stanford Arm, a 6-axis robot that could move and assemble parts in a continuous repeated pattern. This invention expanded robot use in ways that continue to be applied in modern assembly. At a Philips Electronics factory in the Netherlands, production is completed by a number of robot arms assigned to specific tasks.

Today robotics is reaching a completely new level of sophistication. Companies like Rethink Robotics are striving to develop adaptive manufacturing robots that can work next to humans; these robots would help to improve efficiency and increase productivity. Rethink Robotics in particular is working on making its robots low-cost and user-friendly. Its Baxter robot, originally launched in 2012, is being upgraded all the time.

Unbounded Robotics recently launched a robot called UBR-1, with manipulation, intelligence, and mobility for under $50k. It is being offered to universities as a research platform, similar to Baxter, but mobile. The one-armed robot can do human-scale tasks and offers advanced software and state-of-the-art hardware. Unbounded Robotics has been taking orders for UBR-1 and plans to start shipping this summer.

Lest anyone think that robots are not cost-effective or that they will simply replace humans in the workplace: robots like Baxter operate at about $3 per hour, and an estimated three to five million new jobs will be created this decade thanks to Baxter and other co-robots. Moreover, U.S. efficiency and productivity are three times those of China.

It’s obvious that robotics already has, and will certainly continue to have, a place in the world of manufacturing, if not in other areas of life. With the increase in technology that we see every year, there are great things to come in the field of robotics in the near future.

America's Second Gilded Age: More class envy than class conflict

That may explain why the period that Twain dubbed “the Gilded Age” seems so familiar today.

That time (roughly 1870-1900) shares much with our own: economic inequality and technological innovation; conspicuous consumption and philanthropy; monopolistic power and populist rebellion; two presidential elections in which the popular-vote loser won (Hayes in 1876 and Harrison in 1888); and change that was constant, exhilarating and frightening.

Historian Richard White, who published a book last year on the period, The Republic for Which it Stands, likes to say he was living in the second Gilded Age while writing about the first. The rest of us will get a chance to share that sensation with the debut next year of Downton Abbey creator Julian Fellowes’ dramatic TV series, The Gilded Age.

This second Gilded Age began sometime around 1990, around the fall of the Berlin Wall and the election of Bill Clinton. But gilded does not mean golden: in both eras, prosperity and progress overlaid poverty, racism, corruption and financial instability.

If the similarities between the periods weren’t sufficiently uncanny, in 2016 the nation elected as president Donald Trump, who personifies the second Gilded Age as much as robber baron industrialists and financiers did the first.

“It’s as if J. P. Morgan had been elected president,’’ says Boston College historian Patrick Maney. “Donald Trump puts an exclamation point on this Gilded Age.’’

One of the largest armies of unemployed men parades in Washington by bike, wagon and on foot on the first Labor Day observance, Sept. 3, 1894. Shown is Jacob S. Coxey's "Army of the Commonweal of Christ," with Coxey in the buggy at right, wearing a light suit.

Historical echoes

The prime similarity between the two gilded ages is a widening chasm between rich and poor. Last year, a UBS/PwC report concluded that the nation had “levels of inequality not seen since 1905.’’

But there are many others.

► In the Gilded Age, a rabble-rouser from Ohio named Jacob Coxey led a gang of unemployed men — Coxey’s Army — in a march on Washington to demand help from the government.

In the Second Gilded Age, the Occupy Wall Street movement took to the streets to demand an end to preferential treatment of the rich — the 1% — by the government.

Neither got what they wanted.

► In the Gilded Age, philanthropists founded institutions such as the Philadelphia Museum of Art (1876), New York’s Metropolitan Opera (1880), the Boston Symphony Orchestra (1881) and Chicago’s Field Museum of Natural History (1893). A rich man who dies rich, said industrialist Andrew Carnegie, “dies disgraced.’’

In the second Gilded Age, the 175 billionaires who've signed The Giving Pledge to give most of their wealth include Microsoft founder Bill Gates, Facebook CEO Mark Zuckerberg, former New York Mayor Michael Bloomberg and investor Warren Buffett (who’s also called for higher taxes on the rich).

► In the Gilded Age, John D. Rockefeller formed the greatest monopoly in history, Standard Oil, and railroad tycoons schemed to avoid competition.

In the second Gilded Age, consolidation has led toward oligopoly. According to a UN report, almost half of U.S. industries in 2012 were dominated by their four largest companies. Google, meanwhile, accounts for 87% of all internet searches.

► In the Gilded Age, Alva Vanderbilt’s “house warming party’’ for her “Petit Chateau,’’ a Fifth Avenue mansion that was the city’s most sumptuous, cost millions in today’s dollars. The costume masquerade ball was attended by 1,000 and established her in New York society.

In the Second Gilded Age, the 40th birthday party of Tyco CEO Dennis Kozlowski's wife in 2001 cost $2 million, half of which was paid by the publicly-traded company. The affair, held on the island of Sardinia, featured an ice sculpture of Michelangelo's David urinating Stolichnaya vodka, and a private concert by Jimmy Buffett.

► In the Gilded Age, companies were ruined, jobs lost and savings wiped out by periodic financial panics (1873, 1893 and 1907). But those behind the companies that failed often emerged unscathed.

In the second Gilded Age, companies were ruined, jobs lost and savings wiped out in the dotcom bubble (2000), the collapse of Enron Corp. (2001) and the housing finance crisis (2008). But those behind the companies that failed often emerged unscathed.

► In the Gilded Age, nativists feared the “yellow peril” of Chinese immigration and the “huddled masses’’ from southern and eastern Europe.

In the second Gilded Age, immigration fears have prompted calls for a wall across America’s Southern border and restrictions on immigration from Muslim nations.

► In the Gilded Age, Thomas Edison invented the phonograph in 1877.

In the second Gilded Age, Steve Jobs introduced the Apple iPhone in 2007.

► In the Gilded Age, white Southerners systematically withdrew the vote from millions of blacks during the rollback of federal Reconstruction from 1876 to 1900.

In the second Gilded Age, dozens of states have adopted measures to restrict or scrutinize voting eligibility.

►In the Gilded Age, Susan B. Anthony, Elizabeth Cady Stanton and Lucy Stone formed the first national suffrage organizations.

In the second Gilded Age, women who endured sexual harassment on the job formed the #MeToo movement.

Not all Gilded Ages are equal

For all the similarities, there are many differences between the two eras. Unions, and workers’ real wages, were rising then and stagnant or declining now. The current social services safety net, however tattered, was only a dream in the Gilded Age.

Above all, the Gilded Age was one of class conflict. The bloody labor strikes of the 19th century – like one in 1892 at Carnegie’s Homestead, Pa., steel works in which 12 people were killed – would be almost unimaginable today.

In the Gilded Age, the rich were regarded with suspicion or contempt. Now, says Maney, when the rich flaunt their wealth as a sign of their success, the strongest emotion they provoke is envy.

Nonetheless, as the researchers Ian Goldin and Chris Kutarna argued recently in their book Age of Discovery, we’ve never had it so good. Life expectancy, they point out, has risen more in the past 50 years than in the previous 1,000: a child born in 2016 stands a fairly good chance of seeing the arrival of the 22nd Century. The likelihood of a violent death has never been lower on average, we’re better educated than ever, and childhood mortality has plummeted. Among the most striking changes, the last few decades have brought remarkable successes in tackling global poverty: in 1981, almost half the people in the developing world lived below the poverty line; as of 2012, that figure had dropped to 12.7%.

Apart from an increase in living standards, such improvements mean that we are, in turn, better placed to solve the 21st Century’s problems. “These conditions create an ideal habitat for ideas and genius to flourish, and that flourishing is well under way,” write Goldin and Kutarna. “Science and technology has never been closer to flipping our basic condition from scarcity to abundance.”

And that’s important, because in 2016 the major global challenges are manifold. We may never have had it so good, but there’s no doubt that there remains much to improve about the world. Humanity’s profligacy is threatening to send global temperatures rising; our over-use of drugs places us on the cusp of an antibiotic apocalypse; and we are far from solving the continual and ubiquitous global suffering caused by disease, cancer and mental health problems. And that’s before you even consider the myriad societal issues we have yet to deal with, such as inequality, oppression, prejudice and lack of personal freedom.

No single person can change the world, but if enough talented minds are put to enough discrete problems – if we share knowledge, and exchange ideas with one another – then seemingly incremental progress can gradually transform into great leaps.

This year has had its share of bad headlines, but the bigger picture is rosy

That’s why BBC Future decided to bring together a selection of the world’s most fascinating thinkers at a live event. Sharing ideas, we believe, is the best way to nudge our species forward: what makes human beings unique among life on Earth is the ability to connect our minds.

Our ‘World-Changing Ideas Summit’ on 15 November in Sydney promises to be a thought-provoking exploration of how technology, science and health will transform the human experience. A unique and diverse group of people from the worlds of tech, medicine, transport, space travel and more will present bold ideas, showcase new technology, provoke discussion, and challenge imaginations about our shared future.

  • BBC TV presenter Michael Mosley on the future of food and health
  • Scientist Heather Hendrickson on tackling the antibiotic apocalypse
  • Uber’s Kevin Corti on the hidden patterns of city transport
  • Researcher and TV presenter Emma Johnston on the impact of cities on oceans
  • Astronauts Ron Garan and Andrew Thomas on the coming era of space travel

To be part of this conversation, and to discuss the world-changing ideas that could transform our future for the better, join us in Sydney as an audience member, or follow BBC Future on Facebook for news, updates and live coverage.

So, to return to the opening question: what is the true best time to be alive? If humanity can continue to intelligently exploit the findings of science and technology to reshape our future, the answer is simple: now.

Richard Fisher is the Editor of BBC Future. Twitter: @rifish

The WIRED Guide to Robots

Modern robots are not unlike toddlers: It’s hilarious to watch them fall over, but deep down we know that if we laugh too hard, they might develop a complex and grow up to start World War III. None of humanity’s creations inspires such a confusing mix of awe, admiration, and fear: We want robots to make our lives easier and safer, yet we can’t quite bring ourselves to trust them. We’re crafting them in our own image, yet we are terrified they’ll supplant us.

But that trepidation is no obstacle to the booming field of robotics. Robots have finally grown smart enough and physically capable enough to make their way out of factories and labs to walk and roll and even leap among us. The machines have arrived.

You may be worried a robot is going to steal your job, and we get that. This is capitalism, after all, and automation is inevitable. But you may be more likely to work alongside a robot in the near future than have one replace you. And even better news: You’re more likely to make friends with a robot than have one murder you. Hooray for the future!

The definition of “robot” has been confusing from the very beginning. The word first appeared in 1921, in Karel Capek’s play R.U.R., or Rossum's Universal Robots. “Robot” comes from the Czech for “forced labor.” These robots were robots more in spirit than form, though. They looked like humans, and instead of being made of metal, they were made of chemical batter. The robots were far more efficient than their human counterparts, and also way more murder-y—they ended up going on a killing spree.

R.U.R. would establish the trope of the Not-to-Be-Trusted Machine (e.g., Terminator, The Stepford Wives, Blade Runner, etc.) that continues to this day—which is not to say pop culture hasn’t embraced friendlier robots. Think Rosie from The Jetsons. (Ornery, sure, but certainly not homicidal.) And it doesn’t get much family-friendlier than Robin Williams as Bicentennial Man.

The real-world definition of “robot” is just as slippery as those fictional depictions. Ask 10 roboticists and you’ll get 10 answers—how autonomous does it need to be, for instance. But they do agree on some general guidelines: A robot is an intelligent, physically embodied machine. A robot can perform tasks autonomously to some degree. And a robot can sense and manipulate its environment.

Human-robot interaction: A field of robotics that studies the relationship between people and machines. For example, a self-driving car could see a stop sign and hit the brakes at the last minute, but that would terrify pedestrians and passengers alike. By studying human-robot interaction, roboticists can shape a world in which people and machines get along without hurting each other.

Humanoid: The classical sci-fi robot. This is perhaps the most challenging form of robot to engineer, on account of it being both technically difficult and energetically costly to walk and balance on two legs. But humanoids may hold promise in rescue operations, where they’d be able to better navigate an environment designed for humans, like a nuclear reactor.

Actuator: Typically, a combination of an electric motor and a gearbox. Actuators are what power most robots.

Soft robotics: A field of robotics that foregoes traditional materials and motors in favor of generally softer materials, pumping air or oil to move its parts.

Lidar: Light detection and ranging, a system that blasts a robot’s surroundings with lasers to build a 3-D map. This is pivotal both for self-driving cars and for service robots that need to work with humans without running them down.

The Singularity: The hypothetical point where the machines grow so advanced that humans are forced into a societal and existential crisis.

Multiplicity: The idea that robots and AI won’t supplant humans, but complement them.

Think of a simple drone that you pilot around. That’s no robot. But give a drone the power to take off and land on its own and sense objects and suddenly it’s a lot more robot-ish. It’s the intelligence and sensing and autonomy that’s key.

But it wasn’t until the 1960s that a company built something that started meeting those guidelines. That’s when SRI International in Silicon Valley developed Shakey, the first truly mobile and perceptive robot. This tower on wheels was well-named—awkward, slow, twitchy. Equipped with a camera and bump sensors, Shakey could navigate a complex environment. It wasn’t a particularly confident-looking machine, but it was the beginning of the robotic revolution.

Around the time Shakey was trembling about, robot arms were beginning to transform manufacturing. The first among them was Unimate, which welded auto bodies. Today, its descendants rule car factories, performing tedious, dangerous tasks with far more precision and speed than any human could muster. Even though they’re stuck in place, they still very much fit our definition of a robot—they’re intelligent machines that sense and manipulate their environment.

Robots, though, remained largely confined to factories and labs, where they either rolled about or were stuck in place lifting objects. Then, in the mid-1980s, Honda started up a humanoid robotics program. It developed P3, which could walk pretty darn well and also wave and shake hands, much to the delight of a roomful of suits. The work would culminate in Asimo, the famed biped, which once tried to take out President Obama with a well-kicked soccer ball. (OK, perhaps it was more innocent than that.)

Today, advanced robots are popping up everywhere. For that you can thank three technologies in particular: sensors, actuators, and AI.

So, sensors. Machines that roll on sidewalks to deliver falafel can only navigate our world thanks in large part to the 2004 Darpa Grand Challenge, in which teams of roboticists cobbled together self-driving cars to race through the desert. Their secret? Lidar, which shoots out lasers to build a 3-D map of the world. The ensuing private-sector race to develop self-driving cars has dramatically driven down the price of lidar, to the point that engineers can create perceptive robots on the (relative) cheap.
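The geometry behind that 3-D map is straightforward even if the hardware isn’t. As a rough sketch (not any particular lidar vendor’s API), each laser return can be converted from a range plus beam angles into a Cartesian point, and a full sweep of returns becomes the point cloud the robot navigates by:

```python
import math

def lidar_to_point(distance, azimuth_deg, elevation_deg):
    """Convert one idealized lidar return (range plus the beam's
    horizontal and vertical angles) into a Cartesian (x, y, z)
    point relative to the sensor."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A sweep of (range, azimuth, elevation) returns becomes a 3-D point cloud:
sweep = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (5.0, 0.0, 30.0)]
cloud = [lidar_to_point(d, az, el) for d, az, el in sweep]
```

Units and angle conventions vary by sensor; a self-driving stack then registers successive clouds against one another to build a consistent map as the vehicle moves.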

Lidar is often combined with something called machine vision—2-D or 3-D cameras that allow the robot to build an even better picture of its world. You know how Facebook automatically recognizes your mug and tags you in pictures? Same principle with robots. Fancy algorithms allow them to pick out certain landmarks or objects.

Sensors are what keep robots from smashing into things. They’re why a robot mule of sorts can keep an eye on you, following you and schlepping your stuff around. Machine vision also allows robots to scan cherry trees to determine where best to shake them, helping fill massive labor gaps in agriculture.

New technologies promise to let robots sense the world in ways that are far beyond humans’ capabilities. We’re talking about seeing around corners: At MIT, researchers have developed a system that watches the floor at the corner of, say, a hallway, and picks out subtle movements being reflected from the other side that the piddling human eye can’t see. Such technology could one day ensure that robots don’t crash into humans in labyrinthine buildings, and even allow self-driving cars to see occluded scenes.
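The MIT system relies on far subtler signal processing, but the core intuition, that frame-to-frame intensity changes too small for the eye can still be measured, can be sketched with simple frame differencing (a toy illustration, not the researchers’ actual method):

```python
def motion_score(prev_frame, curr_frame):
    """Mean absolute pixel difference between two grayscale frames,
    given as lists of rows of intensity values. A small but consistent
    score across frames hints at movement the eye would miss."""
    total = 0
    count = 0
    for row_prev, row_curr in zip(prev_frame, curr_frame):
        for a, b in zip(row_prev, row_curr):
            total += abs(a - b)
            count += 1
    return total / count

still = [[100, 100], [100, 100]]
moved = [[100, 104], [98, 100]]  # faint reflected motion on the floor
```

A static patch of floor scores zero against itself; a faint reflection from around the corner nudges a few pixel values and lifts the score just above it.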

Within each of these robots is the next secret ingredient: the actuator, which is a fancy word for the combo electric motor and gearbox that you’ll find in a robot’s joint. It’s this actuator that determines how strong a robot is and how smoothly or not smoothly it moves. Without actuators, robots would crumple like rag dolls. Even relatively simple robots like Roombas owe their existence to actuators. Self-driving cars, too, are loaded with the things.
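The basic gearing arithmetic explains why the actuator determines a robot’s strength: the gearbox trades speed for torque. A minimal sketch, assuming an idealized N:1 reduction with a single efficiency factor standing in for friction losses:

```python
def joint_torque(motor_torque_nm, gear_ratio, efficiency=0.9):
    """Torque available at the joint after an N:1 reduction gearbox:
    the gearbox multiplies torque by the ratio, minus friction losses
    captured by the (assumed) efficiency factor."""
    return motor_torque_nm * gear_ratio * efficiency

def joint_speed(motor_rpm, gear_ratio):
    """The trade-off: output speed drops by the same ratio that torque rises."""
    return motor_rpm / gear_ratio

# A small 0.5 N*m motor behind a 100:1 reduction yields a strong, slow joint.
```

This is why factory arms pair modest electric motors with large reductions: plenty of lifting force, at the cost of the fast, compliant motion that soft robotics tries to recover.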

Actuators are great for powering massive robot arms on a car assembly line, but a newish field, known as soft robotics, is devoted to creating actuators that operate on a whole new level. Unlike mule robots, soft robots are generally squishy, and use air or oil to get themselves moving. So for instance, one particular kind of robot muscle uses electrodes to squeeze a pouch of oil, expanding and contracting to tug on weights. Unlike with bulky traditional actuators, you could stack a bunch of these to magnify the strength: A robot named Kengoro, for instance, moves with 116 actuators that tug on cables, allowing the machine to do unsettlingly human maneuvers like pushups. It’s a far more natural-looking form of movement than what you’d get with traditional electric motors housed in the joints.

And then there’s Boston Dynamics, which created the Atlas humanoid robot for the Darpa Robotics Challenge in 2013. At first, university robotics research teams struggled to get the machine to tackle the basic tasks of the original 2013 challenge and the finals round in 2015, like turning valves and opening doors. But Boston Dynamics has since turned Atlas into a marvel that can do backflips, far outpacing other bipeds that still have a hard time walking. (Unlike the Terminator, though, it does not pack heat.) Boston Dynamics has also begun leasing a quadruped robot called Spot, which can recover in unsettling fashion when humans kick or tug on it. That kind of stability will be key if we want to build a world where we don’t spend all our time helping robots out of jams. And it’s all thanks to the humble actuator.

At the same time that robots like Atlas and Spot are getting more physically robust, they’re getting smarter, thanks to AI. Robotics seems to be reaching an inflection point, where processing power and artificial intelligence are combining to truly ensmarten the machines. And for the machines, just as in humans, the senses and intelligence are inseparable—if you pick up a fake apple and don’t realize it’s plastic before shoving it in your mouth, you’re not very smart.

This is a fascinating frontier in robotics (replicating the sense of touch, not eating fake apples). A company called SynTouch, for instance, has developed robotic fingertips that can detect a range of sensations, from temperature to coarseness. Another robot fingertip from Columbia University replicates touch with light, so in a sense it sees touch: It’s embedded with 32 photodiodes and 30 LEDs, overlaid with a skin of silicone. When that skin is deformed, the photodiodes detect how light from the LEDs changes to pinpoint where exactly you touched the fingertip, and how hard.
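Columbia’s fingertip maps its photodiode readings to a contact location through calibration, and the real pipeline is more involved than this; but a naive version of the idea, weighting each photodiode’s (hypothetical) position by how much its light level changed, can be sketched as a weighted centroid:

```python
def estimate_touch(diode_positions, baseline, reading):
    """Naive touch localization for a light-based fingertip: weight each
    photodiode's position by how much its light level changed when the
    skin deformed, then return the centroid of those changes."""
    weights = [abs(r - b) for r, b in zip(reading, baseline)]
    total = sum(weights)
    if total == 0:
        return None  # skin undeformed: no touch detected
    x = sum(w * px for w, (px, _) in zip(weights, diode_positions)) / total
    y = sum(w * py for w, (_, py) in zip(weights, diode_positions)) / total
    return (x, y)

# Four hypothetical diodes at the corners of a unit-square fingertip patch:
diodes = [(0, 0), (1, 0), (0, 1), (1, 1)]
baseline = [10, 10, 10, 10]
reading = [10, 12, 12, 18]  # a press near the (1, 1) corner
```

The sum of the weights also gives a crude proxy for how hard the press was, which is the “how hard” half of what the Columbia sensor reports.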

Is the Digital Age Over and, If So, What “Age” are We in Now?

In speaking with colleagues recently, the question came up as to whether we are now past the Digital Age and, if we are, what we would call this next wave.

Some robust discussion and debate followed. One perspective is that we haven’t even scratched the surface of the Digital Age. With the Internet of Things (IoT), advancements in mobile still unrealized, eye-sensor technology that merchants can now place on shelves to gauge how long a consumer looks at a product, the virtual dressing room, and wearable and gesture-based technologies coming of age, it would appear we still have a long way to go before declaring an end to the Digital Age and the dawn of the next one.

To me, the personal computer followed by the internet ushered in the “Information Age.” For the past two decades or so, we’ve seen amazing advancements based upon these technologies. But today, computers and the internet are more or less improving at the fringes (faster computers, more memory, and faster, more broadly accessible internet). The really exciting stuff, in my opinion, is the interpretation of all of the data and information being generated by computers, mobile devices, wearable technology, beacons/sensors, and so on. If we are moving beyond the Digital Age, and I am not suggesting that we definitively are, then perhaps what we are moving into is the Age of Insights. In short, it is all about the data!

One can argue, for example, that companies like Nike, in divesting themselves of the Fuelband (at least the hardware/wearable side of the market), are keen to focus on the data that can be captured via these digital channels. Harnessing big data and leveraging modern analytics methodologies across all of the federated and omni-channel arenas allows companies to synthesize this vast information and gain customer insights previously unimaginable. And to be clear, the omni-channel origin of the data is no longer what matters; the analysis of, and learnings from, such data will be the true game changer that places one company ahead of its competition.

Never before have individuals carried around devices such as mobile phones or connected tablet devices nearly 24/7. And never before have we had beacon/sensor or high-resolution camera technologies that can capture location-based data on countless people across vast geographies. Never before have we had connected devices of all sorts uploading data and/or communicating machine to machine as we do today. In fact, today’s reality is practically last generation’s science fiction. And the pace of such advancements is only increasing. One only can imagine where this will all be in just a mere 5 to 10 more years.

Nina Nets It Out: The Digital Age has given rise to many technological improvements. But there is a shift underway which prioritizes data over technology. The Age of Insights is upon us – and this new future will require new forms of knowledge, understanding and, most of all, leadership.

The changing nature of careers

Let’s examine what a “career” really is. The traditional idea of a career has three components:

  • A career represents our expertise, our profession, and ultimately our identity. It defines who we are and what we do. This form of self-identity makes changing careers dauntingly difficult: What if we switch careers and fail? Then who are we?
  • A career is something that builds over time and endures. It gives us the opportunity to progress, advance, and continuously feel proud. When we are asked to change our career or path, what happens to all we have learned? Do we throw it all away? Or can we carry it forward?
  • A career gives us financial and psychological rewards. It makes life meaningful, gives us purpose, and pays us enough to live well. What happens if our career suddenly becomes less valuable, even if we still enjoy it? Should we continue to make less money or jump to a new path?

The changing world of work has disrupted all three elements: expertise, duration, and rewards. And as scary as this may be for employees trying to stay ahead, it’s equally disruptive for employers who must try to hire and develop the workforce of today, tomorrow, and five years from now.

Expertise has an ever-shorter shelf life

It used to be that only certain types of jobs—think of computer programmers and IT troubleshooters—needed constant training and upskilling. Now, all of us are expected to continuously learn new skills, new tools, and new systems. Just as COBOL programmers had to learn C++ and Java, administrative assistants have switched from typewriters and dictation machines to PCs and voice memos, assembly-line workers have had to learn to operate robots, and designers have moved from sketchpads and clay models to touchscreens and 3D printing.

In technical fields, there is constant pressure to master new technologies or risk becoming instantly obsolete. One of our clients anonymously surveyed its IT department about what skills people wanted to learn, and more than 80 percent said they were desperate to learn tools such as AngularJS (an open-source JavaScript framework), even though the company was not yet using the technology. 8

Today even experts find themselves disrupted. Few professions today are hotter than that of a software engineer . . . and yet many foresee automation taking over the work of coding in the near future. 9 Artificial intelligence is doing the rote work of lawyers, 10 simplifying the work of doctors, 11 and changing skilled jobs from truck driver to financial analyst. As we describe later, it’s important for each one of us to learn new tools, adapt our skills, and become more multidisciplinary in our expertise.

What this means to employers is simple: Your employees are constantly feeling a need to “keep up.” Millennials, for example, rate “learning and development opportunities” as the number-one driver of a “good job.” 12 Managers should give people time, opportunity, and coaching to progress; if they don’t, people often just look elsewhere.

The idea of a single, long-lasting career is becoming a thing of the past

Remember the 30-year “lifelong career” that companies promoted during the last century? Well, today only 19 percent of companies still have traditional functional career models. 13 Why have so many organizations let multi-decade career models fade away?

First, business structures have changed. The iconic industrial companies of the early 1900s (steel, automobile, energy, and manufacturing) have outsourced to smaller firms many of their business processes and sales channels, as well as various parts of their value chain. The result has been a steady increase in innovation and profitability, but a dramatic decay in the security of a “company man” career. 14

When I entered the workforce in 1978 as a fresh engineering graduate from Cornell, I remember dozens of big companies looking for young engineers to train for lifetime careers, each offering job rotation, heavy amounts of training, and seemingly lifelong employment. I actually joined one of these companies—IBM—only to find my career options altered entirely when management launched a massive turnaround. (I decided to move to a smaller, faster-growing company.)

Similar stories can be told in automobile, manufacturing, financial services, retail, hospitality, and many other industries. In 1970, the 25 biggest American corporations employed the equivalent of over 10 percent of the private labor force. 15 Today, many of the largest US employers by number are retailers, 16 and the retail industry alone accounts for more than 10 percent of US employment. 17 In the current economic recovery, the fastest-growing segment of work has been health care, including small and large hospitals, eldercare providers, and various types of personal-care work. 18 However excellent these employers might be, their primary workforce is mid-level labor—service and delivery roles that neither pay as well nor offer the long-term “career professional” advancement that large companies once routinely offered.

This has created opportunities for some workers but has left others behind their parents at the same age. One study found that workers who entered the labor force in the 1980s and 1990s were more than twice as likely to stay in low-wage, dead-end jobs over the next decade compared with similar employees who joined the workforce in the late 1960s and early 1970s (at the high point of the corporate economy). 19 Part of the reason: Big corporations have outsourced many specialized (and highly paid) tasks, which can make it harder to “move up” in socioeconomic status.

Driven by opportunism (why stay at a company where advancement opportunities are limited?) and necessity (what else can you do when your job is outsourced?), the practice of switching jobs and companies grew more common, until job-hopping became the norm. People my age, for instance, typically worked for four to five companies during their working lifetime. Today, a college graduate may work for as many companies in their first 10 years after graduation. 20

The longevity dividend: Planning for a longer horizon

There’s a happy reason for some of the anxiety about unsettled career paths: Human beings—in most countries, that is—are living longer than ever. 21 While babies born in 1900 rarely lived past the age of 50, in most countries the life expectancy of babies born today exceeds 70, and research suggests that Millennials will reach an average age of 90. 22

Governments, anticipating a flood of retirement benefit payouts, are responding by looking to push back workers’ standard retirement age. 23 And indeed, with unions in decline and much more rapid job mobility, fewer workers—even in labor-intensive roles—are able to retire after 30 years, forcing people to work longer. 24 This means that young people should expect careers spanning half a century or longer; schools and employers should help prepare and guide people through working lives in which they learn, work, learn, work, and cycle through career stages many times.

I recently met with the senior executive team of a revered, century-old manufacturer that enjoys tremendously high employee retention. As we discussed these issues, the executives decided that they were going to redesign their career strategy around employees working longer—actively encouraging and supporting workers’ efforts to continuously reinvent themselves. 25

Top 12 Examples of How Technology Has Changed Our Lives

Technology has changed our lives by speeding up the pace of daily life. We invented and developed technologies to change our lives for the better, and now that technology is changing our lives every second.

Robots are becoming the new model for humans, and it can feel as if, in the end, robots will control this world. Technology is now trying to get inside our bodies; it’s almost there, and the targets are human blood and emotions. So far, technology has succeeded at that.

In business, artificial intelligence, cloud computing, machine learning, predictive analytics, and business intelligence tools and applications are creating new methods to conduct, operate, and manage a business. See also: Uses of Artificial Intelligence in Daily Life.

The rise of cloud computing, cloud storage, artificial intelligence, and machine learning suggests that we will soon reach the point where devices connect to our bodies and capture data about human activities in real time.

The invention and development of technology have changed our lives both positively and negatively. New technologies and inventions are the result of our curiosity, creativity, and problem-solving.

What else would we do on this earth if not improve ourselves every day? But it’s important that technological development be environmentally and human-friendly. Technology should be a flower for life, not a productivity killer.

We’re starting to look like robotic humans, and that is the biggest example of how technology has changed our lives, both positively and negatively.

Technology is in the air, water, food, education, business, offices, electricity, marketing, data storage, communication, cars, parking, travel, shopping, and banking. It’s almost everywhere, and in everything involved in our daily lives.

Technology is the king and the human is a servant. But a servant who is clever and knowledgeable enough can manipulate the king.

Think about what goes through your mind when you get up in the morning: How do I improve my writing? How do I market products? How do I earn more money, live the life I want, achieve my goals and dreams, and make this world better? These are the things technologies help us with.

And there are thousands of businesses and people on the internet ready to help you; you only have to search. They are selling you “change” on the internet and everywhere else. Almost every company talks about change in its advertisements, and they use technologies to target you.

For example: How to change the world by following 7 habits. How to change your life in 30 days. How to get 1,000 visitors to your business website by buying this or that. What is all of this? It is the impact of technology on our thinking, communication, habits, and social activities. We depend on technologies to help us, and they really do help us, in good ways and bad.

Let’s look at how technology has changed our lives, point by point:

1. Technology has changed education:-

Technology has changed how we teach and learn. In the past, we couldn’t get data, information, and knowledge so quickly or so flexibly. School was often far from home, and what we learned was not always engaging.

But today, because of technology, there are online schools. Anyone can earn a degree online using the internet and a computer, and there are online courses of every kind, for everyone. If you’re interested, learn more: Online Basic Computer Courses – Learn Essential Computer Skills

This is how technology has changed education, and it is a positive change.

With machine learning, it’s quite possible that robots and machines will start contributing to training people.

2. Technology changed the ways of communication

Today we have mobile phones, the internet, computers, social media, video-conferencing tools, and mobile apps to communicate with anyone around the world; that was not possible in the past. The benefit of this change is that communication is now fast, easy, and cheap.

In the past, a letter could take ten days to reach its destination, and money orders, greeting cards, personal letters, and countless other forms of communication were just as slow.

Technology changed that: now you can send an email or transfer money from your mobile phone, and the pace of change is non-stop.

See the dedicated article: How Technology Affects Communication

3. New kinds of habits and digital addictions:-

With technological change have come new kinds of habits, and it’s tough for parents to deal with the resulting problems because they don’t know the solutions. Most kids and teens are addicted to the internet; for them, technology is a toy. That can be a good thing, but what about their creativity and brain development?

Is artificial intelligence the new tool of creativity? I don’t think so. In 20 years, the internet will be boring to them, and they will use robots for their work the way we use the internet or Google today.

4. Lifestyle changes that happened after the use of technology:-

Technology has both positive and negative impacts on our daily lives. Today we live more appearance-driven lives; the craze for taking selfies in risky places, and its dangers, are well known. We shop online, where there is endless variety and there are price-comparison tools.

The Internet of Things puts technology into objects anyone can use in daily life. Yet we are busier than we are productive. Thirty years ago, people had time for friends and family; they lived and enjoyed their lives in real time, and they were emotional and cared about nature and humanity.

Today we do the same things on social media, but without the emotion. That is a technological effect, and we’re responsible for this change. Whether it’s good or bad depends on the users and how they use it.

5. Technology has changed our health:-

Technology has increased the pace of our lives, but the quality has gone down. Technology has impacted our lives both positively and negatively. Today we have more health-care technologies than in the past, but part of the reason we need them is the overuse of technology in daily life.

In earlier times, people had less electric equipment for housework and fieldwork, and their physical stamina was better than that of today’s fitness freaks.

They lived long lives without technology, but today, surrounded by it, the average lifespan seems to be going down. We’re greedy: we want more, faster, with less effort.

That’s why business finds it easier to sell air purifiers and mineral water than to inspire people to plant trees and restore natural resources. And even with all the technological development in medical science, in some cases doctors still can’t say with confidence that a patient’s life is safe after an operation.

6. Our critical thinking skills are almost dead:-

Today most people do not invent; they do business to solve people’s problems. They can sell anything, and every human being is a targeted customer. They collect data on what you search, where you click, what you buy, and how you react to a copywriter’s text. This affects our lives, because they create products based on our search results and technology habits.

Everyone searches; everyone wants the easy explanation. People don’t have time to think, so they search. Why think, when search engines even offer instant suggestions to search for this or that?

We’re losing the ability to think critically, and that’s why we’re inventing artificial intelligence for the next generation. The positive is that we can get knowledge, information, and data by using technology. The negative is that too much data, an overload of information, and the overuse of technological equipment are making us addicted to our tech tools.

We can’t easily analyze or understand anything without seeing visual examples. That is how I think technology is affecting our critical thinking.

In the old days (my childhood), there were no tuition classes; everyone read and learned using their own brainpower. Today, even kids admitted to the best schools need tutoring and extra classes. Why? Maybe the subjects and syllabus are more advanced, and that’s why they need technologies in the classroom.

It wasn’t like this in the past. People lived with less tension than today’s humans. Today’s robotic humans are more stressed and struggle with analysis paralysis.

Today, people overthink and over-analyze everything, because we’re living riskier lives. We’re racing to fill the store of our greed, and technology is both the tool and the root.

7. Technology is changing business processes and systems:-

The latest information technologies are changing our work and the way we do business. Soon almost everything will be automated. New business models are technology-based; I don’t think there is a single business in the world that isn’t using technology.

People are thinking more about passive income and multiple income streams with less human effort. That’s why cloud computing, marketing automation, cloud storage, hybrid cars, robots, and the like will very soon start changing this world again.

The reason is more profit in less time, without humans; everyone is in a hurry to finish the journey before their competitors.

But then the next problem will come. Millions of people around the world already face unemployment, and with the adoption of these new technologies, the unemployment problem will grow even more.

That’s why the new generation, especially people between 18 and 30, need practical computer science or technology-related knowledge to get a job after their degree.

But what is the solution then? It’s simple: someone among us will create the next technology for them. The process continues; I don’t know when or where it ends.

Another article for business owners and new entrepreneurs: Online Business Courses – Learn Essential Business Skills and Get Inspiration

8. Cloud computing and cloud storage are another change happening right now:-

You probably know about this already. You no longer have to store your personal and business data on a hard drive, pen drive, or DVDs; you can upload it to the cloud and access it anywhere, anytime.

Think of Google Drive and OneDrive. You don’t need to buy a business application and spend time installing it on your computer and server; it’s already in the cloud, where you and your employees can use it anytime, from anywhere.

But why cloud computing? Because we want to earn money with less effort, and business owners want to offload day-to-day maintenance problems in the company. That’s why it’s now growing so fast in the internet market.

You can learn more here: Benefits of Cloud Computing for Business.

9. The negative and positive impact of technology on workplace productivity:-

It’s clear that technology affects our work habits. We want productivity, and that’s great for business.

Technology has changed daily business. You can measure the quality of your products and services, speed up production, and reduce marketing costs using online marketing-automation applications. You can sell products on the internet and receive money directly through the bank. You can research the market, analyze your competitors, and learn about new technological equipment.

Such technological changes in business reduce production costs and increase profit. Using communication technology, you can hold video conferences, chats, and online meetings.

You can apply the productivity examples above in the workplace to decrease cost, time, and labor while increasing product quality, so you can compete with your rivals on quality, speed, and price.

Technology only hurts business productivity when it isn’t used correctly.

For example, if you or your employees spend office time watching YouTube videos and browsing Facebook, productivity will drop at some point. And if you have more IT infrastructure than you need, or low-quality products, it will cost you time.

Another downside is that these new technologies are advanced and highly specialized. If you don’t have the budget or knowledge, and don’t know how to implement a particular technology in your business, it will be risky.

But in general, I don’t think there is anything negative about using technology in the workplace. The best technologies, in the hands of technically sound staff, bring no productivity loss.

10. Technology has changed our behavior and amplified human greed:-

We are losing our patience; our behavior changes in a second. For example, notice the look on your face when the internet is slow, or your reaction when someone is late to reply.

Technology swings our patience from high to low and back within seconds. As I explained above, people in the old days had more patience than we do today. We behave badly over small things, for example toward people who are less technical or don’t have an expensive phone or money.

We change our minds after seeing likes and comments. We stop doing our most important work when a new notification or message arrives. That’s how technology now affects us.

We search for friends on the internet but know nothing about our neighbors and their problems. We share thousands of motivational and life quotes online, but no one takes care of animals, plants, water, and nature.

Yes, we can buy mineral water. But why buy what should be free? Who cares about natural resources? Is there any government, in India or any other country, spending most of its budget on natural resources?

Not many; you can count them on your fingers. They spend on industrialization instead. Do you know why? Because great minds believe industrialization reduces unemployment, or that it simply is development.

But the truth is that industrialization brings them more money than natural resources do. Working in such companies is a kind of servitude (gulami), and our institutions teach us how to become better servants. And you know the truth: after spending 4-5 lakh rupees on a degree, many of us can’t even get a salary of 10,000. This is the truth.

This may be among the reasons why engineers die by suicide, and why employees in IT companies are frustrated with their jobs.

I think government investment in nature would give a great return: clean water, less pollution, fresh air.

Our youth have the power to develop and invent technologies the world will follow for the right causes. But they need the freedom to choose their subjects, and they want practical teachers.

But who cares about talent? Our society is swayed by bhakts (blind devotees) on the internet, by mobile messages, and by technologies. The Election Commission is only an external examiner for politicians, so who cares if criminals are sitting the exam (fighting elections)? What will society do in this changing world? I don’t know.

I think it would be great if society were well educated about technologies. In our society, every third person you meet knows about politics and can even criticize the PM and CM, but if you test their common sense, you get very poor results.

If our society and social leaders spend some time to learn and educate about the technologies to common people then our India will grow fast. After the knowledge of technologies, we can be number 1 in many things.

But our society is influenced by Politicians more than technologies. But soon, our society will be educated then we can hope our next generation will live in a developed country. I and you can help to educate society about such technologies.

I have more to say but let it be, I know you’re not interested! And we can’t find a solution here. But implementing technologies in political structures can be a solution especially in India.

We‘re now more addicted to technologies, not a productive use. We don’t use technology based on our work, home, business, and job duties. We use whenever, whatever it is, no matter it is doing loss.

Such above changes are greed or human behaviors and technology speed up the process of corruption, data privacy and security to solve it, use it and to improve it.

Technology is always depending on the user and it will be. I hope and believe that our computer scientist or scientist from various fields created the control button of technologies. Else, you can imagine the future.

11. Technology is influencing youth:-

Youth depend totally on technology for everything, even at school. Small pieces of information and practical knowledge now come not from teachers in the classroom but from the internet.

Technology is making children grow up faster. They are learning about and watching things that were impossible to watch 25 years ago. This is how technology affects our youth negatively.

Our youth are ready to fight on social media but can’t run down the road or even carry a bucket of water at home. On the internet, though, they are no less than an army. Is this positive or negative?

Youthful patriotism is on full display on the internet, while at the same time some of these people don’t care about saving water, a clean atmosphere, or girls’ safety.

There are also many positives to technology. It is technology that has increased career opportunities for the youth. Anyone with some sort of skill can start an online business, become a programmer or designer, and provide services remotely.

Technology has both positive and negative impacts, so it is really important for teachers in computer science colleges and schools to teach students productive and environment-friendly uses of technology.

12. Future is unpredictable for small business owners:-

Technology has changed the speed of time: it now updates every second. It’s tough for small business owners and entrepreneurs to run a business at this speed.

Technology has connected the world’s countries on one platform. It’s no hyperbole to say that technology has turned this world into a village, a village in which the most powerful rule the rest. Artificial intelligence, for example, has already changed this world, and in the future it will grow at double the speed. That makes it tough, but not impossible, for small business owners to become powerful and successful in business.

They only need the knowledge and the courage to implement their ideas with full dedication and hard work. Technology is not limited to the powerful; it is for common people too, if they know how to use it and have a tech guide.

Tech power is collected by big companies, which then steer the world wherever they want it to go.

But there are solutions:

The most powerful people in this world are those who have knowledge, wealth, and the support of the people. Anyone can become powerful by gaining knowledge, earning wealth through business, and helping people solve their problems. I think that is possible through the use of technology.

Technology is like fuel for big companies. They have ideas and knowledge, they create new tech solutions, they earn money by collecting data from internet users, and they hire great talent.

That’s why power is collected only by big companies. With it, they manufacture driverless cars, build robots, invest in cloud computing, and so on. They have the power to change the world; they can do anything.

But nothing is permanent, neither we nor our technologies. We are all human, but our thinking is shaped by technology. That’s why I call us robotic humans.

Don’t allow technology to control you; make technological decisions that help low-income people and have positive effects on the environment.

Sorry if anything in this article hurt you; I only mean to share my feelings about technology.

That’s it! That’s what I know about technology.


AI, Robotics, and the Future of Jobs

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.

We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet. (For more details, please see the section “About this Report and Survey.”)

Key themes: reasons to be hopeful

  1. Advances in technology may displace certain types of work, but historically they have been a net creator of jobs.
  2. We will adapt to these changes by inventing entirely new types of work, and by taking advantage of uniquely human capabilities.
  3. Technology will free us from day-to-day drudgery, and allow us to define our relationship with “work” in a more positive and socially beneficial way.
  4. Ultimately, we as a society control our own destiny through the choices we make.

Key themes: reasons to be concerned

  1. Automation has thus far impacted mostly blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well.
  2. Certain highly-skilled workers will succeed wildly in this new environment—but far more may be displaced into lower paying service industry jobs at best, or permanent unemployment at worst.
  3. Our educational system is not adequately preparing us for work of the future, and our political and economic institutions are poorly equipped to handle these hard choices.

Some 1,896 experts responded to the following question:

The economic impact of robotic advances and AI: Self-driving cars, intelligent digital agents that can act for you, and robots are advancing rapidly. Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?

Half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.

The other half of the experts who responded to this survey (52%) expect that technology will not displace more jobs than it creates by 2025. To be sure, this group anticipates that many jobs currently performed by humans will be substantially taken over by robots or digital agents by 2025. But they have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution.

These two groups also share certain hopes and concerns about the impact of technology on employment. For instance, many are concerned that our existing social structures—and especially our educational institutions—are not adequately preparing people for the skills that will be needed in the job market of the future. Conversely, others have hope that the coming changes will be an opportunity to reassess our society’s relationship to employment itself—by returning to a focus on small-scale or artisanal modes of production, or by giving people more time to spend on leisure, self-improvement, or time with loved ones.

A number of themes ran through the responses to this question: those that are unique to either group, and those that were mentioned by members of both groups.

The view from those who expect AI and robotics to have a positive or neutral impact on jobs by 2025

JP Rangaswami, chief scientist for Salesforce.com, offered a number of reasons for his belief that automation will not be a net displacer of jobs in the next decade: “The effects will be different in different economies (which themselves may look different from today’s political boundaries). Driven by revolutions in education and in technology, the very nature of work will have changed radically—but only in economies that have chosen to invest in education, technology, and related infrastructure. Some classes of jobs will be handed over to the ‘immigrants’ of AI and Robotics, but more will have been generated in creative and curating activities as demand for their services grows exponentially while barriers to entry continue to fall. For many classes of jobs, robots will continue to be poor labor substitutes.”

Rangaswami’s prediction incorporates a number of arguments made by those in this canvassing who took his side of this question.

Argument #1: Throughout history, technology has been a job creator—not a job destroyer

Vint Cerf, vice president and chief Internet evangelist for Google, said, “Historically, technology has created more jobs than it destroys and there is no reason to think otherwise in this case. Someone has to make and service all these advanced devices.”

Jonathan Grudin, principal researcher for Microsoft, concurred: “Technology will continue to disrupt jobs, but more jobs seem likely to be created. When the world population was a few hundred million people there were hundreds of millions of jobs. Although there have always been unemployed people, when we reached a few billion people there were billions of jobs. There is no shortage of things that need to be done and that will not change.”

Michael Kende, the economist for a major Internet-oriented nonprofit organization, wrote, “In general, every wave of automation and computerization has increased productivity without depressing employment, and there is no reason to think the same will not be true this time. In particular, the new wave is likely to increase our personal or professional productivity (e.g. self-driving car) but not necessarily directly displace a job (e.g. chauffeur). While robots may displace some manual jobs, the impact should not be different than previous waves of automation in factories and elsewhere. On the other hand, someone will have to code and build the new tools, which will also likely lead to a new wave of innovations and jobs.”

Fred Baker, Internet pioneer, longtime leader in the IETF and Cisco Systems Fellow, responded, “My observation of advances in automation has been that they change jobs, but they don’t reduce them. A car that can guide itself on a striped street has more difficulty with an unstriped street, for example, and any automated system can handle events that it is designed for, but not events (such as a child chasing a ball into a street) for which it is not designed. Yes, I expect a lot of change. I don’t think the human race can retire en masse by 2025.”

Argument #2: Advances in technology create new jobs and industries even as they displace some of the older ones

Ben Shneiderman, professor of computer science at the University of Maryland, wrote, “Robots and AI make compelling stories for journalists, but they are a false vision of the major economic changes. Journalists lost their jobs because of changes to advertising, professors are threatened by MOOCs, and store salespeople are losing jobs to Internet sales people. Improved user interfaces, electronic delivery (videos, music, etc.), and more self-reliant customers reduce job needs. At the same time someone is building new websites, managing corporate social media plans, creating new products, etc. Improved user interfaces, novel services, and fresh ideas will create more jobs.”

Amy Webb, CEO of strategy firm Webbmedia Group, wrote, “There is a general concern that the robots are taking over. I disagree that our emerging technologies will permanently displace most of the workforce, though I’d argue that jobs will shift into other sectors. Now more than ever, an army of talented coders is needed to help our technology advance. But we will still need folks to do packaging, assembly, sales, and outreach. The collar of the future is a hoodie.”

John Markoff, senior writer for the Science section of the New York Times, responded, “You didn’t allow the answer that I feel strongly is accurate—too hard to predict. There will be a vast displacement of labor over the next decade. That is true. But, if we had gone back 15 years who would have thought that ‘search engine optimization’ would be a significant job category?”

Marjory Blumenthal, a science and technology policy analyst, wrote, “In a given context, automated devices like robots may displace more than they create. But they also generate new categories of work, giving rise to second- and third-order effects. Also, there is likely to be more human-robot collaboration—a change in the kind of work opportunities available. The wider impacts are the hardest to predict; they may not be strictly attributable to the uses of automation but they are related…what the middle of the 20th century shows us is how dramatic major economic changes are—like the 1970s OPEC-driven increases of the price of oil—and how those changes can dwarf the effects of technology.”

Argument #3: There are certain jobs that only humans have the capacity to do

A number of respondents argued that many jobs require uniquely human characteristics such as empathy, creativity, judgment, or critical thinking—and that jobs of this nature will never succumb to widespread automation.

David Hughes, a retired U.S. Army Colonel who, from 1972, was a pioneer in individual to/from digital telecommunications, responded, “For all the automation and AI, I think the ‘human hand’ will have to be involved on a large scale. Just as aircraft have to have pilots and copilots, I don’t think all ‘self-driving’ cars will be totally unmanned. The human’s ability to detect unexpected circumstances, and take action overriding automatic driving, will be needed as long as individually owned ‘cars’ are on the road.”

Pamela Rutledge, PhD and director of the Media Psychology Research Center, responded, “There will be many things that machines can’t do, such as services that require thinking, creativity, synthesizing, problem-solving, and innovating…Advances in AI and robotics allow people to cognitively offload repetitive tasks and invest their attention and energy in things where humans can make a difference. We already have cars that talk to us, a phone we can talk to, robots that lift the elderly out of bed, and apps that remind us to call Mom. An app can dial Mom’s number and even send flowers, but an app can’t do that most human of all things: emotionally connect with her.”

Michael Glassman, associate professor at the Ohio State University, wrote, “I think AI will do a few more things, but people are going to be surprised how limited it is. There will be greater differentiation between what AI does and what humans do, but also much more realization that AI will not be able to engage the critical tasks that humans do.”

Argument #4: The technology will not advance enough in the next decade to substantially impact the job market

Another group of experts feels that the impact on employment is likely to be minimal for the simple reason that 10 years is too short a timeframe for automation to move substantially beyond the factory floor. David Clark, a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, noted, “The larger trend to consider is the penetration of automation into service jobs. This trend will require new skills for the service industry, which may challenge some of the lower-tier workers, but in 12 years I do not think autonomous devices will be truly autonomous. I think they will allow us to deliver a higher level of service with the same level of human involvement.”

Jari Arkko, Internet expert for Ericsson and chair of the Internet Engineering Task Force, wrote, “There is no doubt that these technologies affect the types of jobs that need to be done. But there are only 12 years to 2025; some of these technologies will take a long time to deploy in significant scale… We’ve been living a relatively slow but certain progress in these fields from the 1960s.”

Christopher Wilkinson, a retired European Union official, board member, and Internet Society leader, said, “The vast majority of the population will be untouched by these technologies for the foreseeable future. AI and robotics will be a niche, with a few leading applications such as banking, retailing, and transport. The risks of error and the imputation of liability remain major constraints to the application of these technologies to the ordinary landscape.”

Argument #5: Our social, legal, and regulatory structures will minimize the impact on employment

A final group suspects that economic, political, and social concerns will prevent the widespread displacement of jobs. Glenn Edens, a director of research in networking, security, and distributed systems within the Computer Science Laboratory at PARC, a Xerox Company, wrote, “There are significant technical and policy issues yet to resolve, however there is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI then there is a risk that workers will be displaced. Ultimately we need a broad and large base of employed population, otherwise there will be no one to pay for all of this new world.”

Andrew Rens, chief counsel at the Shuttleworth Foundation, wrote, “A fundamental insight of economics is that an entrepreneur will only supply goods or services if there is a demand, and those who demand the good can pay. Therefore any country that wants a competitive economy will ensure that most of its citizens are employed so that in turn they can pay for goods and services. If a country doesn’t ensure employment driven demand it will become increasingly less competitive.”

Geoff Livingston, author and president of Tenacity5 Media, wrote, “I see the movement towards AI and robotics as evolutionary, in large part because it is such a sociological leap. The technology may be ready, but we are not—at least, not yet.”

The view from those who expect AI and robotics to displace more jobs than they create by 2025

An equally large group of experts takes a diametrically opposed view of technology’s impact on employment. In their reading of history, job displacement as a result of technological advancement is clearly in evidence today, and can only be expected to get worse as automation comes to the white-collar world.

Argument #1: Displacement of workers from automation is already happening—and about to get much worse

Jerry Michalski, founder of REX, the Relationship Economy eXpedition, sees the logic of the slow and unrelenting movement in the direction of more automation: “Automation is Voldemort: the terrifying force nobody is willing to name. Oh sure, we talk about it now and then, but usually in passing. We hardly dwell on the fact that someone trying to pick a career path that is not likely to be automated will have a very hard time making that choice. X-ray technician? Outsourced already, and automation in progress. The race between automation and human work is won by automation, and as long as we need fiat currency to pay the rent/mortgage, humans will fall out of the system in droves as this shift takes place…The safe zones are services that require local human effort (gardening, painting, babysitting), distant human effort (editing, coaching, coordinating), and high-level thinking/relationship building. Everything else falls in the target-rich environment of automation.”

Mike Roberts, Internet pioneer and Hall of Fame member and longtime leader with ICANN and the Internet Society, shares this view: “Electronic human avatars with substantial work capability are years, not decades away. The situation is exacerbated by total failure of the economics community to address to any serious degree sustainability issues that are destroying the modern ‘consumerist’ model and undermining the early 20th century notion of ‘a fair day’s pay for a fair day’s work.’ There is great pain down the road for everyone as new realities are addressed. The only question is how soon.”

Robert Cannon, Internet law and policy expert, predicts, “Everything that can be automated will be automated. Non-skilled jobs lacking in ‘human contribution’ will be replaced by automation when the economics are favorable. At the hardware store, the guy who used to cut keys has been replaced by a robot. In the law office, the clerks who used to prepare discovery have been replaced by software. IBM Watson is replacing researchers by reading every report ever written anywhere. This begs the question: What can the human contribute? The short answer is that if the job is one where that question cannot be answered positively, that job is not likely to exist.”

Tom Standage, digital editor for The Economist, makes the point that the next wave of technology is likely to have a more profound impact than those that came before it: “Previous technological revolutions happened much more slowly, so people had longer to retrain, and [also] moved people from one kind of unskilled work to another. Robots and AI threaten to make even some kinds of skilled work obsolete (e.g., legal clerks). This will displace people into service roles, and the income gap between skilled workers whose jobs cannot be automated and everyone else will widen. This is a recipe for instability.”

Mark Nall, a program manager for NASA, noted, “Unlike previous disruptions such as when farming machinery displaced farm workers but created factory jobs making the machines, robotics and AI are different. Due to their versatility and growing capabilities, not just a few economic sectors will be affected, but whole swaths will be. This is already being seen now in areas from robocalls to lights-out manufacturing. Economic efficiency will be the driver. The social consequence is that good-paying jobs will be increasingly scarce.”

Argument #2: The consequences for income inequality will be profound

For those who expect AI and robotics to significantly displace human employment, these displacements seem certain to lead to an increase in income inequality, a continued hollowing out of the middle class, and even riots, social unrest, and/or the creation of a permanent, unemployable “underclass”.

Justin Reich, a fellow at Harvard University’s Berkman Center for Internet & Society, said, “Robots and AI will increasingly replace routine kinds of work—even the complex routines performed by artisans, factory workers, lawyers, and accountants. There will be a labor market in the service sector for non-routine tasks that can be performed interchangeably by just about anyone—and these will not pay a living wage—and there will be some new opportunities created for complex non-routine work, but the gains at this top of the labor market will not be offset by losses in the middle and gains of terrible jobs at the bottom. I’m not sure that jobs will disappear altogether, though that seems possible, but the jobs that are left will be lower paying and less secure than those that exist now. The middle is moving to the bottom.”

Stowe Boyd, lead researcher at GigaOM Research, said, “As just one aspect of the rise of robots and AI, widespread use of autonomous cars and trucks will be the immediate end of taxi drivers and truck drivers (truck driver is the number-one occupation for men in the U.S.). Just as importantly, autonomous cars will radically decrease car ownership, which will impact the automotive industry. Perhaps 70% of cars in urban areas would go away. Autonomous robots and systems could impact up to 50% of jobs, according to recent analysis by Frey and Osborne at Oxford, leaving only jobs that require the ‘application of heuristics’ or creativity…An increasing proportion of the world’s population will be outside of the world of work—either living on the dole, or benefiting from the dramatically decreased costs of goods to eke out a subsistence lifestyle. The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?”

Nilofer Merchant, author of a book on new forms of advantage, wrote, “Just today, the guy who drives the service car I take to go to the airport [said that he] does this job because his last blue-collar job disappeared from automation. Driverless cars displace him. Where does he go? What does he do for society? The gaps between the haves and have-nots will grow larger. I’m reminded of the line from Henry Ford, who understood he does no good to his business if his own people can’t afford to buy the car.”

Alex Howard, a writer and editor based in Washington, D.C., said, “I expect that automation and AI will have had a substantial impact on white-collar jobs, particularly back-office functions in clinics, in law firms, like medical secretaries, transcriptionists, or paralegals. Governments will have to collaborate effectively with technology companies and academic institutions to provide massive retraining efforts over the next decade to prevent massive social disruption from these changes.”

Point of agreement: The educational system is doing a poor job of preparing the next generation of workers

A consistent theme among both groups is that our existing social institutions—especially the educational system—are not up to the challenge of preparing workers for the technology- and robotics-centric nature of employment in the future.

Howard Rheingold, a pioneering Internet sociologist and self-employed writer, consultant, and educator, noted, “The jobs that the robots will leave for humans will be those that require thought and knowledge. In other words, only the best-educated humans will compete with machines. And education systems in the U.S. and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorize what is told to them, preparing them for life in a 20th century factory.”

Bryan Alexander, technology consultant, futurist, and senior fellow at the National Institute for Technology in Liberal Education, wrote, “The education system is not well positioned to transform itself to help shape graduates who can ‘race against the machines.’ Not in time, and not at scale. Autodidacts will do well, as they always have done, but the broad masses of people are being prepared for the wrong economy.”

Point of agreement: The concept of “work” may change significantly in the coming decade

On a more hopeful note, a number of experts expressed a belief that the coming changes will allow us to renegotiate the existing social compact around work and employment.

Possibility #1: We will experience less drudgery and more leisure time

Hal Varian, chief economist for Google, envisions a future with fewer ‘jobs’ but a more equitable distribution of labor and leisure time. “If ‘displace more jobs’ means ‘eliminate dull, repetitive, and unpleasant work,’ the answer would be yes. How unhappy are you that your dishwasher has replaced washing dishes by hand, your washing machine has displaced washing clothes by hand, or your vacuum cleaner has replaced hand cleaning? My guess is this ‘job displacement’ has been very welcome, as will the ‘job displacement’ that will occur over the next 10 years. The work week has fallen from 70 hours a week to about 37 hours now, and I expect that it will continue to fall. This is a good thing. Everyone wants more jobs and less work. Robots of various forms will result in less work, but the conventional work week will decrease, so there will be the same number of jobs (adjusted for demographics, of course). This is what has been going on for the last 300 years so I see no reason that it will stop in the decade.”

Tiffany Shlain, filmmaker, host of the AOL series The Future Starts Here, and founder of The Webby Awards, responded, “Robots that collaborate with humans over the cloud will be in full realization by 2025. Robots will assist humans in tasks thus allowing humans to use their intelligence in new ways, freeing us up from menial tasks.”

Francois-Dominique Armingaud, retired computer software engineer from IBM and now giving security courses to major engineering schools, responded, “The main purpose of progress now is to allow people to spend more life with their loved ones instead of spoiling it with overtime while others are struggling in order to access work.”

Possibility #2: It will free us from the industrial age notion of what a “job” is

A notable number of experts take it for granted that many of tomorrow’s jobs will be held by robots or digital agents—and express hope that this will inspire us as a society to completely redefine our notions of work and employment.

Peter and Trudy Johnson-Lenz, founders of the online community Awakening Technology, based in Portland, Oregon, wrote, “Many things need to be done to care for, teach, feed, and heal others that are difficult to monetize. If technologies replace people in some jobs and roles, what kinds of social support or safety nets will make it possible for them to contribute to the common good through other means? Think outside the job.”

Bob Frankston, an Internet pioneer and technology innovator whose work helped allow people to have control of the networking (internet) within their homes, wrote, “We’ll need to evolve the concept of a job as a means of wealth distribution as we did in response to the invention of the sewing machine displacing seamstressing as welfare.”

Jim Hendler, an architect of the evolution of the World Wide Web and professor of computer science at Rensselaer Polytechnic Institute, wrote, “The notion of work as a necessity for life cannot be sustained if the great bulk of manufacturing and such moves to machines—but humans will adapt by finding new models of payment as they did in the industrial revolution (after much upheaval).”

Tim Bray, an active participant in the IETF and technology industry veteran, wrote, “It seems inevitable to me that the proportion of the population that needs to engage in traditional full-time employment, in order to keep us fed, supplied, healthy, and safe, will decrease. I hope this leads to a humane restructuring of the general social contract around employment.”

Possibility #3: We will see a return to uniquely “human” forms of production

Another group of experts anticipates that pushback against expanding automation will lead to a revolution in small-scale, artisanal, and handmade modes of production.

Kevin Carson, a senior fellow at the Center for a Stateless Society and contributor to the P2P Foundation blog, wrote, “I believe the concept of ‘jobs’ and ‘employment’ will be far less meaningful, because the main direction of technological advance is toward cheap production tools (e.g., desktop information processing tools or open-source CNC garage machine tools) that undermine the material basis of the wage system. The real change will not be the stereotypical model of ‘technological unemployment,’ with robots displacing workers in the factories, but increased employment in small shops, increased project-based work on the construction industry model, and increased provisioning in the informal and household economies and production for gift, sharing, and barter.”

Tony Siesfeld, director of the Monitor Institute, wrote, “I anticipate that there will be a backlash and we’ll see a continued growth of artisanal products and small-scale [efforts], done myself or with a small group of others, that reject robotics and digital technology.”

A network scientist for BBN Technologies wrote, “To some degree, this is already happening. In terms of the large-scale, mass-produced economy, the utility of low-skill human workers is rapidly diminishing, as many blue-collar jobs (e.g., in manufacturing) and white-collar jobs (e.g., processing insurance paperwork) can be handled much more cheaply by automated systems. And we can already see some hints of reaction to this trend in the current economy: entrepreneurially-minded unemployed and underemployed people are taking advantage of sites like Etsy and TaskRabbit to market quintessentially human skills. And in response, there is increasing demand for ‘artisanal’ or ‘hand-crafted’ products that were made by a human. In the long run this trend will actually push toward the re-localization and re-humanization of the economy, with the 19th- and 20th-century economies of scale exploited where they make sense (cheap, identical, disposable goods), and human-oriented techniques (both older and newer) increasingly accounting for goods and services that are valuable, customized, or long-lasting.”

Point of agreement: Technology is not destiny … we control the future we will inhabit

In the end, a number of these experts took pains to note that none of these potential outcomes—from the most utopian to most dystopian—are etched in stone. Although technological advancement often seems to take on a mind of its own, humans are in control of the political, social, and economic systems that will ultimately determine whether the coming wave of technological change has a positive or negative impact on jobs and employment.

Seth Finkelstein, a programmer, consultant, and winner of the Electronic Frontier Foundation’s Pioneer Award, responded, “The technodeterminist-negative view, that automation means jobs loss, end of story, versus the technodeterminist-positive view, that more and better jobs will result, both seem to me to make the error of confusing potential outcomes with inevitability. Thus, a technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole… this is not a technological consequence; rather, it’s a political choice.”

Jason Pontin, editor in chief and publisher of the MIT Technology Review, responded, “There’s no economic law that says the jobs eliminated by new technologies will inevitably be replaced by new jobs in new markets… All of this is manageable by states and economies: but it will require wrestling with ideologically fraught solutions, such as a guaranteed minimum income, and a broadening of our social sense of what is valuable work.”

After Coronavirus the World Will Never Be the Same. But Maybe, It Can Be Better

Life has changed a lot in the past few days, weeks, or months, depending on where you live. As efforts to contain the novel coronavirus ramp up, it’s likely going to change even more. But we’re already sick of being at home all the time, we miss our friends and families, everything’s been canceled, the economy is tanking, and we feel anxious and scared about what’s ahead.

We just want this to be over, and we figure it’s only a matter of time. We’re making plans for what we’ll do when things go back to normal—and banking on that happening.

But what if life never fully goes back to how it was pre-coronavirus? What if this epidemic is a turning point, and after it the world is never the same?

More importantly—or, at least, more optimistically—what if the world could come out of this crisis better than it was before?

Jamie Metzl, technology and healthcare futurist, geopolitical expert, entrepreneur, author of Hacking Darwin: Genetic Engineering and the Future of Humanity, and Senior Fellow of the Atlantic Council, thinks this is possible—but it all depends on what we do and how we behave right now. In a talk at Singularity University’s virtual summit on COVID-19 last week, Metzl explained why he believes that we’re never going “back to normal”—and what we should be doing now to make the new normal a good one.

Marks of History

For many of us, the most impactful geopolitical event that’s happened during our lifetime was the terrorist attacks of September 11, 2001. The world changed that day, and it’s never returned to how it was before.

A flu-like pandemic with a relatively low mortality rate may seem minor compared to the deliberate murder of thousands of innocent people. But, Metzl said, “It’s my contention that this isn’t a 2001 moment, this is something much bigger. I think of this as a 1941 moment.”

1941 was the thick of World War II. Nobody knew what the outcome of the war was going to be, everybody was terrified, and the US and its allies were losing the war. “But even in the height of those darkest of times,” Metzl said, “people began imagining what the future world would look like.”

It was 1941 when President Roosevelt gave his famous Four Freedoms speech, and when American and British leadership issued the Atlantic Charter, which set out their vision for the post-war international order. To this day, our lives exist within that order.

The situation we’re in right now is, of course, different; it’s not a war. It is, in Metzl’s words, “a convergence of the worlds of science and biology and the world of geopolitics.” And as the coronavirus crisis continues to play out, its geopolitical implications are going to become much greater.

The Old World Is Dying

Metzl shared a quote from Italian Communist theorist Antonio Gramsci, written in the 1930s: “The old world is dying and the new world struggles to be born. Now is the time of monsters.”

Metzl deconstructed it. For starters, he said, the post-WWII order that we’ve all grown up with was dying before this virus appeared.

Post-WWII planners envisioned a world that shared sovereignty and curbed nationalism. But we’re now in a period of dramatic re-nationalization of the world, with populist, extremist, or authoritarian leaders in power from Brazil to the US to China, and many countries in between.

Institutions intended to foster global cooperation (like the World Bank, the International Monetary Fund, the United Nations, and the World Health Organization) have been starved in the context of this re-nationalization, and as a result we don’t have effective structures in place to address global crises—and not just coronavirus. Think of climate change, protecting the oceans, or preparing for a future of automation and AI: no country can independently take on or solve these massive challenges.

Not all is lost, though. “There are some positive pieces of this globalization story that we also need to be mindful of,” Metzl said.

When the Spanish flu pandemic hit in 1918, there were only 2 billion people on Earth, and of those 2 billion only 30 percent were literate; the “brain pool” for solving problems was about 600 million people.

Now we have a global population of 7.5 billion and an 86 percent literacy rate, which means over 6.5 billion people can be part of the effort to fix what’s broken. Just as crucially, we’re more connected to each other than we’ve ever been. It used to take thousands of years for knowledge to transfer; now it can fly across the world over the internet in minutes. “The pandemic moves at the speed of globalization, but so does the response,” Metzl said. “The tools we’re bringing to this fight are greater than anything our ancestors could have possibly imagined.”

But at the same time we’re experiencing this incredible bottom-up energy and connectivity, we’re also experiencing an abysmal failure of our top-down institutions.

Now Is the Time of Monsters

Have you felt afraid these last few days and weeks? I sure have. The stock market has plummeted, some people are losing their jobs, others are getting sick, and we don’t know the way out or how long it’s going to last. In the meantime, a lot of unexpected things will happen.

There will be an economic slowdown or recession, and there will be issues with our healthcare systems—and these are just the predictable things. Metzl believes we’ll also see significant second- and third-order effects. If the poorer parts of the world get hit hard by the virus, we may see fragile states collapsing, and multilateral institutions like the European Union unable to withstand the strain. “Our democracies are going to be challenged, and there may be soft coups even here in the US,” Metzl said. Speaking of challenges to democracy, there are actors whose desires and aspirations are very different from our own, and this could be a moment of opportunity for them.

“The world is not going to snap back to being exactly like it was before this crisis happened,” Metzl said. “We’re going to come out of this into a different world.”

The New World Struggles to Be Born

We don’t know exactly what that world will look like, but we can imagine some of it. Basically, take the trends that were already in motion and hit the fast-forward button. Virtualization of events, activities, and interactions. Automation of processes and services. Political and economic decentralization.

But for the pieces of the future that we’re unsure of, now is 1941. “Now is the time when we need to think about what we would like the new world to look like, and start planning for it and building it,” Metzl said.

In hindsight, it’s easy to picture a far better response and outcome to the COVID-19 outbreak. What if, three months ago, there’d been a global surveillance system in place, and at the first signs of the outbreak, an international emergency team led by the World Health Organization had immediately gone to Wuhan?

“We—all of us—need to re-invigorate a global system that can engage people inclusively across differences and across countries,” Metzl said. “We need to be articulating our long-term vision now so that we can evaluate everything against that standard.”

There’s not a total lack of a positive long-term vision now; the UN Sustainable Development Goals, for example, call for gender equality, no poverty, no hunger, decent work, climate action, and justice (among other goals) around the world.

The problem is that we don’t have institutions meaningful enough or strong enough to realize these principles; there’s a mismatch between the global nature of the problems we’re facing and the structure of national politics.

Building the New Normal

Just as our old normal was the new normal for our grandparents in the mid-1900s, this new normal that feels so shocking to us right now will simply be normal for our children and grandchildren. But there are some critical—and wonderful—differences between the mid-1900s and now.

We have more educated people, stronger connections, faster sharing of information, and more technological tools and scientific knowledge than ever before in history. “The number of people who can be part of this conversation is unprecedented,” Metzl said. “We couldn’t have done this in the industrial age or even the nuclear age. There’s never been this kind of motivation combined with this capacity around the world.”

In 1941, the global planning process was top-down: a small group of powerful, smart people decided how things would be, then took steps to make their vision a reality. But this time will be different; to succeed, the new global plan will need to have meaningful drive from the bottom up.

“We need to recognize a new locus of power,” Metzl said. “And it’s us. Nobody is going to solve this for us. This is our moment to really come together.”
