Wednesday, January 30, 2013

Chicago murders top Afghanistan death toll


WND EXCLUSIVE

City where no handgun purchases allowed sets pace for violence

The death toll by murder in Chicago over the past decade is greater than the number of American forces who have died in Afghanistan since the beginning of Operation Enduring Freedom, according to a police analysis.

In addition, police reports in Chicago – where President Obama once worked as a community organizer and where his former chief of staff, Rahm Emanuel, now serves as mayor – show most of the city's massive murder mayhem is black-on-black crime.

A WND review of the Chicago Police Department Murder Analysis reports from 2003 to 2011 provides a statistical breakdown of the demographics of both the victims and offenders in the 4,265 murders in Chicago over that time period.

Of the victims of murder in Chicago from 2003 to 2011, an average of 77 percent had a prior arrest history, with a high of 79 percent of the 436 murdered in Chicago in 2010 having arrest histories.

For the same 2003-2011 period, blacks were the victims of 75 percent of 4,265 murders. Blacks also were the offenders in 75 percent of the murders.

According to 2010 U.S. Census information, Chicago has a population of 2,695,598 people. The city is 33 percent black, 32 percent white (not Hispanic), and 30 percent Hispanic or Latino in origin.

For the 2003-2011 period, whites were nearly 6 percent of the victims and the offenders in 4 percent of the murders.

For the 2003-2011 period, Hispanics or Latinos were 19 percent of the victims and 20 percent of the offenders.

Between 2003 and 2011, 4,265 people were murdered in the city of Chicago. In 2012 alone, 512 people were murdered in the city.

Operation Enduring Freedom, the name for the war in Afghanistan, which started Oct. 7, 2001, has seen a total of 2,166 Americans killed. The war has been ongoing for 11 years, 3 months and one week.

Operation Iraqi Freedom, the name for the war in Iraq, which started March 20, 2003, and ended Dec. 15, 2011, saw a total of 4,422 Americans killed.

In a city with some of the toughest gun control laws in America, where a handgun cannot be purchased, Fox News reported that Chicago Police Supt. Garry McCarthy "acknowledged aiming at assault weapons misses the mark when dealing with Chicago's gang violence."

"The weapon used is generally a handgun, and rarely is it purchased through legal channels," he said.

A WND review of the Chicago Police Department Murder Analysis reports from 2003 to 2011 provides a statistical breakdown of the manners in which people were murdered in Chicago.

Of the 4,251 people murdered, 3,371 died from being shot, with 98 percent of the murder weapons being a handgun. Thirty-seven people were killed with a rifle (caliber of bullet not specified), and 40 were killed with a shotgun.

In 27 of the murders, the type of gun used could not be determined by the Chicago Police Department.

Murders by stabbing in Chicago accounted for 9 percent of the total between 2003 and 2011; 7 percent of the people murdered in Chicago over that period died from what the Chicago Police Department classifies as "assault." Another 92 people were killed by strangulation, 27 by blunt force and 15 by asphyxiation, while 51 deaths fell into the "other" category.

A closer look at the instruments used in some of the 4,251 murders between 2003 and 2011 reveals:

  • In 2011, one person was killed with a pocketknife, one a baseball bat and one was asphyxiated with a pry bar.
  • In 2010, three people were killed with a kitchen knife, two with a baseball bat, one with a wooden board, one with rope/cordage and one with gasoline (burning).
  • In 2009, a pocketknife was used as the murder weapon once, as well as a concrete block/brick and baseball bat. Clothing was also used once in a strangulation murder.
  • In 2008, a baseball bat was used twice, clothing once and gasoline once as murder weapons.
  • In 2007, a baseball bat and a pipe were both used twice. A hammer was used four times. An electrical or phone cord was used once.
  • In 2006, a baseball bat was used four times.
  • In 2005, a screwdriver was used twice, a baseball bat four times, a bottle once, a hammer once and clothing once.
  • In 2004, a screwdriver was used once, a baseball bat seven times; a pipe, a tire iron, a bottle, and a concrete block/brick were all used once apiece. A pillow and an electrical or phone cord were also used once.
  • In 2003, a screwdriver and a pocketknife were each used once, as were a bottle, a pipe, a handgun (used as a blunt weapon) and a concrete block/brick. A baseball bat was used four times.

Less than 1 percent of the murders in Chicago between 2003 and 2011 were committed with what the Chicago Police Department classifies as a "rifle," the category that would include an AR-15.

WND previously reported that Chicago Police Superintendent Garry McCarthy had voiced opposition to concealed carry in a city where last year more people were murdered than in Afghanistan.

"When people say concealed carry, I say Trayvon Martin," McCarthy said.

He was speaking to a largely black audience at a Rainbow PUSH event in Chicago.

The Chicago Tribune reported McCarthy, Rev. Jesse Jackson Sr., local radio host Cliff Kelley and others discussed gun laws, Chicago's homicide rate and recent mass killings in Newtown, Conn., and Aurora, Colo., before a few hundred people.

McCarthy emphasized opposition to concealed carry, even though Illinois is the only state that doesn't permit the practice, the Chicago paper noted.

"'Just because it's 49 to one doesn't mean that Illinois is wrong,' McCarthy said.

He insisted supporters of concealed carry don't understand the consequences.

"'When people say concealed carry, I say Trayvon Martin,' McCarthy said, referring to the unarmed 17-year-old who was shot and killed last February by a neighborhood watch volunteer in Florida, sparking controversy across the country.

"I say Trayvon Martin," McCarthy continued, according to the Tribune, "because the answer to guns is not more guns, and just simply putting guns in people's hands is going to lead to more tragedy."


Tuesday, January 29, 2013

Works and Days » California at Twilight


We keep trying to understand the enigma of California, mostly why it still breathes for a while longer, given the efforts to destroy the sources of its success. Let's try to navigate through its sociology and politics to grasp why something that should not survive is surviving quite well — at least in some places.

Conservati delendi sunt

The old blue/red war for California is over. Conservatives lost. Liberals won — by a combination of flooding the state with government-supplied stuff, and welcoming millions in while showing the exit to others. The only mystery is how Carthaginian will be the victor's peace, e.g., how high will taxes go, how many will leave, how happy will the majority be at their departure?

The state of Pat Brown, Ronald Reagan, Pete Wilson, and George Deukmejian is long dead, due to the most radical demographic shifts of any one state in recent American history; the old California is as distant from the new as Cicero's Rome was from Nero's. One minor but telling example: Salinas, in Monterey County, where the murder rate is the highest in the state, just — at least I think the news story is not a prank — named its new middle school after Tiburcio Vasquez.

A convicted murderer.

He was the legendary 19th-century robber and murderer who was hanged for his crimes. But who is to say that Vasquez is a killer, and Henry Huntington a visionary?

The New Demography

California has changed not due to race but due to culture, most prominently because the recent generation of immigrants from Latin America did not — as in the past, for the most part — come legally in manageable numbers and integrate under the host's assimilationist paradigm. Instead, in the last three decades huge arrivals of illegal aliens from Mexico and Latin America saw Democrats as the party of multiculturalism, separatism, entitlements, open borders, non-enforcement of immigration laws, and eventually plentiful state employment.

Given the numbers, the multicultural paradigm of the salad bowl that focused on "diversity" rather than unity, and the massive new government assistance, how could the old American tonic of assimilation, intermarriage, and integration keep up with the new influxes? It could not.

Finally, we live in an era of untruth and Orwellian censorship. It is absolutely taboo to write about the above, or to talk about the ever-weirder artifacts of illegal immigration — the war now waged on black families in demographically changing areas of Los Angeles, the statistics behind DUI arrests, or the burgeoning profile of Medi-Cal recipients. The serial dissimulation in California recalls my high-school memorization of Sir Walter Raleigh:

Tell potentates, they live / Acting by others' action; / Not loved unless they give, / Not strong but by affection; / If potentates reply, / Give potentates the lie.

There were, of course, other parallel demographic developments. Hundreds of thousands of the working and upper-middle class, mostly from the interior of the state, have fled — maybe four million in all over the last thirty years, taking with them $1 trillion in capital and income-producing education and expertise. Apparently, they tired of high taxes, poor schools, crime, and the culture of serial blame-gaming and victimhood. In this reverse Dust Bowl migration, a barren no-tax Nevada or humid Texas was a bargain.

Their California is long gone ("Lo, all our pomp of yesterday / Is one with Nineveh and Tyre"), and a Stockton, Fresno, or Visalia misses their presence, because they had skills, education, and were net pluses to the California economy.

Add in a hip, young, and gay influx to the Bay Area, Silicon Valley, and coastal Los Angeles that saw California as a sort of upscale, metrosexual lifestyle (rule of thumb: conservatives always find better restaurants in liberal locales), and California now has an enormous number of single-person households, childless couples, and one-child families. Without the lifetime obligation to raise $1 million in capital to pay for bringing up and educating two kids from birth to 21 (if you're lucky), the non-traditional classes have plenty of disposable income for entertainment, housing, and high taxes. For examples, read Petronius, especially the visit to Croton.

Finally, there is our huge affluent public work force. It is the new aristocracy; landing a job with the state is like hitting the lottery. Californians have discovered that, in today's low/no-interest economy, a $70,000 salary with a defined-benefit public pension for life is far better than the income from a lifetime savings of $3 million.

Or, look at it another way: with passbooks paying 0.5-1%, the successful private accountant or lawyer could put away $10,000 a month for thirty years of his productive career and still not match the monthly retirement income of the Caltrans worker who quit at 60 with modest contributions to PERS.
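To make that concrete, here is a back-of-the-envelope sketch — a minimal illustration, not PERS's actual formula. The 0.5-1% passbook rate comes from the passage; the 75 percent pension replacement rate is an assumed figure:

    # Back-of-the-envelope comparison: private savings vs. a public pension.
    # The passbook rate is from the passage; the 75% replacement rate is assumed.
    monthly_saving = 10_000   # what the accountant or lawyer puts away each month
    years_saving = 30
    passbook_rate = 0.01      # top of the 0.5-1% range cited above

    principal = monthly_saving * 12 * years_saving   # $3.6 million, ignoring compounding
    annual_interest = principal * passbook_rate      # about $36,000 a year

    pension_salary = 70_000
    replacement_rate = 0.75                          # assumed defined-benefit payout
    annual_pension = pension_salary * replacement_rate   # $52,500 a year, for life

    print(f"Interest on ${principal:,} of savings: ${annual_interest:,.0f}/yr")
    print(f"Assumed pension on a ${pension_salary:,} salary: ${annual_pension:,.0f}/yr")

Even before taxes or cost-of-living adjustments, the assumed pension out-earns the interest on the saver's entire lifetime principal.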

And with money came political clout. To freeze the pension contribution of a highway patrolman is a mortal sin; but no one worries much about the private security guard's minimum wage and zero retirement, though his nightly duties are often just as dangerous. The former is sacrosanct; the latter a mere loser.

The result of 30 years of illegal immigration, the reigning culture of the coastal childless households, the exodus of the overtaxed, and the rule of public employees is not just Democratic, but hyper-liberal supermajorities in the legislature. In the most naturally wealthy state in the union with a rich endowment from prior generations, California is serially broke — the master now of its own fate. It has the highest menu of income, sales, and gas taxes in the nation, and about the worst infrastructure, business climate, and public education. Is the latter fact despite or because of the former?

How, then, does California continue? Read on, but in a nutshell, natural and inherited wealth are so great on the coast that a destructive state government must work overtime to ruin what others wrought.

Also, when you say, "My God, one of every three welfare recipients lives in California," or "California schools are terrible," you mean really, "Not in Newport or Carmel. So who cares about Fresno, or Tulare — they might as well be in Alabama for all the times I have been there."




Big Government 101: Federal Subsidies Create Glut Of College Grads - Investors.com


Higher Education: A new study finds almost half of Americans with college degrees are working at jobs that don't require one. It's the latest example of how federal subsidies are creating a massive higher-education bubble.

The study, by the Center for College Affordability and Productivity, found that an incredible 48% of college graduates — about 13 million of them — hold jobs that don't require a bachelor's degree. About 5 million have jobs that don't even require a high school diploma.

There are, for example, roughly a million sales clerks, 300,000 waiters and 100,000 janitors with college degrees.

This mismatch is up sharply from four decades ago, the study found. While 1% of taxi drivers had a college degree in 1970, to take one example, 15% do today. Back in 1967, fewer than 11% of college grads were overqualified for their jobs.

And this problem is not likely to get better anytime soon. Only seven of the 30 occupations projected to see the biggest growth over the next decade require any post-secondary education, the study found.

As a result, while colleges will churn out roughly 19 million college graduates between 2010 and 2020, the market will likely create fewer than 7 million new jobs that require at least a bachelor's degree.

Weirdly, at the same time, the country faces a shortage of skilled labor — plumbers, electricians, carpenters and the like.

A study by Deloitte out last year found manufacturers couldn't fill as many as 600,000 jobs due to the lack of skilled workers.

Another by the Boston Consulting Group found that this shortage could reach 875,000 by the end of the decade.

In a normal market, a huge excess in supply would send a signal back up the chain to produce less, and as a result, demand for expensive college degrees would drop. At the same time, the number of people learning a skilled trade would increase to fill that shortage.

But while college enrollment has declined slightly in the past year, it continues near historic levels. College costs continue to skyrocket, shooting up at three times the rate of inflation, which in turn has led to a doubling of average student debt.

In fact, the amount of student loan debt now tops $1 trillion, and is growing fast, while default rates are climbing.

If this all looks strange and mysterious, it is. Until, that is, you realize a big reason for all these distortions is the massive federal effort to encourage and subsidize college education.




Sunday, January 27, 2013

It is liberalism that got small: A GOP opening


With apologies to Norma Desmond: President Obama is big; it's liberalism that got small. Certainly the cult of personality and Obama's disdain for his opponents have never been larger. But in his championing of a collectivist agenda, the left's vision is, if one looks closely, reactionary and small.

Silent film star Buster Keaton in "The General" (AFI)

Obama ignores economic growth and job creation. He is stoutly opposed to entitlement reform. His tax deal with Republicans was stuffed full of tax goodies for big business. In the mind-numbing repetition of identity politics and the endless lists of spending items there is nothing that contributes to the prosperity of the 21st-century economy. Regulations (Dodd-Frank, gun proposals) become ends unto themselves, disconnected from their well-intentioned aims (a more stable financial system, reduced mass violence by mentally disturbed people). To regulate and to spend mean to be virtuous, regardless of whether people are helped. (To oppose useless or unhelpful regulation and to curb spending mean that one is heartless or downright evil.)

Obama's big initiatives are bureaucratic exercises (his big idea on guns shrinks every day as liberal pundits refuse to acknowledge his assault-weapons ban is another old, failed policy). Is filling out forms for private gun sales really the apex of liberalism? "Climate control" is more government regulation dressed up with a bow and high-minded condescension toward coal, natural-gas and oil workers and businesses. In foreign policy, his world view is self-delusion (peace in our time!) in the service of retrenchment.

What vast endeavor, what shining future does that all offer? It's embarrassing but inevitable that a self-described collectivist can't really find anything grand to undertake with his collectivism. The really big ideas — entitlement reform, tax reform, education reform, energy independence and even immigration reform — are coming from the right, offering to expand opportunity, increase upward mobility and ignite economic growth. Meanwhile, like California, Obama's welfare state takes more in taxes, delivers worse services, curbs economic growth, and shovels money into vast health and pension systems, the liabilities of which dwarf the citizenry's productive activities. The highest goal? Don't reform Medicare! For all that money he's borrowed and spent, you'd think we would have ended illiteracy, cured cancer or built that moon station.

This leaves miles of running room for conservative reformers. In contrast to the liberal vision, they should seek nimble, effective government and dynamic, empowering policies that emphasize choice, upward mobility, and community and individual self-determination.

Those are two sides of the same coin. We can't have a dynamic economy with a gargantuan debt and a deadening regulatory state. We won't have resources for education and technology if discretionary spending is gobbled up by entitlement programs. Republicans can't ignore federal fiscal reform or be indifferent to debt because, as students of conservatism know, it is essential to limit government (in size and cost) to open up space for private-sector dynamism, civic institutions, local and state government, and family choice.

Conservatives cannot realistically be anti-government (the market for libertarianism is small), but they should be in favor of limited, energetic government that does what it does well and supports and encourages a vibrant America beyond the Beltway. And it is in the states where the most dynamic governors in a generation are innovating and improving government and the lives of their residents.

Red-state governors, especially those with substandard schools and other services, do themselves and conservatives a disservice in claiming as their highest calling the elimination of taxes. Really, that's what it is all about? Instead, they should follow the lead of current and past GOP governors such as Indiana's Mitch Daniels, Ohio's John Kasich and Virginia's Bob McDonnell in keeping taxes modest but in crafting pro-business policies, improving services and making possible a higher quality of life. (No one running for the presidency, by the way, is going to show that wiping out income tax is a good foundation for national leadership and realistic policymaking.)

Republicans these days say that they don't need to change their principles, just their message. That's partially right. They certainly need to change messengers, focus and rhetoric. But they also have to reconsider what is a core principle, which policies are central to those principles, and which are outmoded, unwise or better left to states. As Sen. Marco Rubio (R-Fla.) has shown, there is nothing "conservative" about keeping our current lawless, unfair immigration system. Once a battle has been lost (e.g., gay marriage, a federal Department of Education), a party that rails at reality soon looks clueless and obtuse. There is no conservative "principle" in fruitless battles and unattainable ends (any state that recognizes gay marriage is never going to unrecognize it).

The dreary statism that defines the most liberal president to hold office reminds us that there is still room for an innovative, forward-looking party. The country needs at least one party dedicated to improving the lives of all Americans by empowering them to achieve their ambitions and lead fulfilling lives. There is still a need for a party to prune government so it does not strangle the economy and civil institutions. If the GOP can be that party, it can win key policy debates and many elections in the years ahead.




Saturday, January 26, 2013

Donald Boudreaux and Mark Perry: The Myth of a Stagnant Middle Class - WSJ.com


By DONALD J. BOUDREAUX
AND MARK J. PERRY

A favorite "progressive" trope is that America's middle class has stagnated economically since the 1970s. One version of this claim, made by Robert Reich, President Clinton's labor secretary, is typical: "After three decades of flat wages during which almost all the gains of growth have gone to the very top," he wrote in 2010, "the middle class no longer has the buying power to keep the economy going."

This trope is spectacularly wrong.

It is true enough that, when adjusted for inflation using the Consumer Price Index, the average hourly wage of nonsupervisory workers in America has remained about the same. But not just for three decades. The average hourly wage in real dollars has remained largely unchanged from at least 1964—when the Bureau of Labor Statistics (BLS) started reporting it.
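For readers who want the mechanics, converting a nominal wage to constant dollars just means scaling it by the ratio of the two years' CPI levels. A minimal sketch, using rounded CPI index values and hypothetical wages purely for illustration:

    # Deflating a nominal wage into constant dollars with the CPI.
    # Index values are rounded approximations (1982-84 = 100); wages are hypothetical.
    cpi_1964 = 31.0
    cpi_2012 = 229.6

    wage_1964 = 2.50    # hypothetical nonsupervisory hourly wage, 1964
    wage_2012 = 19.00   # hypothetical nonsupervisory hourly wage, 2012

    real_1964_in_2012_dollars = wage_1964 * (cpi_2012 / cpi_1964)
    print(f"1964 wage in 2012 dollars: ${real_1964_in_2012_dollars:.2f}")   # ~$18.52
    print(f"2012 wage in 2012 dollars: ${wage_2012:.2f}")
    # The two figures land close together: the "flat real wage" described above.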

Moreover, there are several problems with this measurement of wages. First, the CPI overestimates inflation by underestimating the value of improvements in product quality and variety. Would you prefer 1980 medical care at 1980 prices, or 2013 care at 2013 prices? Most of us wouldn't hesitate to choose the latter.

[Illustration by Chad Crowe]

Second, this wage figure ignores the rise over the past few decades in the portion of worker pay taken as (nontaxable) fringe benefits. This is no small matter—health benefits, pensions, paid leave and the rest now amount to an average of almost 31% of total compensation for all civilian workers according to the BLS.

Third and most important, the average hourly wage is held down by the great influx of women and immigrants into the workforce over the past three decades. Precisely because the U.S. economy was flexible and strong, it created millions of jobs for the many often lesser-skilled workers who sought employment during these years.

Since almost all lesser-skilled workers entering the workforce in any given year are paid wages lower than the average, the measured statistic, "average hourly wage," remained stagnant over the years—even while the real wages of actual flesh-and-blood workers employed in any given year rose over time as they gained more experience and skills.

These three factors tell us that flat average wages over time don't necessarily support a narrative of middle-class stagnation. Still, pessimists reject these arguments. Rather than debate esoteric matters such as how to properly adjust for inflation, however, let's examine some other measures of middle-class living standards.

No single measure of well-being is more informative or important than life expectancy. Happily, an American born today can expect to live approximately 79 years—a full five years longer than in 1980 and more than a decade longer than in 1950. These longer life spans aren't just enjoyed by "privileged" Americans. As the New York Times reported this past June 7, "The gap in life expectancy between whites and blacks in America has narrowed, reaching the lowest point ever recorded." This necessarily means that life expectancy for blacks has risen even more impressively than it has for whites.

Americans are also much better able to enjoy their longer lives. According to the Bureau of Economic Analysis, spending by households on many of modern life's "basics"—food at home, automobiles, clothing and footwear, household furnishings and equipment, and housing and utilities—fell from 53% of disposable income in 1950 to 44% in 1970 to 32% today.

One underappreciated result of the dramatic fall in the cost (and rise in the quality) of modern "basics" is that, while income inequality might be rising when measured in dollars, it is falling when reckoned in what's most important—our ability to consume. Before airlines were deregulated, for example, commercial jet travel was a luxury that ordinary Americans seldom enjoyed. Today, air travel for many Americans is as routine as bus travel was during the disco era, thanks to a 50% decline in the real price of airfares since 1980.

Bill Gates in his private jet flies with more personal space than does Joe Six-Pack when making a similar trip on a commercial jetliner. But unlike his 1970s counterpart, Joe routinely travels the same great distances in roughly the same time as do the world's wealthiest tycoons.

What's true for long-distance travel is also true for food, cars, entertainment, electronics, communications and many other aspects of "consumability." Today, the quantities and qualities of what ordinary Americans consume are closer to that of rich Americans than they were in decades past. Consider the electronic products that every middle-class teenager can now afford—iPhones, iPads, iPods and laptop computers. They aren't much inferior to the electronic gadgets now used by the top 1% of American income earners, and often they are exactly the same.

Even though the inflation-adjusted hourly wage hasn't changed much in 50 years, it is unlikely that an average American would trade his wages and benefits in 2013—along with access to the most affordable food, appliances, clothing and cars in history, plus today's cornucopia of modern electronic goods—for the same real wages but with much lower fringe benefits in the 1950s or 1970s, along with those era's higher prices, more limited selection, and inferior products.

Despite assertions by progressives who complain about stagnant wages, inequality and the (always) disappearing middle class, middle-class Americans have more buying power than ever before. They live longer lives and have much greater access to the services and consumer products bought by billionaires.

Mr. Boudreaux is professor of economics at George Mason University and chair for the study of free market capitalism at the Mercatus Center. Mr. Perry is a professor of economics at the University of Michigan-Flint and a resident scholar at the American Enterprise Institute.

A version of this article appeared January 23, 2013, on page A17 in the U.S. edition of The Wall Street Journal, with the headline: The Myth of a Stagnant Middle Class.





Friday, January 11, 2013

Better Than Human: Why Robots Will — And Must — Take Our Jobs | Gadget Lab | Wired.com


Imagine that 7 out of 10 working Americans got fired tomorrow. What would they all do?

It's hard to believe you'd have an economy at all if you gave pink slips to more than half the labor force. But that—in slow motion—is what the industrial revolution did to the workforce of the early 19th century. Two hundred years ago, 70 percent of American workers lived on the farm. Today automation has eliminated all but 1 percent of their jobs, replacing them (and their work animals) with machines. But the displaced workers did not sit idle. Instead, automation created hundreds of millions of jobs in entirely new fields. Those who once farmed were now manning the legions of factories that churned out farm equipment, cars, and other industrial products. Since then, wave upon wave of new occupations have arrived—appliance repairman, offset printer, food chemist, photographer, web designer—each building on previous automation. Today, the vast majority of us are doing jobs that no farmer from the 1800s could have imagined.

It may be hard to believe, but before the end of this century, 70 percent of today's occupations will likewise be replaced by automation. Yes, dear reader, even you will have your job taken away by machines. In other words, robot replacement is just a matter of time. This upheaval is being led by a second wave of automation, one that is centered on artificial cognition, cheap sensors, machine learning, and distributed smarts. This deep automation will touch all jobs, from manual labor to knowledge work.

First, machines will consolidate their gains in already-automated industries. After robots finish replacing assembly line workers, they will replace the workers in warehouses. Speedy bots able to lift 150 pounds all day long will retrieve boxes, sort them, and load them onto trucks. Fruit and vegetable picking will continue to be robotized until no humans pick outside of specialty farms. Pharmacies will feature a single pill-dispensing robot in the back while the pharmacists focus on patient consulting. Next, the more dexterous chores of cleaning in offices and schools will be taken over by late-night robots, starting with easy-to-do floors and windows and eventually getting to toilets. The highway legs of long-haul trucking routes will be driven by robots embedded in truck cabs.

All the while, robots will continue their migration into white-collar work. We already have artificial intelligence in many of our machines; we just don't call it that. Witness one piece of software by Narrative Science (profiled in issue 20.05) that can write newspaper stories about sports games directly from the games' stats or generate a synopsis of a company's stock performance each day from bits of text around the web. Any job dealing with reams of paperwork will be taken over by bots, including much of medicine. Even those areas of medicine not defined by paperwork, such as surgery, are becoming increasingly robotic. The rote tasks of any information-intensive job can be automated. It doesn't matter if you are a doctor, lawyer, architect, reporter, or even programmer: The robot takeover will be epic.

And it has already begun.

Here's why we're at the inflection point: Machines are acquiring smarts.

We have preconceptions about how an intelligent robot should look and act, and these can blind us to what is already happening around us. To demand that artificial intelligence be humanlike is the same flawed logic as demanding that artificial flying be birdlike, with flapping wings. Robots will think different. To see how far artificial intelligence has penetrated our lives, we need to shed the idea that it must be humanlike.

Consider Baxter, a revolutionary new workbot from Rethink Robotics. Designed by Rodney Brooks, the former MIT professor who invented the best-selling Roomba vacuum cleaner and its descendants, Baxter is an early example of a new class of industrial robots created to work alongside humans. Baxter does not look impressive. It's got big strong arms and a flatscreen display like many industrial bots. And Baxter's hands perform repetitive manual tasks, just as factory robots do. But it's different in three significant ways.

First, it can look around and indicate where it is looking by shifting the cartoon eyes on its head. It can perceive humans working near it and avoid injuring them. And workers can see whether it sees them. Previous industrial robots couldn't do this, which meant that working robots had to be physically segregated from humans. The typical factory robot is imprisoned within a chain-link fence or caged in a glass case. They are simply too dangerous to be around, because they are oblivious to others. This isolation prevents such robots from working in a small shop, where isolation is not practical. Optimally, workers should be able to get materials to and from the robot or to tweak its controls by hand throughout the workday; isolation makes that difficult. Baxter, however, is aware. Using force-feedback technology to feel if it is colliding with a person or another bot, it is courteous. You can plug it into a wall socket in your garage and easily work right next to it.

Second, anyone can train Baxter. It is not as fast, strong, or precise as other industrial robots, but it is smarter. To train the bot you simply grab its arms and guide them in the correct motions and sequence. It's a kind of "watch me do this" routine. Baxter learns the procedure and then repeats it. Any worker is capable of this show-and-tell; you don't even have to be literate. Previous workbots required highly educated engineers and crack programmers to write thousands of lines of code (and then debug them) in order to instruct the robot in the simplest change of task. The code has to be loaded in batch mode, i.e., in large, infrequent batches, because the robot cannot be reprogrammed while it is being used. Turns out the real cost of the typical industrial robot is not its hardware but its operation. Industrial robots cost $100,000-plus to purchase but can require four times that amount over a lifespan to program, train, and maintain. The costs pile up until the average lifetime bill for an industrial robot is half a million dollars or more.

The third difference, then, is that Baxter is cheap. Priced at $22,000, it's in a different league compared with the $500,000 total bill of its predecessors. It is as if those established robots, with their batch-mode programming, are the mainframe computers of the robot world, and Baxter is the first PC robot. It is likely to be dismissed as a hobbyist toy, missing key features like sub-millimeter precision, and not serious enough. But as with the PC, and unlike the mainframe, the user can interact with it directly, immediately, without waiting for experts to mediate—and use it for nonserious, even frivolous things. It's cheap enough that small-time manufacturers can afford one to package up their wares or custom paint their product or run their 3-D printing machine. Or you could staff up a factory that makes iPhones.
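The cost comparison in the last two paragraphs is simple arithmetic; here is a quick sketch using the article's own round numbers (the only assumption is reading "four times that amount" as lifetime operating cost on top of the purchase price):

    # Lifetime-cost arithmetic from the article's round numbers.
    purchase = 100_000
    operation = 4 * purchase                      # programming, training, maintenance
    lifetime_traditional = purchase + operation   # the "half a million dollars or more"

    baxter_price = 22_000                         # Baxter's sticker price
    print(f"Traditional robot, lifetime bill: ${lifetime_traditional:,}")
    print(f"Baxter, purchase price:           ${baxter_price:,}")
    print(f"Ratio: {lifetime_traditional / baxter_price:.0f}x")   # roughly 23x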

Baxter was invented in a century-old brick building near the Charles River in Boston. In 1895 the building was a manufacturing marvel in the very center of the new manufacturing world. It even generated its own electricity. For a hundred years the factories inside its walls changed the world around us. Now the capabilities of Baxter and the approaching cascade of superior robot workers spur Brooks to speculate on how these robots will shift manufacturing in a disruption greater than the last revolution. Looking out his office window at the former industrial neighborhood, he says, "Right now we think of manufacturing as happening in China. But as manufacturing costs sink because of robots, the costs of transportation become a far greater factor than the cost of production. Nearby will be cheap. So we'll get this network of locally franchised factories, where most things will be made within 5 miles of where they are needed."

That may be true of making stuff, but a lot of jobs left in the world for humans are service jobs. I ask Brooks to walk with me through a local McDonald's and point out the jobs that his kind of robots can replace. He demurs and suggests it might be 30 years before robots will cook for us. "In a fast food place you're not doing the same task very long. You're always changing things on the fly, so you need special solutions. We are not trying to sell a specific solution. We are building a general-purpose machine that other workers can set up themselves and work alongside." And once we can cowork with robots right next to us, it's inevitable that our tasks will bleed together, and soon our old work will become theirs—and our new work will become something we can hardly imagine.

To understand how robot replacement will happen, it's useful to break down our relationship with robots into four categories, as summed up in this chart:

The rows indicate whether robots will take over existing jobs or make new ones, and the columns indicate whether these jobs seem (at first) like jobs for humans or for machines.
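The chart image itself did not survive the clipping, but from the four quadrant discussions that follow it can be reconstructed roughly as:

                       Seem like jobs for humans             Seem like jobs for machines
    Existing jobs      A: robots can do them even better     B: humans can't do them at all
    New jobs           D: only humans can do them, at first  C: jobs we didn't know we wanted done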

Let's begin with quadrant A: jobs humans can do but robots can do even better. Humans can weave cotton cloth with great effort, but automated looms make perfect cloth, by the mile, for a few cents. The only reason to buy handmade cloth today is because you want the imperfections humans introduce. We no longer value irregularities while traveling 70 miles per hour, though—so the fewer humans who touch our car as it is being made, the better.

And yet for more complicated chores, we still tend to believe computers and robots can't be trusted. That's why we've been slow to acknowledge how they've mastered some conceptual routines, in some cases even surpassing their mastery of physical routines. A computerized brain known as the autopilot can fly a 787 jet unaided, but irrationally we place human pilots in the cockpit to babysit the autopilot "just in case." In the 1990s, computerized mortgage appraisals replaced human appraisers wholesale. Much tax preparation has gone to computers, as well as routine x-ray analysis and pretrial evidence-gathering—all once done by highly paid smart people. We've accepted utter reliability in robot manufacturing; soon we'll accept it in robotic intelligence and service.

Next is quadrant B: jobs that humans can't do but robots can. A trivial example: Humans have trouble making a single brass screw unassisted, but automation can produce a thousand exact ones per hour. Without automation, we could not make a single computer chip—a job that requires degrees of precision, control, and unwavering attention that our animal bodies don't possess. Likewise no human, indeed no group of humans, no matter their education, can quickly search through all the web pages in the world to uncover the one page revealing the price of eggs in Katmandu yesterday. Every time you click on the search button you are employing a robot to do something we as a species are unable to do alone.

While the displacement of formerly human jobs gets all the headlines, the greatest benefits bestowed by robots and automation come from their occupation of jobs we are unable to do. We don't have the attention span to inspect every square millimeter of every CAT scan looking for cancer cells. We don't have the millisecond reflexes needed to inflate molten glass into the shape of a bottle. We don't have an infallible memory to keep track of every pitch in Major League Baseball and calculate the probability of the next pitch in real time.

We aren't giving "good jobs" to robots. Most of the time we are giving them jobs we could never do. Without them, these jobs would remain undone.

Now let's consider quadrant C, the new jobs created by automation—including the jobs that we did not know we wanted done. This is the greatest genius of the robot takeover: With the assistance of robots and computerized intelligence, we already can do things we never imagined doing 150 years ago. We can remove a tumor in our gut through our navel, make a talking-picture video of our wedding, drive a cart on Mars, print a pattern on fabric that a friend mailed to us through the air. We are doing, and are sometimes paid for doing, a million new activities that would have dazzled and shocked the farmers of 1850. These new accomplishments are not merely chores that were difficult before. Rather they are dreams that are created chiefly by the capabilities of the machines that can do them. They are jobs the machines make up.

Before we invented automobiles, air-conditioning, flatscreen video displays, and animated cartoons, no one living in ancient Rome wished they could watch cartoons while riding to Athens in climate-controlled comfort. Two hundred years ago not a single citizen of Shanghai would have told you that they would buy a tiny slab that allowed them to talk to faraway friends before they would buy indoor plumbing. Crafty AIs embedded in first-person-shooter games have given millions of teenage boys the urge, the need, to become professional game designers—a dream that no boy in Victorian times ever had. In a very real way our inventions assign us our jobs. Each successful bit of automation generates new occupations—occupations we would not have fantasized about without the prompting of the automation.

To reiterate, the bulk of new tasks created by automation are tasks only other automation can handle. Now that we have search engines like Google, we set the servant upon a thousand new errands. Google, can you tell me where my phone is? Google, can you match the people suffering depression with the doctors selling pills? Google, can you predict when the next viral epidemic will erupt? Technology is indiscriminate this way, piling up possibilities and options for both humans and machines.

It is a safe bet that the highest-earning professions in the year 2050 will depend on automations and machines that have not been invented yet. That is, we can't see these jobs from here, because we can't yet see the machines and technologies that will make them possible. Robots create jobs that we did not even know we wanted done.

Finally, that leaves us with quadrant D, the jobs that only humans can do—at first. The one thing humans can do that robots can't (at least for a long while) is to decide what it is that humans want to do. This is not a trivial trick; our desires are inspired by our previous inventions, making this a circular question.

When robots and automation do our most basic work, making it relatively easy for us to be fed, clothed, and sheltered, then we are free to ask, "What are humans for?" Industrialization did more than just extend the average human lifespan. It led a greater percentage of the population to decide that humans were meant to be ballerinas, full-time musicians, mathematicians, athletes, fashion designers, yoga masters, fan-fiction authors, and folks with one-of-a-kind titles on their business cards. With the help of our machines, we could take up these roles; but of course, over time, the machines will do these as well. We'll then be empowered to dream up yet more answers to the question "What should we do?" It will be many generations before a robot can answer that.

This postindustrial economy will keep expanding, even though most of the work is done by bots, because part of your task tomorrow will be to find, make, and complete new things to do, new things that will later become repetitive jobs for the robots. In the coming years robot-driven cars and trucks will become ubiquitous; this automation will spawn the new human occupation of trip optimizer, a person who tweaks the traffic system for optimal energy and time usage. Routine robo-surgery will necessitate the new skills of keeping machines sterile. When automatic self-tracking of all your activities becomes the normal thing to do, a new breed of professional analysts will arise to help you make sense of the data. And of course we will need a whole army of robot nannies, dedicated to keeping your personal bots up and running. Each of these new vocations will in turn be taken over by robots later.

The real revolution erupts when everyone has personal workbots, the descendants of Baxter, at their beck and call. Imagine you run a small organic farm. Your fleet of worker bots does all the weeding, pest control, and harvesting of produce, as directed by an overseer bot, embodied by a mesh of probes in the soil. One day your task might be to research which variety of heirloom tomato to plant; the next day it might be to update your custom labels. The bots perform everything else that can be measured.

Right now it seems unthinkable: We can't imagine a bot that can assemble a stack of ingredients into a gift or manufacture spare parts for our lawn mower or fabricate materials for our new kitchen. We can't imagine our nephews and nieces running a dozen workbots in their garage, churning out inverters for their friend's electric-vehicle startup. We can't imagine our children becoming appliance designers, making custom batches of liquid-nitrogen dessert machines to sell to the millionaires in China. But that's what personal robot automation will enable.

Everyone will have access to a personal robot, but simply owning one will not guarantee success. Rather, success will go to those who innovate in the organization, optimization, and customization of the process of getting work done with bots and machines. Geographical clusters of production will matter, not for any differential in labor costs but because of the differential in human expertise. It's human-robot symbiosis. Our human assignment will be to keep making jobs for robots—and that is a task that will never be finished. So we will always have at least that one "job."

In the coming years our relationships with robots will become ever more complex. But already a recurring pattern is emerging. No matter what your current job or your salary, you will progress through these Seven Stages of Robot Replacement, again and again:

  • 1. A robot/computer cannot possibly do the tasks I do.

    [Later.]

  • 2. OK, it can do a lot of them, but it can't do everything I do.

    [Later.]

  • 3. OK, it can do everything I do, except it needs me when it breaks down, which is often.

    [Later.]

  • 4. OK, it operates flawlessly on routine stuff, but I need to train it for new tasks.

    [Later.]

  • 5. OK, it can have my old boring job, because it's obvious that was not a job that humans were meant to do.

    [Later.]

  • 6. Wow, now that robots are doing my old job, my new job is much more fun and pays more!

    [Later.]

  • 7. I am so glad a robot/computer cannot possibly do what I do now.

This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You'll be paid in the future based on how well you work with robots. Ninety percent of your coworkers will be unseen machines. Most of what you do will not be possible without them. And there will be a blurry line between what you do and what they do. You might no longer think of it as a job, at least at first, because anything that seems like drudgery will be done by robots.

We need to let robots take over. They will do jobs we have been doing, and do them much better than we can. They will do jobs we can't do at all. They will do jobs we never imagined even needed to be done. And they will help us discover new jobs for ourselves, new tasks that expand who we are. They will let us focus on becoming more human than we were.

Let the robots take the jobs, and let them help us dream up new work that matters.

Kevin Kelly (kk.org) is senior maverick of Wired and the author, most recently, of What Technology Wants.





Thursday, January 10, 2013

Charts

Our favorite charts of 2012

By Quartz Staff, December 17, 2012

Quartz is known for being a bit obsessed with charts, so we asked members of our editorial staff to submit their favorite charts produced in 2012. What "favorite" means was left to the beholder. Here are the results…

The world's shifting center of gravity


In a single visual, this McKinsey map shows the shift of the world's economic center of gravity over the past 1,000 years—and forecasts an accelerating move farther east in the coming 15. It's one of the starkest representations of the intense global change that we're living in, capturing the quiet but steady shifts of economic power over time. —Kevin Delaney

The hottest year ever recorded


It's not official yet, but 2012 was undoubtedly the warmest year ever recorded in the United States—and by a wide margin. The situation is similar across the globe. This chart, from the National Oceanic and Atmospheric Administration in the US, shows plainly how hot this year has been. Temperatures through November have been a startling 3.3°F above average. —Zach Seward

The economic impacts of natural disasters


Talking about climate change in terms of rising sea levels and warmer temperatures still feels rather abstract. Casting it in terms of what it does to economies is another matter. This year saw increasingly solid data about that connection. And while climate scientists are usually cautious about linking any individual storm to global warming, they were able to point to various signs that superstorms like Hurricane Sandy are becoming more likely. The above graphic from Bloomberg Businessweek sums up the economic costs of natural disasters over the past three decades: The number of events costing $1 billion or more since 1996 was double that of the previous 15 years. —Gideon Lichfield

What the Chinese are worried about


I like this chart, by Quartz reporter Ritchie King using data from the Pew Research Center, because it not only tells us the top 12 issues Chinese people are becoming more worried about, but also what problems we've been missing in our reporting on China. Food safety—a topic that was in the news a lot in 2008 after the Sanlu milk formula scandal but not since—saw the highest increase in anxiety between 2008 and 2012. While over half of people surveyed think government corruption is a problem, that's only about a 10% increase since 2008. This is a guess, but if anything is going to bring on mass protests and take down China's dictatorial government, it's more likely to be bad food than a massive graft scandal. —Lily Kuo

The rise and fall of the PC


Here's why I like this chart by Horace Dediu: When you recognize that all computing devices—be they phones, tablets, or traditional PCs—are now, somewhat interchangeably, carrying out the same functions, it makes sense to look at the total market share of all of these devices together. Which brings home the point that not only are we shifting computing to devices that some would hesitate to call "computers," but also that this represents a unique moment in the past 15 or so years, when the redefinition of computing itself is allowing for a diversity in operating systems and hardware platforms. And it seems almost inevitable that, a decade hence, Android will be as dominant as Windows was from the late 1980s until 2009. —Christopher Mims

Italy's diminishing enthusiasm for the euro


While Greece appears to be the problem child of Europe, a far higher percentage of French, Germans, Spaniards, and Italians would actually prefer to leave the euro and return to their national currencies, according to Morgan Stanley. Continued economic stagnation has generated angst, and the probability of a prolonged recession in the euro area is high.

With new elections next year and most likely without the leadership of Italian PM Mario Monti, the survival of the euro could once again come under fire from a new source: Italy. More troublesome still is that Italy is less dependent upon foreign aid than other peripheral countries; though its debt was equal to 120.1% of its GDP in 2011, Italy nearly balances its yearly budget. An Italian departure from the euro is probably unlikely, but an Italian shock to the euro area could be crippling. —Simone Foxman

Americans finally start to borrow again


This Barclays chart might not be the most dramatic-looking, but it is precisely where the rubber meets the road for the US economy. Since the financial crisis, the US has been mired in what economists like to call a "liquidity trap." It sounds complicated, but basically it means that people and companies are so freaked out about the state of the economy or their own finances—or both—that they have zero appetite for borrowing money. That means that the Fed's main way of prodding the economy out of recession—lowering interest rates to induce borrowing—loses potency. Why? Again, because people don't want to borrow.

An added wrinkle is that the Fed's powers of prodding the economy forward were also weakened during this recession because despite record low interest rates, many people were unable to refinance their mortgages because they were underwater—owed more than the house was worth—making them ineligible. So what this chart shows is refinancing activity picking up, along with mortgage hiring. It suggests both demand for loans and capacity for lending are growing. And it could be the start of a very important turn for the US economy. —Matt Phillips

The history of US debt from 1790 to 2011


This chart, by Quartz reporter Ritchie King in a post by Matt Phillips, is my favorite because it contextualizes the US economy to show that we're not quite in the financial apocalypse that seems to be upon us. The debt load during World War II was far worse—and it was followed by one of America's periods of greatest prosperity. —Lauren Brown

The invisible bailout of the US economy


This graph, from a January McKinsey report, shows how the United States government has been taking on debt so that consumers and businesses may shed it. Government stimulus measures and transfer payments allow consumers to deleverage. In return, debt accumulates with the public sector, which borrows more cheaply and has a longer time horizon for managing debt than people do. That helps explain why the American economy has been expanding, thanks in part to more consumer demand, even as public debt has swollen to levels unsustainable in the long term. It's an invisible bailout. —Tim Fernholz

The mission to kill Bin Laden, deconstructed


It ended with the world's most wanted man buried at sea, but it started with a sandwich order to Costco. Bloomberg Businessweek compiled the events of May 1, 2011, into a comprehensive timeline of the actions, rumor, and decisions that surrounded the raid that killed Osama bin Laden. —David Yanofsky

Obama's chances, deconstructed


What an awesome utility. I spent all evening on the day of the US election glued to this thing from The New York Times, answering my own what-if questions, watching different branches disappear as different states were projected. I love how the graphic takes a problem that's way too crazy for mental math—the combinatorics of electoral outcomes in swing states—and makes it tractable, not by simplifying it, but by providing a way to explore the complexity. —Ritchie King

And finally…


I love an irreverent chart. I love how a few lines can bring an obvious but overlooked observation into sharp focus. Jessica Hagy serves up simple graphs daily on her blog Indexed. This one is called, "Otherwise it's just an argument." Yes, her titles often contain more words than her charts. —Gloria Dawson




Sunday, January 6, 2013

Why Workers Are Losing the War Against Machines - Atlantic Mobile


In the 21st century war of man vs. machine in the workplace, what if man isn't supposed to prevail?


At least since the followers of Ned Ludd smashed mechanized looms in 1811, workers have worried about automation destroying jobs. Economists have reassured them that new jobs would be created even as old ones were eliminated. For over 200 years, the economists were right. Despite massive automation of millions of jobs, more Americans had jobs at the end of each decade up through the end of the 20th century. However, this empirical fact conceals a dirty secret. There is no economic law that says that everyone, or even most people, automatically benefit from technological progress.

People with little economics training intuitively grasp this point. They understand that some human workers may lose out in the race against the machine. Ironically, the best-educated economists are often the most resistant to this idea, as the standard models of economic growth implicitly assume that economic growth benefits all residents of a country. However, just as Nobel Prize-winning economist Paul Samuelson showed that outsourcing and offshoring do not necessarily increase the welfare of all workers, it is also true that technological progress is not a rising tide that automatically raises all incomes. Even as overall wealth increases, there can be, and usually will be, winners and losers. And the losers are not necessarily some small segment of the labor force like buggy whip manufacturers. In principle, they can be a majority or even 90% or more of the population.

If wages can freely adjust, then the losers keep their jobs in exchange for accepting ever-lower compensation as technology continues to improve. But there's a limit to this adjustment. Shortly after the Luddites began smashing the machinery that they thought threatened their jobs, the economist David Ricardo, who initially thought that advances in technology would benefit all, developed an abstract model that showed the possibility of technological unemployment. The basic idea was that at some point, the equilibrium wages for workers might fall below the level needed for subsistence. A rational human would see no point in taking a job at a wage that low, so the worker would go unemployed and the work would be done by a machine instead.
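Ricardo's logic is easy to make concrete with a toy calculation (the numbers below are invented for illustration; they are not from Ricardo or from this excerpt). Competition caps a worker's wage at whatever it costs to do the same task with a machine; once that ceiling falls below subsistence, the rational choice is to stop working.

```python
SUBSISTENCE = 15.0   # assumed minimum daily wage a worker can live on

machine_cost = 40.0  # assumed machine cost per worker-day of output
for year in range(10):
    # Competition caps the wage at the machine's cost for the same task.
    wage_ceiling = machine_cost
    status = "employed" if wage_ceiling >= SUBSISTENCE else "replaced by machine"
    print(f"year {year}: wage ceiling ${wage_ceiling:5.2f} -> {status}")
    machine_cost *= 0.85  # technology cuts the machine's cost 15% a year
```

Under these made-up numbers the worker keeps the job at an ever-shrinking wage for about seven years; then the ceiling drops below subsistence and employment ends, which is precisely the horse's fate in the passage below.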

Of course, this was only an abstract model. But in his book A Farewell to Alms, economist Gregory Clark gives an eerie real-world example of this phenomenon in action:

There was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse. The population of working horses actually peaked in England long after the Industrial Revolution, in 1901, when 3.25 million were at work. Though they had been replaced by rail for long-distance haulage and by steam engines for driving machinery, they still plowed fields, hauled wagons and carriages short distances, pulled boats on the canals, toiled in the pits, and carried armies into battle. But the arrival of the internal combustion engine in the late nineteenth century rapidly displaced these workers, so that by 1924 there were fewer than two million. There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

As technology continues to advance in the second half of the chessboard, taking on jobs and tasks that used to belong only to human workers, one can imagine a time in the future when more and more jobs are more cheaply done by machines than humans. And indeed, the wages of unskilled workers have trended downward for over 30 years, at least in the United States.

We also now understand that technological unemployment can occur even when wages are still well above subsistence if there are downward rigidities that prevent them from falling as quickly as advances in technology reduce the costs of automation. Minimum wage laws, unemployment insurance, health benefits, prevailing wage laws, and long-term contracts--not to mention custom and psychology--make it difficult to rapidly reduce wages. Furthermore, employers will often find wage cuts damaging to morale. As the efficiency wage literature notes, such cuts can be demotivating to employees and cause companies to lose their best people.

But complete wage flexibility would be no panacea, either. Ever-falling wages for significant shares of the workforce are not exactly an appealing solution to the threat of technological unemployment. Aside from the damage to the living standards of the affected workers, lower pay only postpones the day of reckoning. Moore's Law is not a one-time blip but an accelerating exponential trend.
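A back-of-the-envelope calculation shows why wage cuts buy so little time against an exponential trend. Suppose, purely for illustration (these figures are not from the excerpt), that automation costs halve every two years while a worker earning $20 a day competes with a machine that currently costs $80 a day:

```python
import math

WAGE = 20.0          # assumed daily wage
AUTO_COST = 80.0     # assumed current daily cost of automating the job
HALVING_YEARS = 2.0  # assumed Moore's-Law-like cost-halving period

decay = math.log(2) / HALVING_YEARS  # continuous rate of cost decline

def years_until_parity(wage):
    """Years until the machine's cost falls to the given wage."""
    return math.log(AUTO_COST / wage) / decay

base = years_until_parity(WAGE)
after_cut = years_until_parity(WAGE * 0.8)  # accept a 20% pay cut
print(f"machine undercuts current wage in {base:.1f} years")
print(f"after a 20% pay cut: {after_cut:.1f} years "
      f"({after_cut - base:.1f} extra years bought)")
```

With these assumptions a 20% pay cut buys roughly eight extra months, and even halving the wage buys only two more years. That is the sense in which lower pay merely postpones the day of reckoning.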

The threat of technological unemployment is real. To understand this threat, we'll define three overlapping sets of winners and losers that technical change creates: (1) high-skilled vs. low-skilled workers, (2) superstars vs. everyone else, and (3) capital vs. labor. Each set has well-documented facts and compelling links to digital technology. What's more, these sets are not mutually exclusive. In fact, the winners in one set are more likely to be winners in the other two sets as well, which concentrates the consequences.

In each case, economic theory is clear. Even when technological progress increases productivity and overall wealth, it can also affect the division of rewards, potentially making some people worse off than they were before the innovation. In a growing economy, the gains to the winners may be larger than the losses of those who are hurt, but this is a small consolation to those who come out on the short end of the bargain.

Ultimately, the effects of technology are an empirical question--one that is best settled by looking at the data. For all three sets of winners and losers, the news is troubling. Let's look at each in turn.

1. High-Skilled vs. Low-Skilled Workers

We'll start with skill-biased technical change, which is perhaps the most carefully studied of the three phenomena. This is technical change that increases the relative demand for high-skill labor while reducing or eliminating the demand for low-skill labor. A lot of factory automation falls into this category, as routine drudgery is turned over to machines while more complex programming, management, and marketing decisions remain the purview of humans.

A recent paper by economists Daron Acemoglu and David Autor highlights the growing divergence in earnings between the most-educated and least-educated workers. Over the past 40 years, weekly wages for those with a high school degree have fallen and wages for those with a high school degree and some college have stagnated. On the other hand, college-educated workers have seen significant gains, with the biggest gains going to those who have completed graduate training (Figure 3.5).

What's more, this increase in the relative price of educated labor--their wages--comes during a period where the supply of educated workers has also increased. The combination of higher pay in the face of growing supply points unmistakably to an increase in the relative demand for skilled labor. Because those with the least education typically already had the lowest wages, this change has increased overall income inequality.
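The inference here is textbook supply and demand, and it can be made concrete with a small calculation in the spirit of the standard relative supply-demand framework (the elasticity and all numbers below are invented for illustration, not estimates from the paper). On a downward-sloping relative demand curve, a larger supply of skilled workers should depress the skill premium; if the premium rose anyway, demand must have shifted out, and we can back out by how much:

```python
import math

SIGMA = 1.4  # assumed elasticity of substitution between skill groups
# Relative demand curve: ln(premium) = intercept - (1/SIGMA) * ln(supply)
s_then, s_now = 0.30, 0.45  # assumed relative supply of skilled labor
p_then, p_now = 1.40, 1.70  # assumed observed skill premium (wage ratio)

# If demand had not moved, the extra supply alone would lower the premium:
p_no_shift = p_then * (s_then / s_now) ** (1 / SIGMA)

# The outward demand shift needed to square rising supply with a rising premium:
demand_shift = math.log(p_now) - math.log(p_no_shift)
print(f"premium if demand were unchanged: {p_no_shift:.2f}")
print(f"implied outward demand shift: {demand_shift:.2f} log points")
```

With these toy numbers the premium would have fallen to about 1.05 had demand stayed put, so the observed rise to 1.70 implies a large outward shift in relative demand, which is the argument the paragraph makes in words.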

[Figure 3.5: wages, productivity, and inequality]

It's clear from the chart in Figure 3.5 that wage divergence accelerated in the digital era. As documented in careful studies by David Autor, Lawrence Katz, and Alan Krueger, as well as Frank Levy and Richard Murnane and many others, the increase in the relative demand for skilled labor is closely correlated with advances in technology, particularly digital technologies. Hence, the moniker "skill-biased technical change," or SBTC. There are two distinct components to recent SBTC. Technologies like robotics, numerically controlled machines, computerized inventory control, and automatic transcription have been substituting for routine tasks, displacing those workers. Meanwhile other technologies like data visualization, analytics, high-speed communications, and rapid prototyping have augmented the contributions of more abstract and data-driven reasoning, increasing the value of those jobs.

Skill-biased technical change has also been important in the past. For most of the 19th century, about 25% of all agricultural labor threshed grain. That job was automated in the 1860s. The 20th century was marked by an accelerating mechanization not only of agriculture but also of factory work. Echoing the first Nobel Prize winner in economics, Jan Tinbergen, Harvard economists Claudia Goldin and Larry Katz described the resulting SBTC as a "race between education and technology." Ever-greater investments in education, dramatically increasing the average educational level of the American workforce, helped prevent inequality from soaring as technology automated more and more unskilled work. While education is certainly not synonymous with skill, it is one of the most easily measurable correlates of skill, so this pattern suggests that demand for upskilling has increased faster than its supply.

Studies by this book's co-author Erik Brynjolfsson, along with Timothy Bresnahan, Lorin Hitt, and Shinkyu Yang, found that a key aspect of SBTC was not just the skills of those working with computers, but more importantly the broader changes in work organization that were made possible by information technology. The most productive firms reinvented and reorganized decision rights, incentive systems, information flows, hiring systems, and other aspects of organizational capital to get the most from the technology. This, in turn, required radically different and, generally, higher skill levels in the workforce. It was not so much that those directly working with computers had to be more skilled, but rather that whole production processes, and even industries, were reengineered to exploit powerful new information technologies. What's more, each dollar of computer hardware was often the catalyst for more than $10 of investment in complementary organizational capital. The intangible organizational assets are typically much harder to change, but they are also much more important to the success of the organization.

As the 21st century unfolds, automation is affecting broader swaths of work. Even the low wages earned by factory workers in China have not insulated them from being undercut by new machinery and the complementary organizational and institutional changes. For instance, Terry Gou, the founder and chairman of the electronics manufacturer Foxconn, announced this year a plan to purchase 1 million robots over the next three years to replace much of his workforce. The robots will take over routine jobs like spraying paint, welding, and basic assembly. Foxconn currently has 10,000 robots, with 300,000 expected to be in place by next year.

2. Superstars vs. Everyone Else

The second division is between superstars and everyone else. Many industries are winner-take-all or winner-take-most competitions, in which a few individuals get the lion's share of the rewards. Think of pop music, professional athletics, and the market for CEOs. Digital technologies increase the size and scope of these markets. These technologies replicate not only information goods but increasingly business processes as well. As a result, the talents, insights, or decisions of a single person can now dominate a national or even global market. Meanwhile good, but not great, local competitors are increasingly crowded out of their markets. The superstars in each field can now earn much larger rewards than they did in earlier decades.

The effects are evident at the top of the income distribution. The top 10% of the wage distribution has done much better than the rest of the labor force, but even within this group there has been growing inequality. Income has grown faster for the top 1% than the rest of the top decile. In turn, the top 0.1% and top 0.01% have seen their income grow even faster. This is not run-of-the-mill skill-biased technical change but rather reflects the unique rewards of superstardom. Sherwin Rosen, himself a superstar economist, laid out the economics of superstars in a seminal 1981 article. In many markets, consumers are willing to pay a premium for the very best. If technology exists for a single seller to cheaply replicate his or her services, then the top-quality provider can capture most--or all--of the market. The next-best provider might be almost as good yet get only a tiny fraction of the revenue.

Technology can convert an ordinary market into one that is characterized by superstars. Before the era of recorded music, the very best singer might have filled a large concert hall but at most would only be able to reach thousands of listeners over the course of a year. Each city might have its own local stars, with a few top performers touring nationally, but even the best singer in the nation could reach only a relatively small fraction of the potential listening audience. Once music could be recorded and distributed at a very low marginal cost, however, a small number of top performers could capture the majority of revenues in every market, from classical music's Yo-Yo Ma to pop's Lady Gaga.
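A toy simulation makes Rosen's mechanism vivid using exactly this recorded-music story (the setup and all numbers here are mine, invented for illustration). Before recording, every city's fans are captive to their local performer; after recording, everyone can buy from the single best one:

```python
import random

random.seed(42)
N_CITIES, FANS_PER_CITY = 50, 1000
# One performer per city, each with a random quality score (toy setup).
quality = [random.uniform(0.5, 1.0) for _ in range(N_CITIES)]

# Pre-recording era: each performer's revenue comes from local fans only.
local_revenue = [FANS_PER_CITY] * N_CITIES

# Recording era: replication is nearly free, so every fan buys from the
# single best performer, however slim that performer's quality edge.
best = max(range(N_CITIES), key=lambda i: quality[i])
recorded_revenue = [0] * N_CITIES
recorded_revenue[best] = N_CITIES * FANS_PER_CITY

print(f"local era: top share = {max(local_revenue) / sum(local_revenue):.1%}")
print(f"recorded era: top share = {recorded_revenue[best] / sum(recorded_revenue):.1%}")
```

The top performer goes from a 2% share to essentially the whole market even though the runner-up may be only a hair worse, which is the winner-take-all dynamic the next paragraph describes spreading across industries.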

Economists Robert Frank and Philip Cook documented how winner-take-all markets have proliferated as technology transformed not only recorded music but also software, drama, sports, and every other industry that can be transmitted as digital bits. This trend has accelerated as more of the economy is based on software, either implicitly or explicitly. As we discussed in our 2008 Harvard Business Review article, digital technologies make it possible to replicate not only bits but also processes. For instance, companies like CVS have embedded processes like prescription drug ordering into their enterprise information systems. Each time CVS makes an improvement, it is propagated across 4,000 stores nationwide, amplifying its value. As a result, the reach and impact of an executive decision, like how to organize a process, is correspondingly larger.

In fact, the ratio of CEO pay to average worker pay has increased from 70 in 1990 to 300 in 2005, and much of this growth is linked to the greater use of IT, according to recent research that Erik did with his student Heekyung Kim. They found that increases in the compensation of other top executives followed a similar, if less extreme, pattern. Aided by digital technologies, entrepreneurs, CEOs, entertainment stars, and financial executives have been able to leverage their talents across global markets and capture rewards that would have been unimaginable in earlier times.

To be sure, technology is not the only factor that affects incomes. Political factors, globalization, changes in asset prices, and, in the case of CEOs and financial executives, corporate governance also play a role. In particular, the financial services sector has grown dramatically as a share of GDP and even more as a share of profits and compensation, especially at the top of the income distribution. While efficient finance is essential to a modern economy, it appears that a significant share of returns to large human and technological investments in the past decade, such as those in sophisticated computerized program trading, was from rent redistribution rather than genuine wealth creation. Other countries, with different institutions and also slower adoption of IT, have seen less extreme changes in inequality. But the overall changes in the United States have been substantial. According to economist Emmanuel Saez, the top 1% of U.S. households got 65% of all the growth in the economy since 2002. In fact, Saez reports that the top 0.01% of households in the United States--that is, the 14,588 families with income above $11,477,000--saw their share of national income double from 3% to 6% between 1995 and 2007.

3. Capital vs. Labor

The third division is between capital and labor. Most types of production require both machinery and human labor. According to bargaining theory, the wealth they generate is divided according to relative bargaining power, which in turn typically reflects the contribution of each input. If the technology decreases the relative importance of human labor in a particular production process, the owners of capital equipment will be able to capture a bigger share of income from the goods and services produced. To be sure, capital owners are also humans--so it's not like the wealth disappears from society--but capital owners are typically a very different and smaller group than the ones doing most of the labor, so the distribution of income will be affected.
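The accounting can be illustrated with the textbook Cobb-Douglas production function (a standard classroom device, not a model used in this excerpt; the parameter values are invented). With output Y = K^a * L^(1-a) and each factor paid its marginal product, capital's share of income is exactly a, so technology that raises a moves income from labor to capital even when output is unchanged:

```python
# Cobb-Douglas illustration: Y = K**a * L**(1 - a).
# Under competitive pricing, capital earns a*Y and labor earns (1 - a)*Y.
K, L = 100.0, 100.0  # assumed capital stock and labor force

for a in (0.30, 0.35, 0.40):  # technology shifts weight toward capital
    Y = K**a * L**(1 - a)
    capital_income = a * Y      # r*K, capital's competitive payment
    labor_income = (1 - a) * Y  # w*L, labor's competitive payment
    print(f"a={a:.2f}: output={Y:.0f}, "
          f"labor share={labor_income / Y:.0%}, "
          f"capital share={capital_income / Y:.0%}")
```

With K = L, output never moves in this example, yet labor's share falls from 70% to 60% as a rises, a stylized version of the declining labor share documented in the next few paragraphs.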

In particular, if technology replaces labor, you might expect that the shares of income earned by equipment owners would rise relative to laborers--the classic bargaining battle between capital and labor. This has been happening increasingly in recent years. As noted by Kathleen Madigan, since the recession ended, real spending on equipment and software has soared by 26% while payrolls have remained essentially flat.

Furthermore, there is growing evidence that capital has captured a growing share of GDP in recent years. As shown in Figure 3.6, corporate profits have easily surpassed their pre-recession levels.

[Figure 3.6: corporate profits]

According to the recently updated data from the U.S. Commerce Department, recent corporate profits accounted for 23.8% of total domestic corporate income, a record high share that is more than 1 full percentage point above the previous record. Similarly, corporate profits as a share of GDP are at 50-year highs. Meanwhile, compensation to labor in all forms, including wages and benefits, is at a 50-year low. Capital is getting a bigger share of the pie, relative to labor.

The recession exacerbated this trend, but it's part of a long-term change in the economy. As noted by economists Susan Fleck, John Glaser, and Shawn Sprague, the trend line for labor's share of GDP was essentially flat between 1974 and 1983 but has been falling since then. When one thinks about the workers in places like Foxconn's factory being replaced by labor-saving robots, it's easy to imagine a technology-driven story for why the relative shares of income might be changing.

It's important to note that the "labor" share in the Bureau of Labor Statistics' data includes wages paid to CEOs, finance professionals, professional athletes, and other "superstars" discussed above. In this sense, the declining labor share understates how badly the median worker has fared. It may also understate the division of income between capital and labor, insofar as CEOs and other top executives may have bargaining power to capture some of the "capital's share" that would otherwise accrue to owners of common stock.

This is the third part of our three-part excerpt from Erik Brynjolfsson and Andrew McAfee's Race Against the Machine (Digital Frontier Press). Read Part 1 here. Read Part 2 here.




