Saturday, April 15, 2017

News is bad for you – and giving up reading it will make you happier | Media | The Guardian



In the past few decades, the fortunate among us have recognised the hazards of living with an overabundance of food (obesity, diabetes) and have started to change our diets. But most of us do not yet understand that news is to the mind what sugar is to the body. News is easy to digest. The media feeds us small bites of trivial matter, tidbits that don't really concern our lives and don't require thinking. That's why we experience almost no saturation. Unlike reading books and long magazine articles (which require thinking), we can swallow limitless quantities of news flashes, which are bright-coloured candies for the mind. Today, we have reached the same point in relation to information that we faced 20 years ago in regard to food. We are beginning to recognise how toxic news can be.

News misleads. Take the following event (borrowed from Nassim Taleb). A car drives over a bridge, and the bridge collapses. What does the news media focus on? The car. The person in the car. Where he came from. Where he planned to go. How he experienced the crash (if he survived). But that is all irrelevant. What's relevant? The structural stability of the bridge. That's the underlying risk that has been lurking, and could lurk in other bridges. But the car is flashy, it's dramatic, it's a person (non-abstract), and it's news that's cheap to produce. News leads us to walk around with the completely wrong risk map in our heads. So terrorism is over-rated. Chronic stress is under-rated. The collapse of Lehman Brothers is over-rated. Fiscal irresponsibility is under-rated. Astronauts are over-rated. Nurses are under-rated.

We are not rational enough to be exposed to the press. Watching an airplane crash on television is going to change your attitude toward that risk, regardless of its real probability. If you think you can compensate with the strength of your own inner contemplation, you are wrong. Bankers and economists – who have powerful incentives to compensate for news-borne hazards – have shown that they cannot. The only solution: cut yourself off from news consumption entirely.

News is irrelevant. Out of the approximately 10,000 news stories you have read in the last 12 months, name one that – because you consumed it – allowed you to make a better decision about a serious matter affecting your life, your career or your business. The point is: the consumption of news is irrelevant to you. But people find it very difficult to recognise what's relevant. It's much easier to recognise what's new. The relevant versus the new is the fundamental battle of the current age. Media organisations want you to believe that news offers you some sort of a competitive advantage. Many fall for that. We get anxious when we're cut off from the flow of news. In reality, news consumption is a competitive disadvantage. The less news you consume, the bigger the advantage you have.

News has no explanatory power. News items are bubbles popping on the surface of a deeper world. Will accumulating facts help you understand the world? Sadly, no. The relationship is inverted. The important stories are non-stories: slow, powerful movements that develop below journalists' radar but have a transforming effect. The more "news factoids" you digest, the less of the big picture you will understand. If more information leads to higher economic success, we'd expect journalists to be at the top of the pyramid. That's not the case.

News is toxic to your body. It constantly triggers the limbic system. Panicky stories spur the release of cascades of glucocorticoid (cortisol). This deregulates your immune system and inhibits the release of growth hormones. In other words, your body finds itself in a state of chronic stress. High glucocorticoid levels cause impaired digestion, lack of growth (cell, hair, bone), nervousness and susceptibility to infections. The other potential side-effects include fear, aggression, tunnel-vision and desensitisation.

News increases cognitive errors. News feeds the mother of all cognitive errors: confirmation bias. In the words of Warren Buffett: "What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact." News exacerbates this flaw. We become prone to overconfidence, take stupid risks and misjudge opportunities. It also exacerbates another cognitive error: the story bias. Our brains crave stories that "make sense" – even if they don't correspond to reality. Any journalist who writes, "The market moved because of X" or "the company went bankrupt because of Y" is an idiot. I am fed up with this cheap way of "explaining" the world.

News inhibits thinking. Thinking requires concentration. Concentration requires uninterrupted time. News pieces are specifically engineered to interrupt you. They are like viruses that steal attention for their own purposes. News makes us shallow thinkers. But it's worse than that. News severely affects memory. There are two types of memory. Long-range memory's capacity is nearly infinite, but working memory is limited to a certain amount of slippery data. The path from short-term to long-term memory is a choke-point in the brain, but anything you want to understand must pass through it. If this passageway is disrupted, nothing gets through. Because news disrupts concentration, it weakens comprehension. Online news has an even worse impact. In a 2001 study two scholars in Canada showed that comprehension declines as the number of hyperlinks in a document increases. Why? Because whenever a link appears, your brain has to at least make the choice not to click, which in itself is distracting. News is an intentional interruption system.

News works like a drug. As stories develop, we want to know how they continue. With hundreds of arbitrary storylines in our heads, this craving is increasingly compelling and hard to ignore. Scientists used to think that the dense connections formed among the 100 billion neurons inside our skulls were largely fixed by the time we reached adulthood. Today we know that this is not the case. Nerve cells routinely break old connections and form new ones. The more news we consume, the more we exercise the neural circuits devoted to skimming and multitasking while ignoring those used for reading deeply and thinking with profound focus. Most news consumers – even if they used to be avid book readers – have lost the ability to absorb lengthy articles or books. After four, five pages they get tired, their concentration vanishes, they become restless. It's not because they got older or their schedules became more onerous. It's because the physical structure of their brains has changed.

News wastes time. If you read the newspaper for 15 minutes each morning, then check the news for 15 minutes during lunch and 15 minutes before you go to bed, then add five minutes here and there when you're at work, then count distraction and refocusing time, you will lose at least half a day every week. Information is no longer a scarce commodity. But attention is. You are not that irresponsible with your money, reputation or health. Why give away your mind?
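A back-of-the-envelope check of that arithmetic. The three 15-minute sessions and the "five minutes here and there" come from the text; the refocusing cost is an illustrative assumption, not a figure from the essay:

```python
# Rough check of the "at least half a day every week" claim.
SESSIONS_PER_DAY = [15, 15, 15]  # morning paper, lunch, before bed (minutes)
WORK_CHECKS = 2 * 5              # "five minutes here and there" at work
REFOCUS_COST = 5 * 5             # assumed ~5 min to refocus after each of ~5 interruptions

daily_minutes = sum(SESSIONS_PER_DAY) + WORK_CHECKS + REFOCUS_COST
weekly_hours = daily_minutes * 7 / 60

print(daily_minutes)   # 80 minutes a day
print(weekly_hours)    # roughly 9.3 hours a week
```

Even with a modest allowance for refocusing, the weekly total comfortably exceeds half a working day.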

News makes us passive. News stories are overwhelmingly about things you cannot influence. The daily repetition of news about things we can't act upon makes us passive. It grinds us down until we adopt a worldview that is pessimistic, desensitised, sarcastic and fatalistic. The scientific term is "learned helplessness". It's a bit of a stretch, but I would not be surprised if news consumption, at least partially, contributes to the widespread disease of depression.

News kills creativity. Finally, things we already know limit our creativity. This is one reason that mathematicians, novelists, composers and entrepreneurs often produce their most creative works at a young age. Their brains enjoy a wide, uninhabited space that emboldens them to come up with and pursue novel ideas. I don't know a single truly creative mind who is a news junkie – not a writer, not a composer, mathematician, physician, scientist, musician, designer, architect or painter. On the other hand, I know a bunch of viciously uncreative minds who consume news like drugs. If you want to come up with old solutions, read news. If you are looking for new solutions, don't.

Society needs journalism – but in a different way. Investigative journalism is always relevant. We need reporting that polices our institutions and uncovers truth. But important findings don't have to arrive in the form of news. Long journal articles and in-depth books are good, too.

I have now gone without news for four years, so I can see, feel and report the effects of this freedom first-hand: less disruption, less anxiety, deeper thinking, more time, more insights. It's not easy, but it's worth it.

This is an edited extract from The Art of Thinking Clearly: Better Thinking, Better Decisions by Rolf Dobelli, published by Sceptre (£9.99).

Saturday, March 18, 2017

Escape to another world | 1843


David Mullings was always a self-starter. Born in Jamaica, he moved to Florida to go to university, and founded his first company – a digital media firm that helped Caribbean content find a wider audience – before finishing business school at the University of Miami. In 2011 he opened a private-equity firm with his brother. In 2013 the two made their first big deal, acquiring an 80% stake in a Tampa-based producer of mobile apps. A year later it blew up in their faces, sinking their firm and their hopes.

Mullings struggled to recover from the blow. The odd consulting gig provided a distraction and some income. Yet depression set in as he found himself asking whether he had anything useful to contribute to the wider world.

Then Destiny called.

As for millions of people of a certain age, the Nintendo Entertainment System (NES) occupied a crucial place in Mullings's childhood. It introduced him to video gaming, gave him a taste for it, made him aware of the fact that he was good at it: a "born gamer", in his words. Yet the pixelated worlds of the Mario brothers, for all their delights, were nothing like the experiences available to gamers today.

Mullings's friends invited him to join them in playing Destiny, a "massively multi­player online game" (meaning that lots of different people around the world simultaneously play within the Destiny universe) and a "first-person shooter" (meaning that most of the gameplay involves the player looking out through a character's eyes and shooting stuff). The world surrounding the players is vast, filled with great, sweeping vistas rendered in extraordinary and realistic detail. It is a world of its own. Within that world, players, often in teams, take on quests and square off repeatedly in matches against opponents.

Before long Mullings was hooked, playing up to eight hours of Destiny each day. To all appearances, he had fallen into a familiar trap – increasingly common and difficult to escape in the eyes of some scholars studying the phenomenon – in which work gives way to, and is ultimately replaced by, the entrancing power of video games.

Since their earliest days video games have had their critics. Like countless others, I was told to turn off that brain-rotting device and get outside before I ruined my eyes and wits. At various times games have been blamed for contributing to obesity, to violence (including mass shootings), and to misogynistic behaviour – with young men often thought the most at-risk demographic.

Since those days when I would try to sneak in an extra half-hour of forbidden thrill, games have got immeasurably better. They are often beautiful, narratively interesting, enriching and social. Indeed, it is possible that they are too good. Today's games seem to be displacing careers, friendships and families, and thus stopping young people (particularly men) from starting real, adult lives.

Over the last 15 years there has been a steady and disconcerting leak of young people away from the labour force in America. Between 2000 and 2015, the employment rate for men in their 20s without a college education dropped ten percentage points, from 82% to 72%. In 2015, remarkably, 22% of men in this group – a cohort of people in the most consequential years of their working lives – reported to surveyors that they had not worked at all in the prior 12 months. That was in a year when the nationwide unemployment rate fell to 5% and the American economy added 2.7m new jobs. Back in 2000, less than 10% of such men were in similar circumstances.

What these individuals are not doing is clear enough, says Erik Hurst, an economist at the University of Chicago, who has been studying the phenomenon. They are not leaving home; in 2015 more than 50% lived with a parent or close relative. Neither are they getting married. What they are doing, Hurst reckons, is playing video games. As the hours young men spent in work dropped in the 2000s, hours spent in leisure activities rose nearly one-for-one. Of the rise in leisure time, 75% was accounted for by video games. It looks as though some small but meaningful share of the young-adult population is delaying employment or cutting back hours in order to spend more time with their video game of choice.

Unemployment sits differently with Chris than with David. It is, to some extent, an opportunity. "Work is a means to an end," he says. The end is enjoying the finer things life offers: travelling when finances permit, gaming and reading when they don't.

Chris, who is 30, lives in Ipswich, England, where he grew up. He is an IT contractor in the health-care sector, working when he gets a contract. The last one expired in July 2016. Thanks to government-imposed spending cuts, the pickings have since been rather slim, and Chris has moved back in with his family to save money. He follows the typical job-seeking strategies. He's on LinkedIn and in touch with recruiting agencies. But the jobs tend to go to others: "better candidates", Chris notes philosophically. Investments in training are not on the agenda at the moment.

Games are. Chris is something of a connoisseur; he likes to sample the new wares from high-quality production companies in the way a cinephile might anticipate the latest title from a favourite director. Grand strategy games – like Crusader Kings II, in which players manage a ruling dynasty over the course of centuries – are a particular favourite. Another – Hearts of Iron 4, in which the player controls a nation at war – has absorbed more than 100 hours over the last year. He will play for a few hours, then spend time reading. Old friends, many of whom stayed in Ipswich after leaving school, will join him for a few rounds of a multiplayer game on occasion.

Chris seems content. He has a girlfriend in California, whom he met while on holiday and sees a few times a year. I ask him if he would be bothered if his life were the same in ten years. Not really, he reckons. Not so long as contracts turn up often enough to allow him to buy the games he wants (which don't cost much) and to travel occasionally.

People work for many reasons – to occupy their time, to find purpose in life and to contribute to society, among other things – but the need to earn money typically comes top of the list. Money puts food on the table, clothes in the wardrobe and a roof overhead. Yet these days, satisfying those needs in the most basic way does not take an especially large income, particularly for those with the option of depending on family members for assistance. The reason to work harder and earn more than the minimum needed to survive is, in part, the desire to have something more than the bare necessities – nice meals, rather than the cheapest calories available, a car, holidays abroad, a home full of books and art. Much of the work we do is intended to earn the money to afford a few luxuries to add to our comfort and enrich our lives.

Yet we face a trade-off. The harder we work, the less time we have to enjoy the luxuries our labour affords us. The more lavish the luxuries we seek, the more we must earn to acquire them, and the longer and harder we find ourselves working.

Not all luxuries are tangible. In the autumn of 2016, Hurst released a paper, co-authored with Mark Aguiar, Mark Bils and Kerwin Charles. They define a class of activities they call "leisure luxuries". Economists typically (and reasonably) assume that people tend to buy more things as they earn more money. But as they grow richer, they buy proportionately more of some things and less of others. Spending on necessities, as a share of all consumption, declines as incomes rise. Economists label "luxuries" the things that account for an increased share of spending as income goes up. There is a similar logic to leisure luxuries. As the amount of time people spend at leisure (as opposed to work) rises, some activities (like bathing or sleep) account for a shrinking share of total leisure time. Others – the leisure luxuries – account for more.
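The "leisure luxury" definition can be sketched as a toy computation. The hours below are invented for illustration; they do not come from the Aguiar, Bils, Charles and Hurst paper:

```python
# An activity is a "leisure luxury" if its share of total leisure time
# rises as total leisure time rises (by analogy with spending luxuries).
def shares(hours: dict) -> dict:
    """Each activity's fraction of total leisure time."""
    total = sum(hours.values())
    return {k: v / total for k, v in hours.items()}

# Hypothetical time budgets (hours per day of leisure)
low_leisure = {"sleep_extra": 1.0, "gaming": 0.5}    # 1.5 h of leisure
high_leisure = {"sleep_extra": 1.5, "gaming": 3.5}   # 5.0 h of leisure

# Activities whose share grows when leisure time grows
luxuries = [k for k in low_leisure
            if shares(high_leisure)[k] > shares(low_leisure)[k]]

print(luxuries)  # gaming's share rises from ~0.33 to 0.70, so it qualifies
```

In this toy example, extra sleep behaves like a leisure necessity while gaming behaves like a leisure luxury, mirroring the paper's distinction.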

Not everyone takes their luxuries in the same way. Tastes differ. Some people might much rather have an excellent meal lasting one hour than a pretty good one lasting two; or a fancy car rather than a year of lazy Saturdays. For those who prefer tangible luxuries, or for whom the quality of an experience is more important than the quantity of it, some additional time off is not especially attractive. Better to work that extra hour and earn a bit more. Those who revel in the leisure luxuries – in the pursuit of a hobby, for instance, or time at gaming – do not need to spend much time on the job each week before the income gain from another hour at work starts to look a poor trade-off for an additional hour away from it.

As games improve, the terms of this trade-off change. Among those predisposed to the leisure-luxury life, better games mean people are quicker to swap working hours for gaming hours; given NES-era gaming technology, a twenty-something might decline an opportunity for overtime work to have a little longer with Mario and Luigi. Now, a part-time job might be all they are willing to do, so good are the worlds and characters waiting at home. For those with the means, any hour on the job is an hour too much.

For 26-year-old Guillaume, the trade-off is all too easy to understand. In May 2016 he finished his graduate-school training in business law. A few months later, he decided he didn't want to work in law after all; he wanted to play video games. Guillaume likes adventure games, which allow players to immerse themselves in fantastic and foreign worlds. During his studies, he could only spare a couple of hours each day for his habit. Now he can slip into his video-game worlds for five or six hours at a time. A law career would have meant more money. Yet it would also have meant much more time spent at law.

For now, financial concerns are not too pressing, as Guillaume's parents support him. He recognises that a lack of financial independence could prove stifling in future. It bothers him enough that he has not given up on the idea of work. But he has never met a lawyer who made him enthusiastic about the career, so he is planning to work in the games industry. He will earn less, but he will be gaming – which is what he has always wanted to do with his time anyway. When the only luxury one desires is the time to enjoy games, working long hours suddenly looks much less sensible.

Many gamers (Guillaume among them) report that they are happy with the decision to work less and game more. Yet economists like Hurst fret about the long-run consequences. Although digital-enter­tainment experiences are both amazingly enjoyable and relatively cheap, other important consumer goods – like houses and medical care, furniture and food – still cost money, sometimes quite a lot of it. People's tastes change as they age. Young men content to remain outside the labour force and play video games – while their parents provide food, shelter and health insurance – may begin to desire something else as the years pass. But, having been out of employment during a crucial period of life – early adulthood, when friendships and contacts are made, experience and skills cultivated – such gamers may find themselves unable to build the lives they come to realise they want.

One hears this regret in talking to older gamers. "Of course gaming has interfered with any attempt to look for or do any serious work," says Arturo, 29, who reckons he has spent 600 hours playing Kerbal Space Program, a space-flight simulator, and possibly more at StarCraft II, a strategy game. He doesn't just miss the forgone income and opportunities; he could have been reading, he laments. But those hours are gone for ever. Between the game reviews and player tips, online forums for gamers are thick with discussions among those who worry their lives are passing them by but cannot find the will to put down their controllers.

Stand back, however, and the implications are far more substantial than this. One can just about spot the vision of a distant, near-workless future in the habits of young gamers. If good things in life can be had for very little money, then working hard to have more than very little money looks less attractive. The history of the industrial era has been one in which technology has reduced the proportion of income devoted to necessities like food while providing vast new possibilities for consumption. As this happened, the hours worked by the typical person declined.

Our instinct, trained to see work as a critical component of adulthood and an obligation of healthy members of society, recoils at the thought of people spending their lives buried in alternate realities. How could society ever value time spent at games as it does time spent on "real" pursuits, on holidays with families or working in the back garden, to say nothing of time on the job? Yet it is possible that just as past generations did not simply normalise the ideal of time off but imbued it with virtue – barbecuing in the garden on weekends or piling the family into the car for a holiday – future generations might make hours spent each day on games something of an institution: an appropriate use of time that is the reward for society's technological wizardry and productive power.

That view hinges, however, on a crucial distinction: are those dropping out to tune in to video-game worlds jumping, lured by the attraction of the games they play, or have they been pushed?

Emily lives in a small town not far from Pittsburgh, Pennsylvania. In 2013 she graduated from university and took a job at a marketing firm – a miserable one which she left after a few months. She applied for entry-level tech jobs but found that even those positions tended to go to people with some experience. As weeks without work turned to months, her mood sank. "I pretty much felt like a piece of shit," she tells me.

Finances were not an immediate worry. She lived with her family while looking for work, but her mother was not happy with the situation: "[she] absolutely made it known that she thought I was lazy and a disappointment," Emily says – not that she needed any help feeling down.

The games were an escape from reality. Emily is a fan of the Fallout franchise: a series of role-playing games set in the future, after a nuclear apocalypse. Gaming lifted her mood, she tells me; achievements within them allowed her to feel that she was getting something right at a time when most things were going wrong. She knew it was only tricking her brain. She would beat herself up sometimes after playing for hours, rueing the potentially productive time lost to games. Now, in hindsight, she says she is glad she had the ability to escape for a while.

After months of unhappy unemployment, Emily found work: as a cashier in a local shop, a position for which she was vastly overqualified. She stayed there for more than a year, earning promotions, but nonetheless stuck in a career very different from what she had expected. In early 2016, her fortunes turned; piles of applications and rounds of interviews finally yielded a job in marketing. She hopes it will work out better than the last one.

For Emily, and for many others, games were not the luxury luring her away from a career. They were a comfort blanket and a distraction, providing some solace when the working world offered only bitter disappointment.

However one cuts the economic data of the last few decades, the labour market has become harder for the young. The Great Recession and its aftermath were somewhat worse for young workers than for the population as a whole. Yet the struggles of younger workers pre-date the crisis. Hourly wages, adjusted for inflation, have stagnated for young college graduates since the 1990s (that is, young graduates now earn roughly the same wage as new graduates did 20 years ago), while pay for new high-school graduates has declined. The shares of young high-school and college graduates not in work or education have risen; in 2014, about 11% of college graduates were apparently idle, compared with 9% in 2004 and 8% in 1994.

"Underemployment" – work in a position for which one is overqualified – has risen steadily since the beginning of the millennium; the share of recent college graduates working in jobs which did not require a college degree rose from just over 30% in the early 2000s to nearly 45% a decade later. As frustrated college students take jobs for which they are overqualified, young people with less education often find themselves competing for still less demanding work, which pays lower wages and offers less security and room for advancement.

One of the most important variables to consider in designing a video game is its difficulty. If a game is too simple, players will quickly get bored and the game will flop. If it is too difficult, gamers will grow frustrated, and the game will likewise prove a failure. Life, for many people, is a big game: the ultimate place to accumulate points and work one's way up the leaderboard. The economists who worry about the seductive power of gaming fear that gamers who miss the scheduled step away from virtual play and into a proper adulthood will never "level up" to that truly immersive competitive experience. Instead, they become stuck at a phase of the game which no longer satisfies, yet which they cannot move beyond.

The designers of the game of life, such as they are, may have erred in structuring the game in a way that encourages young people to seek an alternate reality. They have spread the thrills and valuable items too thinly and have tweaked the settings to reward special skills that cannot be mastered easily even by those prepared to spend long hours doing so. Unsurprisingly, some players are giving up, while others are filling the time not taken up in rewarding, well-compensated work with games painstakingly designed to make them feel good.

It is not always clear when gaming is the refuge of the trapped and when it is the trap. Ashley, aged 37, is certain that gaming is not the source of his problems. He played video games in his youth, but not obsessively; like other teenagers he made plenty of time for football and skateboarding. The games took on a different cast in his 20s, when he spent time abroad teaching English: he played heavily as a way to deal with the loneliness of being in a foreign place. But he was able to let the games go when he returned.

Then he enrolled in graduate school, to become a therapist, in a programme that required him to undertake his own intensive course of therapy. He fell into a deep depression, for which he blames the therapy. Gaming became his coping strategy, "a way of switching off thoughts", he says, and a means to turn away from responsibility. He resisted the label "addict". But that is what he has come to understand he is.

The depression is the problem, Ashley says, not the games, but the hours he spends playing at Pro Evolution Soccer are making things worse. They get in the way of his relationship. "She hates it," he says, when asked how his partner feels about the gaming. The potent combination of depression and gaming has also prevented him from progressing professionally. He has failed to complete his degree and his working life has stalled.

David Mullings's relationship with games is entirely different. He just got a job working for a hedge fund, after spending time volunteering for the Hillary Clinton campaign (in some games you can score more points than the opponent and still lose). Asked whether he regrets the time he spent as a hard-core gamer, he admitted it has costs. His wife frequently grew frustrated with him. She found herself texting him things like "Can I get that back rub?" in order to draw his attention away from the screen. But he could have been down at the bar with the guys, dealing with his disappointment that way. Instead he chose to game.

And what he got from the game was much more than mere distraction. It was fellowship with others. Indeed, his group of friends has become a broader online community, calling itself Dads of Destiny. The men bonded over shared experiences. "Sometimes a player would say, 'Guys, I need to change a baby,' and the other players would provide covering fire while he was gone." They helped each other. Dads would pass around their CVs and connect with each other on LinkedIn. One of their number, a veteran, credits their gaming community with helping him adjust to life after military service and deal with post-traumatic stress. David is pretty sure they have saved at least one marriage.

Other gamers tell similar stories: friends made while playing, skills they discovered or honed, discussions that led to jobs, and hours spent away from the troubles of a world that occasionally needs to be blocked out. Theirs are not the only stories. There is addiction. While some gaming communities are welcoming to all, others are relentlessly hostile to outsiders, and to women in particular. And games become the destructive vice of choice for some sets of players, taking the place of drugs or alcohol in a tragic but familiar narrative. But the game is a symptom of some broader weakness, sometimes of character, occasionally of mental health – and, perhaps, of society too.

Game designers often deploy a technique called "dynamic difficulty adjustment". In many games, the software assesses a player's skill and rebalances various attributes of the game accordingly, to keep the game fun and manageable for those of less ability. Gamers early in their careers, or who are simply struggling to pick up the skills necessary to succeed, are given a helping hand; their world might be more generously strewn with useful power-ups, for instance. As players advance, these helping hands are withdrawn.

There is a downside to such techniques, at least when they are used carelessly. One of my favourite game series has always been Mario Kart, a Nintendo racing game featuring characters from the Mario Brothers franchise. It uses "rubber banding" to keep the game interesting. That is: no matter how good a driver you are, your AI opponents can fall only so far behind; the software will allow them to break the rules of the game, and go faster than their little karts ought to be able to, in order to keep the game interesting. When playing human opponents, those who fall to the rear are showered with the most useful power-ups – such that a leader, after executing a near-perfect race, can be pummelled with misfortunes of one sort or another until a laggard pips him at the post. Clumsy, difficult adjustments like these make the game feel rigged and unfair, which makes it just as unappealing as one that is straightforwardly too easy, or too hard.
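Rubber banding of the kind described might be sketched like this. It is a toy model with invented numbers, not Nintendo's actual implementation:

```python
# Toy "rubber banding": trailing AI karts may exceed their normal top
# speed whenever they fall too far behind the leader.
BASE_TOP_SPEED = 10.0   # normal speed cap (arbitrary units)
MAX_GAP = 50.0          # beyond this gap, the boost kicks in

def ai_speed(leader_pos: float, ai_pos: float) -> float:
    """Speed allowed to an AI kart given its gap to the leader."""
    gap = leader_pos - ai_pos
    if gap <= MAX_GAP:
        return BASE_TOP_SPEED
    # Break the normal rules: 1% extra speed per unit of excess gap,
    # capped at a 50% boost so laggards catch up but never teleport.
    boost = min(0.5, 0.01 * (gap - MAX_GAP))
    return BASE_TOP_SPEED * (1.0 + boost)

print(ai_speed(100.0, 60.0))   # within the band: 10.0
print(ai_speed(1000.0, 0.0))   # far behind: boosted to 15.0
```

The cap on the boost is the design knob: set it too high and, as the article complains, a near-perfect race can still be stolen at the post.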

A life spent buried in video games, scraping by on meagre pay from irregular work or dependent on others, might seem empty and sad. Whether it is emptier and sadder than one spent buried in finance, accumulating points during long hours at the office while neglecting other aspects of life, is a matter of perspective. But what does seem clear is that the choices we make in life are shaped by the options available to us. A society that dislikes the idea of young men gaming their days away should perhaps invest in more dynamic difficulty adjustment in real life. And a society which regards such adjustments as fundamentally unfair should be more tolerant of those who choose to spend their time in an alternate reality, enjoying the distractions and the succour it provides to those who feel that the outside world is more rigged than the game.

Tuesday, December 27, 2016

Calafia Beach Pundit: The Fed doesn't control bond yields


For years I've been saying that the Fed can't control bond yields, but the myth that the Fed can manipulate yields (e.g., by buying lots of 10-yr Treasuries and/or by buying lots of MBS) persists. Experience tells us that the Fed can only control short-term rates, and even then it is questionable whether the Fed can move rates up or down by more than the market is ready for at any given time. Recall the bond market tantrum earlier this year, when the Fed hinted that it might raise rates several times over the course of the year; the Fed quickly backed off, ultimately raising rates only once (recently). Bond yields are effectively set by market forces, and are heavily influenced by the market's perception of the future of Fed policy, the expected level of inflation, and the outlook for economic growth.

Here are some charts which compare the history of the Fed's purchases of Treasuries and MBS and their corresponding yields. You can judge for yourself whether the Fed has managed to manipulate those yields with its Quantitative Easing programs.

The chart above shows what happened to 10-yr Treasury yields during the Fed's three Quantitative Easing programs. In each period, despite massive bond purchases, the yield on 10-yr Treasuries rose. Ironically, the Fed justified QE by saying it would depress yields, and that in turn would stimulate the economy. What I think really happened to cause yields to rise despite Fed bond purchases is this: the Fed's QE efforts supplied badly-needed bank reserves to the system, satisfying the market's thirst for safe assets; that resulted in healthier market liquidity, which made the market somewhat more optimistic about future growth, which in turn led the market to anticipate higher short-term interest rates from the Fed in the future.

The chart above compares the magnitude of the Fed's Treasury purchases with the level of 10-yr yields. It stands to reason that the Fed could potentially manipulate the bond market only if it buys or sells a quantity of bonds that is significant relative to the outstanding supply of those bonds. Thus the rationale for the blue line, which is the ratio of the Fed's holdings of Treasuries relative to the total marketable supply of Treasuries. Several things jump out: 1) during 2008, as Fed holdings of Treasuries were plunging, yields were falling; 2) in 2013, as Fed holdings of Treasuries were surging as part of QE3, Treasury yields surged (both in contrast to what the Fed promised QE would do); and 3) the Fed currently holds about 18% of outstanding Treasury debt, which is about the same share it held at the end of 2004 and less than the 20% it held at the end of 2002, yet yields today are much lower than they were back then. Tough to see any convincing or enduring correlation between these lines.
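The blue line's construction is simple enough to write down explicitly. A minimal sketch, with illustrative dollar figures chosen only to reproduce the roughly 18% share cited above (they are assumptions, not official data):

```python
# Sketch of the ratio behind the blue line: Fed Treasury holdings as a
# percentage of total marketable Treasury supply. Inputs are illustrative.

def fed_share(fed_holdings, marketable_supply):
    """Fed holdings as a percentage of marketable Treasury debt outstanding."""
    return 100.0 * fed_holdings / marketable_supply

# e.g. roughly $2.46tn of holdings against roughly $13.7tn of marketable debt
print(round(fed_share(2.46, 13.7), 1))
```

The point of plotting the share rather than the dollar amount is that only a purchase large relative to the outstanding supply could plausibly move the market.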

The chart above compares the magnitude of the Fed's MBS purchases with the level of MBS yields. Here we see that despite huge MBS purchases in 2009, MBS yields were relatively unchanged. More recently, with Fed holdings of MBS holding relatively steady, yields have surged. No convincing causality or correlation that I can see between these two lines.

The recent surge in bond yields has almost nothing to do with Fed policy, and very little to do with increased inflation expectations. It's mostly about an improving outlook for growth assuming that Trump is able to reduce the tax and regulatory burdens that have been holding back growth for the past decade.