Month: May 2016

America Returns To Cuba

Barack Obama’s visit to Cuba is the first by a US president since Calvin Coolidge went in 1928. American investors, expat Cubans, tourists, scholars, and scam artists will follow in Obama’s wake. Normalization of the bilateral relationship will pose opportunities and perils for Cuba, and a giant test of maturity for the United States.

The Cuban Revolution led by Fidel Castro 57 years ago was a profound affront to the US psyche. Since the founding of the US, its leaders have staked a claim to American exceptionalism. So compelling is the US model, according to its leaders, that every decent country must surely choose to follow America’s lead. When foreign governments are foolish enough to reject the American way, they should expect retribution for harming US interests (seen to align with universal interests) and thereby threatening US security.

With Havana a mere 90 miles from the Florida Keys, American meddling in Cuba has been incessant. Thomas Jefferson opined in 1820 that the US “ought, at the first possible opportunity, to take Cuba.” It finally did so in 1898, when the US intervened in a Cuban rebellion against Spain to assert effective US economic and political hegemony over the island.

In the fighting that ensued, the US grabbed Guantánamo as a naval base and asserted (in the now infamous Platt Amendment) a future right to intervene in Cuba. US Marines repeatedly occupied Cuba thereafter, and Americans quickly took ownership of most of Cuba’s lucrative sugar plantations, the economic aim of America’s intervention. General Fulgencio Batista, who was eventually overthrown by Castro, was the last of a long line of repressive rulers installed and maintained in power by the US.

The US kept Cuba under its thumb, and, in accordance with US investor interests, the export economy remained little more than sugar and tobacco plantations throughout the first half of the twentieth century. Castro’s revolution to topple Batista aimed to create a modern, diversified economy. Given the lack of a clear strategy, however, that goal was not to be achieved.

Castro’s agrarian reforms and nationalization, which began in 1959, alarmed US sugar interests and led the US to introduce new trade restrictions. These escalated to cuts in Cuba’s allowable sugar exports to the US and an embargo on US oil and food exports to Cuba. When Castro turned to the Soviet Union to fill the gap, President Dwight Eisenhower issued a secret order to the CIA to topple the new regime, leading to the disastrous Bay of Pigs invasion in 1961, in the first months of John F. Kennedy’s administration.

Later, the CIA was given the green light to assassinate Castro. In 1962, Soviet leader Nikita Khrushchev decided to forestall another US invasion – and teach the US a lesson – by surreptitiously installing nuclear missiles in Cuba, thereby triggering the October 1962 Cuban missile crisis, which brought the world to the brink of nuclear annihilation.

Through dazzling restraint by both Kennedy and Khrushchev, and no small measure of good luck, humanity was spared; the Soviet missiles were removed, and the US pledged not to launch another invasion. Instead, the US doubled down on the trade embargo, demanded restitution for nationalized properties, and pushed Cuba irrevocably into the Soviet Union’s waiting arms. Cuba’s sugar monoculture remained in place, though its output now headed to the Soviet Union rather than the US.

The half-century of a Soviet-style economy, exacerbated by the US trade embargo and related policies, took a heavy toll. In purchasing-power terms, Cuba’s per capita income stands at roughly one-fifth of the US level. Yet Cuba’s achievements in boosting literacy and public health are substantial. Life expectancy in Cuba equals that of the US, and is much higher than in most of Latin America. Cuban doctors have played an important role in disease control in Africa in recent years.

Normalization of diplomatic relations creates two very different scenarios for US-Cuba relations. In the first, the US reverts to its bad old ways, demanding draconian policy measures by Cuba in exchange for “normal” bilateral economic relations. Congress might, for example, uncompromisingly demand the restitution of property that was nationalized during the revolution; the unrestricted right of Americans to buy Cuban land and other property; privatization of state-owned enterprises at fire-sale prices; and the end of progressive social policies such as the public health system. It could get ugly.

In the second scenario, which would constitute a historic break with precedent, the US would exercise self-restraint. Congress would restore trade relations with Cuba, without insisting that Cuba remake itself in America’s image or forcing Cuba to revisit the post-revolution nationalizations. Cuba would not be pressured to abandon state-financed health care or to open the health sector to private American investors. Cubans look forward to such a mutually respectful relationship, but bristle at the prospect of renewed subservience.

This is not to say that Cuba should move slowly on its own reforms. Cuba should quickly make its currency convertible for trade, expand property rights, and (with considerable care and transparency) privatize some enterprises.

Such market-based reforms, combined with robust public investment, could speed economic growth and diversification, while protecting Cuba’s achievements in health, education, and social services. Cuba can and should aim for Costa Rican-style social democracy, rather than the cruder capitalism of the US. (The first author here believed the same about Poland 25 years ago: It should aim for Scandinavian-style social democracy, rather than the neo-liberalism of Ronald Reagan and Margaret Thatcher.)

The resumption of economic relations between the US and Cuba is therefore a test for both countries. Cuba needs significant reforms to meet its economic potential without jeopardizing its great social achievements. The US needs to exercise unprecedented and unaccustomed self-control, to allow Cuba the time and freedom of maneuver it needs to forge a modern and diversified economy that is mostly owned and operated by the Cuban people themselves rather than their northern neighbors.

Financing Health And Education For All

Our world is immensely wealthy and could easily finance a healthy start in life for every child on the planet. A small shift of financing from wasteful US military spending to global funds for health and education, or a very small levy on tax havens’ deposits, would make the world vastly fairer, safer, and more productive.

In 2015, around 5.9 million children under the age of five, almost all in developing countries, died from easily preventable or treatable causes. And, according to a recent estimate, up to 200 million children and adolescents do not attend primary or secondary school owing to poverty, including 110 million who miss out through the lower-secondary level. In both cases, massive suffering could be ended with a modest amount of global funding.

Children in poor countries die from causes – such as unsafe childbirth, vaccine-preventable diseases, infections such as malaria for which low-cost treatments exist, and nutritional deficiencies – that have been almost totally eliminated in the rich countries. In a moral world, we would devote our resources to ending such deaths.

In fact, the world has made a half-hearted effort. Deaths of young children have fallen to slightly under half the 12.7 million recorded in 1990, thanks to additional global funding for disease control, channeled through new institutions such as the Global Fund to Fight AIDS, Tuberculosis, and Malaria.

When I first recommended such a fund in 2000, skeptics said that more money would not save lives. Yet the Global Fund proved the doubters wrong: More money prevented millions of deaths from AIDS, TB, and malaria. It was well used.

The reason that child deaths fell to 5.9 million, rather than to near zero, is that the world gave only about half the funding necessary. While most countries can cover their health needs with their own budgets, the poorest countries cannot. They need about $50 billion per year of global help to close the financing gap. Current global aid for health runs at about $25 billion per year. While these numbers are only approximate, we need roughly an additional $25 billion per year to help prevent up to six million deaths per year. It’s hard to imagine a better bargain.

Similar calculations help us to estimate the global funding needed to enable all children to complete at least a high-school education. UNESCO recently calculated the global education “financing gap” to cover the incremental costs – of classrooms, teachers, and supplies – of universal completion of secondary school at roughly $39 billion. With current global funding for education at around $10-15 billion per year, the gap is again roughly $25 billion, similar to health care. And, as with health care, such increased global funding could effectively flow through a new Global Fund for Education.
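The gap arithmetic in the two paragraphs above can be sketched as follows (the figures are the approximate annual ones cited in the text, in billions of dollars; taking the midpoint of the $10-15 billion education range is an assumption made here for illustration):

```python
# Approximate annual figures cited above, in billions of US dollars per year.
health_need, health_current_aid = 50, 25
education_need, education_current = 39, 12.5  # 12.5 = midpoint of the $10-15 billion range

# Each financing gap is simply the estimated need minus current global funding.
health_gap = health_need - health_current_aid       # roughly $25 billion
education_gap = education_need - education_current  # likewise roughly $25 billion

total_extra = health_gap + education_gap            # on the order of $50 billion per year
print(f"Extra funding needed: about ${total_extra:.0f} billion per year")
```

The point of the sketch is only that the two gaps are of similar size and sum to roughly the $50 billion figure discussed below.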

Thus, an extra $50 billion or so per year could help ensure that children everywhere have access to basic health care and schooling. The world’s governments have already adopted these two objectives – universal health care and universal quality education – in the new Sustainable Development Goals.

An extra $50 billion per year is not hard to find. One option targets my own country, the United States, which currently gives only around 0.17% of gross national income for development aid, or roughly one-quarter of the international target of 0.7% of GNI for development assistance.

Sweden, Denmark, Norway, the Netherlands, Luxembourg, and the United Kingdom each give at least 0.7% of GNI; the US can and should do so as well. If it did, that extra 0.53% of GNI would add roughly $90 billion per year of global funding.

The US currently spends around 5% of GDP, or roughly $900 billion per year, on military-related spending (for the Pentagon, the CIA, veterans, and others). It could and should transfer at least $90 billion of that to development aid. Such a shift in focus from war to development would greatly bolster US and global security; the recent US wars in North Africa and the Middle East have cost trillions of dollars and yet have weakened, not strengthened, national security.

A second option would tax the global rich, who often hide their money in tax havens in the Caribbean and elsewhere. Many of these tax havens are UK overseas territories. Most are closely connected with Wall Street and the City of London. The US and British governments have protected the tax havens mainly because the rich people who put their money there also put their money into campaign contributions or into hiring politicians’ family members.

The tax havens should be called upon to impose a small tax on their deposits, which total at least $21 trillion. The rich countries could enforce such a tax by threatening to cut off noncompliant havens’ access to global financial markets. Of course, the havens should also ensure transparency and crack down on tax evasion and corporate secrecy. Even a deposit tax as low as 0.25% per year on $21 trillion of deposits would raise around $50 billion per year.
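The deposit-levy arithmetic above is a one-line calculation, sketched here with the figures cited in the text:

```python
deposits = 21e12   # at least $21 trillion held in tax-haven deposits
tax_rate = 0.0025  # a deposit tax as low as 0.25 percent per year

# Annual revenue from the levy: rate times the deposit base.
revenue = deposits * tax_rate
print(f"Annual revenue: ${revenue / 1e9:.1f} billion")  # around $50 billion per year
```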

Both solutions would be feasible and relatively straightforward to implement. They would underpin the new global commitments contained in the SDGs. At the recent Astana Economic Forum, Kazakhstan’s President Nursultan Nazarbayev wisely called for some way to tax offshore deposits to fund global health and education. Other world leaders should rally to his call to action.

Our world is immensely wealthy and could easily finance a healthy start in life for every child on the planet through global funds for health and education. A small shift of funds from wasteful US military spending, or a very small levy on tax havens’ deposits – or similar measures to make the super-rich pay their way – could quickly and dramatically improve poor children’s life chances and make the world vastly fairer, safer, and more productive. There is no excuse for delay.

Raise Wages, Kill Jobs?

Since the passage of the Fair Labor Standards Act in 1938, business interests and conservative politicians have warned that raising the minimum wage would be ruinous. Even modest increases, they’ve asserted, will cause the U.S. economy to hemorrhage jobs, shutter businesses, reduce labor hours, and disproportionately cast young people, so-called low-skilled workers, and workers of color to the bread lines. As recently as this year, the same claims have been repeated, nearly verbatim.

“Raise wages, lose jobs,” the refrain goes.

If the claims of minimum-wage opponents are akin to saying “the sky is falling,” this report is an effort to check whether the sky did indeed fall. In this report, we examine the historical data relating to the 22 increases in the federal minimum wage between 1938 and 2009 to determine whether or not these claims—that if you raise wages, you will lose jobs—can be substantiated. We examine employment trends before and after minimum-wage increases, looking both at the overall labor market and at key indicator sectors that are most affected by minimum-wage increases. Rather than an academic study that seeks to measure causal effects using techniques such as regression analysis, this report assesses opponents’ claims about raising the minimum wage on their own terms by examining simple indicators and job trends.

The results were clear: these basic economic indicators show no correlation between federal minimum-wage increases and lower employment levels, even in the industries that are most impacted by higher minimum wages. To the contrary, in the substantial majority of instances (68 percent) overall employment increased after a federal minimum-wage increase. In the most substantially affected industries, the rates were even higher: in the leisure and hospitality sector employment rose 82 percent of the time following a federal wage increase, and in the retail sector it was 73 percent of the time. Moreover, the small minority of instances in which employment—either overall or in the indicator sectors—declined following a federal minimum-wage increase all occurred during periods of recession or near recession. That pattern strongly suggests that the few instances of such declines in employment are better explained by the overall national business cycle than by the minimum wage.

These employment trends after federal minimum-wage increases are not surprising, as they are in line with the findings of the substantial majority of modern minimum-wage research. As Goldman Sachs analysts recently noted, citing a state-of-the-art 2010 study by University of California economists that examined job-growth patterns across every border in the U.S. where one county had a higher wage than a neighboring county, “the economic literature has typically found no effect on employment” from recent U.S. minimum-wage increases.1 This report’s findings mirror decades of more sophisticated academic research, providing simple confirmation that opponents’ perennial predictions of job losses when minimum-wage increases are proposed are rooted in ideology, not evidence.


The federal minimum wage has been raised a total of 22 times since its enactment in 1938. The simplest way to assess the claim that raising the minimum wage costs jobs is to treat each minimum-wage increase as a distinct event and examine what happened to employment and other indicators one year later.

While opponents often broadly charge that raising the minimum wage “will cause job losses,” such increases disproportionately affect a select few employment sectors. The bulk of workers receiving raises as the result of minimum-wage increases are concentrated in a group of service industries—the two largest being restaurants and retail. For that reason, we examine employment trends, both overall and with a special focus on these indicator industries in which any adverse impact resulting from a higher minimum wage would most likely be evident.

Our findings are quite clear: in the nearly two dozen instances when the federal minimum wage has been increased, employment the following year has increased in the substantial majority of instances.

This pattern of increased job growth following minimum-wage increases holds true both for general labor-market indicators and for those of industries heavily affected by minimum-wage increases:

  • In the 22 instances when the federal minimum wage went up, the change in total private employment after one year was positive 15 out of 22 times (68.2 percent).
  • In the 16 instances when the federal minimum wage was increased since 1964 (the earliest year for which this data is available), total hours worked increased 10 out of 16 times (62.5 percent).
  • In the leisure and hospitality sector, which includes restaurants, hotels, and amusement parks, employment rose one year after a minimum-wage increase 18 out of 22 times (81.8 percent).
  • In retail employment, positive changes occurred 72.7 percent of the time after an increase.
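The tallying method behind these figures can be sketched in a few lines: for each federal minimum-wage increase, record whether employment was higher one year later, then report the share of positive outcomes. (The True/False series below is purely illustrative, not the report's actual data; only the 15-of-22 total matches the figure cited above.)

```python
# Illustrative outcomes for the 22 federal minimum-wage increases:
# True = total private employment was higher one year after the increase.
employment_rose = [True, True, False, True, True, False, True, True,
                   True, False, True, True, False, True, True, False,
                   True, True, False, True, False, True]

# Count the positive outcomes and express them as a share of all increases.
positive = sum(employment_rose)
share = positive / len(employment_rose)
print(f"{positive} of {len(employment_rose)} increases were followed "
      f"by job growth ({share:.1%})")
```

The same tally, restricted to the leisure-and-hospitality or retail series, yields the sector-specific percentages listed above.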

What’s more, looking more closely at the relative handful of instances in which employment decreased—whether total employment, or employment in our key indicator sectors—it is also clear that those declines were likely driven by factors other than the higher minimum wage.

Specifically, in five out of eight instances where either total or industry-specific employment took a negative turn during the one-year period following a minimum-wage increase, the employment decreases happened during periods when the U.S. economy was officially in recession.

In the remaining three instances, a recession was just around the corner (the decline shown during the period of March 1956-1957 would be swiftly followed by the Recession of 1958, during which time the U.S. automotive industry saw its worst year since World War II), or the economy was still recovering from a recessionary period (decreases in both the April 1991-1992 and July 2009-2010 periods occurred just months after a recessionary period had technically ended, but the economy was still feeling the effects).

In each of the relatively few instances in which employment declined following a federal minimum-wage increase, the economy was either in recession or near a recession. In all other instances, employment grew after the federal minimum wage went up. These patterns show that federal minimum-wage increases have not correlated with reductions in jobs, and in the few instances where they have, the decline was better explained by the business cycle than by the minimum wage.

This is not to say that individual firms may not change their employment decisions in response to higher minimum wages. But it shows that in the aggregate, across firms and across the economy, there is no pattern of reduced employment when the federal minimum wage goes up—and the few instances when employment has gone down after wage increases have been during recessions or near recessions—circumstances that much more plausibly explain the observed employment reductions than the modest minimum-wage increases.


For decades, minimum-wage opponents have been doom-saying about the likely impact of higher wages on the economy. But a review of the best evidence makes clear that their predictions have not been borne out by real-world results. Our analysis of simple job-growth data—both economy-wide and in the industries most affected by higher minimum wages—shows that there is no correlation between minimum-wage increases and reduced employment levels. As those results mirror the findings of decades of more sophisticated academic research, they provide simple confirmation that opponents’ perennial predictions of job losses are rooted in ideology, not evidence.