Monday, March 30, 2015

Mandatory Voting: What's Obama Thinking?

America: land of the free and home of a slim majority that bothers to vote. In a country with a two-party system and an increasingly large number of people who can hardly see a significant difference between the two parties, is it any surprise that voter turnout in 2014 was the lowest since WWII? Not quite two weeks ago, Obama had a brilliant idea to counter voter apathy: mandatory voting. With mandatory voting, the thinking goes, we could have a composition of politicians more representative of the people's interests, thereby creating a "consent of the people." Is this really the case? Would it ultimately matter whether voting were compulsory or not?

We live in a world in which 22 countries currently have mandatory voting laws, which is to say that you either vote or face a penalty. Most of these countries, including Egypt, the Congo, and Greece, are hardly ones whose public policy I would want to emulate. (As a side note, the difference in participation rates between countries with such laws and countries without them is not as big as one would think.) A right to vote implies a right not to vote. If you changed "right to vote" to "freedom of religion" and mandated that everyone have a religion, wouldn't that create issues, especially in a free society that bases its mores on lower-case-"d" democratic values? If soldiers fought for our freedoms, then embedded within those freedoms is the right to choose whether or not to vote. A true sense of civic participation is based on voluntarism, not conscription. This is equally valid when we consider that some might abstain from voting not simply out of apathy, but because work, health, or the costs of traveling to the polling booth legitimately get in the way.

But let's sidestep the philosophical issues of mandatory voting for a second. Is it a good idea for better governance? I would have to contend in the negative. One of the main issues is that of voter ignorance. Political scientists have found that those who do not vote are more likely to be ignorant of the most basic facts about politics, such as the name of the current president, vice president, or their Congressman. It's not just an issue of who the players are or what they have accomplished (or in many cases, not accomplished). It's that people without a basic grasp of economics, sociology, or public policy cannot make informed decisions. How does their forced input, along with the donkey votes, random votes, protest votes, and abstentions, improve the political system? And do we really think that mandatory voting is going to get voters to become more informed, especially since the statistical likelihood of any one vote actually making a difference is next to nil?

This brings me to an alternative theory about the impact of mandatory voting. Setting aside whatever impact mandatory voting could theoretically have on the influence of money in elections (although I would argue it would amplify that influence, since campaigning becomes more important when the ignorant, who are easily swayed, are forced to vote), there is a more-than-distinct possibility that mandatory voting would do little to nothing to change outcomes in the aggregate. By simulating mandatory voting scenarios, political scientists like John Sides have found that the outcome of very few elections would actually change. If this theory holds and mandatory voting would do essentially nothing to change outcomes or even mitigate the voter ignorance I mentioned earlier, then why take on the enforcement costs, administrative costs, or the social cost of creating a less free society?
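
To see what I mean, here is a toy Monte Carlo sketch of the sort of simulation described above. To be clear, this is my own illustration with made-up turnout and preference numbers, not John Sides' actual model; the point is simply that when nonvoters lean only slightly differently from voters, forcing them to the polls rarely flips a winner.

```python
import random

# Toy Monte Carlo of a two-party race (hypothetical numbers; an
# illustration of the kind of simulation described above, NOT John
# Sides' actual model). Existing voters lean 52-48 toward party A;
# nonvoters lean 49-51. How often does full turnout flip the winner?

def race_flips(n_voters=10_000, n_nonvoters=8_000,
               p_voter=0.52, p_nonvoter=0.49):
    a_voluntary = sum(random.random() < p_voter for _ in range(n_voters))
    a_added = sum(random.random() < p_nonvoter for _ in range(n_nonvoters))
    wins_voluntary = 2 * a_voluntary > n_voters
    wins_mandatory = 2 * (a_voluntary + a_added) > n_voters + n_nonvoters
    return wins_voluntary != wins_mandatory

random.seed(0)
trials = 2_000
flips = sum(race_flips() for _ in range(trials))
print(f"Winner changed in {flips} of {trials} simulated races")
```

Run it and the winner changes in only a small fraction of the simulated races, even though turnout jumps by 80 percent under the toy numbers.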

Mandatory voting is bad policy, an insult to voters, and an idea that runs counter to the principles of freedom and liberty upon which this country was founded. Low turnout is representative of the political will in this country. If you want people to be more engaged in the political process, civic education and voter registration modernization, along with government transparency and accountability, would go a long way toward making the populace less cynical about it. I am glad that Obama's idea was merely a hypothetical and not one that he would actualize with an executive order.

Friday, March 27, 2015

Death to the Estate Tax!

Death and taxes are two certainties in life, yet the government manages to combine the two in the estate tax, colloquially known as the death tax. This tax was only meant to be a temporary measure back in 1916 to raise revenue for WWI. However, once a government entity, regulation, or tax is in place, it ends up being difficult, if not nigh impossible, to reverse. We might be seeing an exception here because the estate tax could very well end up six feet under, especially after the House Ways and Means Committee met this week to consider passing H.R. 1105, the Death Tax Repeal Act of 2015. It would be interesting to speculate on whether it will pass, but I'm more interested in going into why the estate tax should be repealed once and for all.

I could say that we should get rid of the estate tax simply because it's a tax. However, I am a consequentialist libertarian. As soon as people start interacting with each other, the need for government becomes inevitable. To quote Thomas Paine from Common Sense, "government, even in its best state, is but a necessary evil." That being said, in order to function, the government needs a revenue base. Since there is going to be a government, we might as well call for the most economically efficient taxes out there, and I can tell you that the estate tax does not qualify.

Before delving into why the estate tax makes for bad economics, a bit of background on the estate tax, particularly why it's also called the death tax. When an individual passes away, the government levies a tax on the estate before the heirs split the inheritance. Under United States tax law, an estate tax return is required upon death. If the total assets exceed the tax exemption, which was $5.25 million in 2013, then the portion of the estate's value above that exemption is taxed at a top rate of 40 percent.
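
As a rough sketch of those mechanics (a simplification that treats the 40 percent top rate as flat above the exemption and ignores the graduated brackets, deductions, and credits of the actual IRS computation):

```python
# Simplified sketch of the estate tax mechanics described above, using
# the 2013 exemption of $5.25M and treating the 40 percent top rate as
# flat above the exemption (the real calculation involves graduated
# brackets, deductions, and credits).
EXEMPTION = 5_250_000
TOP_RATE = 0.40

def estate_tax(gross_estate: float) -> float:
    """Tax owed on the portion of an estate above the exemption."""
    return max(0.0, gross_estate - EXEMPTION) * TOP_RATE

print(estate_tax(7_000_000))  # 700000.0 -- tax owed on a $7M estate
print(estate_tax(4_000_000))  # 0.0 -- below the exemption, nothing owed
```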

First, it is hardly fair that even in death, the government can't leave people in peace. Shouldn't we respect the wishes of the individual who just passed instead of finding a way to fill up government coffers? Life is hardly fair, that much I know. But does the tax really fill up government coffers that much? Not really. The estate tax only generated 0.6 percent of 2014 federal tax revenue (a slight increase over the 0.46 percent in 2013). With $3.021T in federal tax revenue, that comes out to roughly $18B generated by the estate tax in 2014. To be fair, part of the historical decline in estate tax revenues can be attributed to the increase in the exemption and the fact that the statutory estate tax rate used to be 55 percent.
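
Checking the arithmetic in code (the 0.6 percent share is itself rounded, hence "roughly"):

```python
# Quick arithmetic check of the revenue figure above.
total_revenue = 3.021e12   # 2014 federal tax revenue, in dollars
estate_share = 0.006       # the estate tax's 0.6 percent share
print(total_revenue * estate_share / 1e9)  # ~18.1 (billions of dollars)
```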

Even if we are to concede those points, it still doesn't negate the fact that the estate tax both creates a tax incidence on savers [as opposed to those who spend immediately] and primarily acts as a levy on the domestic capital stock, the very thing that generates wealth in the first place. Perhaps that is why countries like Hong Kong, Norway, Sweden, and Russia have eliminated their estate taxes, and why many countries do not use the estate tax as a source of revenue in the first place. But what about the notion that the estate tax doesn't affect most people? As the Left-leaning Center on Budget and Policy Priorities points out, only 2 out of 1,000 estates will actually owe money under the estate tax. That might sound like it doesn't affect the other 998, but that's not the case because there are still compliance costs. And much as would be the case with a wealth tax, assessing the value of assets is difficult, time-consuming, and expensive. Statistically speaking, the IRS is going on a wild goose chase, which is a waste of taxpayer dollars and a boon for estate-tax lawyers and life insurance companies. As a side note, the estate tax is not as much of a "tax on the super-wealthy" as one would think because people like Bill Gates can tie that money up in a tax-exempt foundation, which puts a damper on the "tax progressiveness" argument.

Wealth passed on from one generation to the next is one of the primary impetuses of economic growth. Stunting that growth in the name of tax revenue (a feeble justification to begin with, since we should be more concerned with the deficits driving the debt) or of reducing inequality (also ridiculous [e.g., Cagetti and de Nardi, 2007], given how little revenue it actually generates, not to mention that most millionaires created their own wealth) is fiscally irresponsible, to say the least. The Tax Foundation ran a model showing that repealing the estate tax would increase the capital stock by 1.68 percent and GDP by 0.58 percent per annum, which would provide a small but positive boost to the economy and federal government revenues. The Heritage Foundation also found that repeal would create a $46B boost to the economy over the next decade. While I don't think the estate tax is as heinous as the corporate tax, I still think it should have gone six feet under ages ago.

Monday, March 23, 2015

Parsha Tzav: How to Address Others' Misdeeds

There are some who can't help but derive pleasure from the misfortune of others, in what psychologists call Schadenfreude. Not only that, there are also some who try to one-up others by making others' misdeeds apparent while deflecting their own. After all, it's much easier to point out everyone else's faults than one's own. What does this have to do with this week's Torah portion?

While describing the sacrificial system, G-d gives the following directive in the Torah: 

And G-d spoke to Moses saying: "Speak to Aaron and his sons, saying: 'This is the law of the sin offering. In the place where the burnt offering is slaughtered shall the sin offering be slaughtered before G-d; it is most holy.'" -Leviticus 6:17-18

The Jerusalem Talmud (Yevamot 8:3) states that the sin offering and the burnt offering were conducted in the same place in order to save sinners from embarrassment. With this policy in place, one would not know whether an individual was bringing a donation in the form of a burnt offering or an actual sin offering. The aforementioned Levitical text ends up being used as a prooftext for the Sages to institute the halacha that prayer, particularly when one's sins are confessed, be recited silently, since doing so avoids embarrassment for those who wish to confess their sins to G-d during prayer (Talmud, Sotah 32b). What R. Zelig Pliskin teaches in his book, Love Your Neighbor, is that the verse ultimately illustrates the lesson that "we must be very careful not to cause someone embarrassment or discomfort because of past misdeeds" (p. 228). 

Perhaps this sage advice is incorrect. It is possible that airing someone's dirty laundry in public would provide an incentive to behave better in the future. Using the stick could very well work better than the carrot. However, not only is this assumption un-Jewish in nature, but it also ends up being a counterproductive approach. 

Ego makes it difficult enough for one to admit one's shortcomings, never mind actually do something to overcome them. If one is operating in a system that publicly humiliates people for erring, the most probable outcome is that one becomes even more prone to hiding one's wrongdoings instead of confronting them. This perverse incentive would create a community of façades and moral devolution. If publicly embarrassing others is not the way to go, then what is?

I think the approach needs to be twofold. The first has to do with mentality. "There is no individual on earth so righteous that he doesn't sin (Ecclesiastes 7:20)." As tempting as it is to think that we can perform optimally 100 percent of the time, the truth is that we all have our off days. Whether intentionally or not, we all make mistakes. Pirkei Avot gives sound advice for approaching others' misdeeds, such as "judge the entirety of a person in their merit (1:6)" and "don't judge someone until you have been in their place (2:4)." To quote King Solomon again, "a just person may fall seven times and rise (Proverbs 24:16)."

If we are to love our neighbor as we love ourselves (Leviticus 19:18), we would want individuals to treat us in a loving manner with regard to our mistakes, which brings me to my second point. Rebuke is a necessary component of love, which is why it is juxtaposed with love (Leviticus 19:17). "Love not accompanied by criticism is not really love (Genesis Rabbah 54:3)." Even so, there is a certain way of going about it. First, make sure that you're not guilty of the same wrongdoing. As the Talmud opines, "Correct yourself before correcting others (Bava Batra 60a-60b; Bava Metzia 107b)." Even if you are not guilty of the same wrongdoing, there is something to be said for rebuking lovingly, patiently, and in private (Rambam, Hilchot Da'ot, 6:7), all of which increase the probability that the individual will change their behavior. 

While it's true that the sacrificial system is not currently in practice, it can still teach us basic, yet important, lessons about the human condition. With the right type of encouragement, we can both preserve human dignity and continue to foster the very sort of constructive character development that was one of the primary intents of the sacrificial system all those years ago.

Sunday, March 22, 2015

The Blurred Line With Intellectual Property Rights in Music: Do We Need Copyright Law Reform?

Copyright law and intellectual property rights can be ambiguous territory because of the abstract nature of intellectual property. This idea played out in the recent ruling that Robin Thicke's 2013 song, Blurred Lines, infringed on the copyright of Marvin Gaye's Got to Give It Up. When listening to the two songs, there are certain similarities, such as the tempo, the cowbell, and the falsetto. Regardless of how one feels about Robin Thicke as an artist, we still have to ask ourselves whether the ruling was proper or whether the court system has gone too far in the other direction.

First, let's define copyright. Copyright is a legal right that grants the owner of immaterial property, such as music, the exclusive rights to its usage, sale, and distribution. In United States law, the call for such protection exists under Article I, Section 8, Clause 8 of the Constitution. The reason copyright laws exist in the first place is that those who make their living off of such abstract, immaterial goods and services need a way to protect that livelihood. Otherwise, there is a disincentive for such individuals to produce artistic work, especially since developing technology makes it easier to copy innovations than in the past. A world without creative and intellectual work would be a deprived world, and we should make sure that we have a system that fosters such work. However, I do worry, both because the problem could be overstated (derivative work is an imperfect substitute) and because of the rent-seeking that creates inefficiencies and unintended consequences (e.g., Buccafusco and Heald, 2012).

The Marvin Gaye ruling is problematic because it was not even based on plagiarized lyrics or melodies. It was passed because the two songs had a similar feel. If borrowing certain elements from previously recorded songs or taking inspiration from other musicians is now legally actionable, I would hate to see what this will do to the music industry. What worries me the most is that it will open the floodgates of litigation for anyone who feels like suing another musician simply because their latest hit sounds like that of another artist. Much like I explained with software patents a few months ago, there is a legitimate worry that a competitive market might be stifled if well-off individuals or entities can threaten up-and-coming stars with litigation.

How does one protect intellectual property rights without discouraging new works? Good places to start would include:

  1. Decreasing the length of copyright terms.
  2. Encouraging added-value industries by relaxing copyright laws.
  3. Creating clearer and more comprehensive language in the laws surrounding Fair Use and copyright in general.
  4. Punishing those who make false copyright claims.
  5. Allowing for the usage of orphan works.
  6. Implementing private ordering.
  7. Creating centralized databases to reduce transaction costs.
  8. Reforming tort law to reduce statutory fines and the incentive toward litigious behavior.

Whatever reform takes place after this ruling, I hope it's in favor of a flourishing marketplace.

Tuesday, March 17, 2015

Criticism of Islam Is Not Islamophobic: Why We Should Criticize Islam [and Everything Else]

Some things truly bear repeating. That was the reminder I received from Reason Magazine today when it published an article entitled "Stop Smearing Critics of Islam as Islamophobes." The author started off by pointing out that the Islamic Human Rights Commission labeled the murdered Charlie Hebdo staff "Islamophobe of the Year," which is tomfoolery. That introduction led into an illustration of how we have reached the point of pathologizing dissent. You know what? They're right. We have reached a low in which "polite society" means that we cannot risk offending others, particularly if they are of the Islamic faith.

If the Charlie Hebdo attacks back in January taught us anything, it's the importance of free speech. As I brought up shortly after the attacks, society benefits from freedom of speech because it enables good governance, a sense of self-empowerment, and a more educated populace, all of which help reduce poverty. If freedom of speech is so good for society, then what do practitioners or sympathizers of Islam have to fear?

Answer: not wanting to deal with legitimate concerns about Islam and the way it is practiced. There is a world of difference between actual Islamophobia and bringing up valid criticisms of Islam and Sharia law. I actually pointed out this distinction a few months ago in a previous blog entry. Islamophobia consists of an irrational fear of Islam, rooted in a dislike of the unknown or of anything different from oneself. With Islamophobia, it's about hatred and/or ignorance, plain and simple. Aside from being part of a historically persecuted religious minority, I also have a wide diversity of friends, whether the difference is in religion, political ideology, race, gender, sexual orientation, or ethnicity. I have been exposed to a wide variety of people and their experiences, and as such, I don't suffer from anything remotely resembling Islamophobia. What I can tell you, though, is that I have my valid, well-supported criticisms of Islam.

And why should Islam get a free pass in the criticism department? What makes Islam so special that it is beyond reproach? Personally speaking, I have criticized Christianity (see here and here). I have even criticized my own religion for some of the ridiculousness that goes on within Judaism (e.g., here, here, here, here, and here). Look at my blog, and you'll see that there's plenty of criticism to go around. Bad ideas are bad ideas, regardless of their origin.

As long as it's not excessive, criticism is a healthy thing. Criticism is what keeps people honest. It's what makes people aware of shortcomings in the hopes that they will be fixed. It's no accident that the biblical verse about rebuking your neighbor (Leviticus 19:17) comes shortly before the famous verse about loving your neighbor (Leviticus 19:18). Criticism helps us grow, and it is what keeps the marketplace of ideas robust. If it is to survive in that marketplace, Islam should be able to stand on its own two feet and fend off criticism of its theology, its culture, or how its practitioners act in Allah's name. No one should be exempt from criticism, which is precisely the point I am trying to make.

Anyone who is intellectually mature will address experienced shortcomings head-on. Practitioners and sympathizers of Islam should be able to explain why women, homosexuals, apostates, and non-Muslims in general have been and still are treated poorly under Islamic rule, as well as address other shortcomings within the Islamic religion. We shouldn't be shielding Islam from criticism simply by calling the other side racist, bigoted, ignorant, or spiteful. If one uses that sort of ad hominem attack to exempt Islam from criticism, how is one all that different from what one accuses the critics of being? Instead of succumbing to hypocrisy, we should foster honest debate in which we can discuss the finer points of Islam. That would be the sign of a well-developed, mature society. Anything less makes me wonder what practitioners and sympathizers of Islam are afraid to confront.

Friday, March 13, 2015

Robots, Automation, and Computerization Will Kill Some Jobs, and That's Not a Bad Thing

"They took our jobs." The takeover is not coming from those crossing the U.S.-Mexican border. Apparently, it's the robots that are taking over human jobs. It sounds like something from a science fiction film, does it not? Apparently, some think it's only a matter of time before robots will be performing the vast majority of jobs that us humans once did. This neo-Luddite way of thinking assumes that the exponential increase in digital technology is causing our labor market woes, woes which will only get worse over time. One of the features that makes it such an alluring argument is its plausibility. If they can make robots that can replace fast food workers because fast food workers asking for a $15 per hour minimum wage backfired, then it should only be a matter of time before they make robots to replace everyone else, right? If we keep automating our jobs, it will lead to structural unemployment and greater income inequality, or so goes the argument.

The idea of technological unemployment goes back to the Luddites, a group of 19th-century textile artisans who feared that the technological development of automated looms would cause massive unemployment. Both Karl Marx and John Maynard Keynes picked up on the idea of technological unemployment. The Government Accountability Office (GAO) wrote a report on it back in 1982. Books and articles have been written about how we'll lose our jobs to machines. Yet we've had disruptive technology before. The cotton gin, the printing press, the automobile, the dishwasher, the Internet: these are all examples of technologies that disrupted labor markets in some way, shape, or form. How did the Luddites end up faring? How does anyone whose job is potentially or actually affected by technological development fare?

They were somewhat right, but it's a phenomenon similar to what we see with immigration or outsourcing: the layoffs and the resulting short-term job displacement (read: short-term pain for longer-term gain) are only part of the equation when considering the effects on labor markets. Not only did new jobs develop in the textile industry as a result of automated looms, but society as a whole prospered by having access to better, cheaper clothing. This is the sort of trend that has historically taken place with new technology. While some individuals in the horse-and-buggy industry lost their jobs in the short run because of the automobile industry, you're going to find very few individuals who will honestly say that the invention of the automobile has been an overall loss.

Plus, if you want to decry unemployment, look at the past century of unemployment history. We don't see any significant changes in the unemployment rate that would suggest massive, long-term, structural unemployment. There is no evidence that technology has caused massive unemployment or even the shift in income inequality (also read this report). Any massive shifts in unemployment in the past century-plus have been caused by financial crises and/or government intervention. If history is an indicator of anything, it's that the number of people employed overall will increase because short-term job displacement only leads to labor market pattern shifts in the long run.

But maybe this time, the effects of automation will be different. Maybe there's too much of a good thing. Perhaps we are in a stage in which massive acceleration in technological development makes the past a useless indicator for how labor markets will fare. It's not the first time this argument has been used, but the New York Times actually brings up a good point as to why it might be more difficult this time around:

Through much of the 20th century, workers moved out of agriculture and into manufacturing jobs. A high school diploma and a basic willingness to work were often enough, at least for white men, because the technologies of those times often relied on accompanying manual labor.

We cannot deny the appeal behind robots. Faster, cheaper, and better inputs translate into overall improved quality of production, all of which is good for the consumer. This technology makes for a better customer experience, which creates new demand. There are certain advantages for the employer, as well. Robots don't need to be trained. There are no issues of overtime, on-the-job theft, or health concerns with a robot. The closest a robot gets to being fired is being decommissioned. Plus, robots are a good way of avoiding issues with minimum wage, liability (read: expensive litigation costs), and anti-discrimination laws. If we want human labor to maintain the labor force participation rate that currently exists (or even push for a higher rate), we need policies that encourage the hiring of human labor.

The argument goes that this round of automation is scarier than past rounds because the decoupling between productivity and employment is not just going to affect low-skilled labor; it's going to affect everyone. An Oxford study (Frey and Osborne, 2013) estimated that as much as 47 percent of U.S. jobs could be at high risk of automation in the next decade or two. Frey and Osborne also wrote a more recent report for Citi to the same effect. However, not everyone is convinced. Some think that the upcoming automation won't be any more disruptive than past technologies (Gordon, 2014; Autor, 2014; Miller and Atkinson, 2013) due to complementarity.

Similar to economic modeling or climate change modeling, certain prognosticators saying that "we are at risk" isn't the same thing as "it's going to happen." Robots and computers will undoubtedly play a part in the future of labor markets. The acceleration also presents unique challenges that our ancestors did not have to contend with when it came to technological change. If I had to make an educated guess, I would opine that some sectors are going to be more heavily affected than others. As research from the Brookings Institution points out (Kearney et al., p. 4), routine acts will be easy to automate. Jobs with non-routine physical movement or abstract tasks, on the other hand, will be more difficult for robots to perform. Language recognition and in-person interaction will also prove elusive for robots. Empathy, communication, problem-solving, persuasion, and creativity are all skills that very well might never be mastered by robots (Autor, 2014). The New York Federal Reserve shows how manual and routine jobs have already been on the decline since 1975, whereas cognitive and non-routine jobs have been on the rise.

The question here is not whether certain sectors will experience automation [because it's already happening], but how employees, employers, and governments respond and adapt to the change that will shape the course of the future. If you push for a $15-per-hour minimum wage at fast food restaurants, it's only going to incentivize McDonald's to automate sooner. If you're worried about corporations making wicked profits off of robots, then we need to take another look at intellectual property rights and patents because they can be just another tool for crony capitalism to flourish. We need an education system that doesn't just teach critical thinking and interpersonal skills; we need to teach skills that people can use in the workforce instead of peddling the idea that "everyone should pursue their dream." What's more, we need a workforce that can continuously develop skills for an increasingly dynamic job market (Kearney et al., p. 5). We also need to focus on policies that will encourage entrepreneurship and business dynamism (ibid., p. 6). To accomplish such dynamism, we need tax reform that encourages competition (e.g., removing the corporate tax and the employer-sponsored health insurance exclusion), as well as deregulation in areas that don't need regulation (e.g., occupational licensing).

Yes, there are going to need to be some drastic changes in how we think about job creation and job training. At the end of the day, the fear of losing your job to a machine could very well be nothing more than a case of the Frankenstein complex.


2-27-2016 Addendum: Here is a well-thought-out list of five reasons we should not be worried about artificial intelligence.

6-6-2016 Addendum: The Organization for Economic Cooperation and Development (OECD) published a report last month on automation. While most people will experience the automation of certain facets of their jobs, only 9 percent of U.S. jobs will be fully automated. Rather than worry about robots taking over the labor market, it's more a matter of how to use robots to help us create jobs that require less automation. Consulting firm McKinsey also estimated last November that only 5 percent of jobs will be fully automated.

12-17-2017 Addendum: This article covers two myths in the automation debate: 1) productivity growth is nowhere near high enough for automation to be problematic, and 2) job destruction by automation is not that high because automation will more likely change the nature of a job instead of outright eliminating it.

Monday, March 9, 2015

Is There a Certain Futility to Pursuing Multiculturalism?

"Can't we all just get along?" The world would be a better place if everyone pursued peace and emphasized their similarities over their differences, but all the wars, conflicts, and ethnic nationalism and chauvinism throughout history show otherwise. Being more multicultural, whether in public policy, business protocol, or in our personal lives, has become more and more prevalent over the past century. I recently came across an article at the Council on Foreign Relations talking about the failures of multiculturalism. According to author Kenan Malik, "everywhere, the overarching consequences have been the same: fragmented societies, alienated minorities, and resentful citizens." Although most of what I am going to write here is just a stream of consciousness [that could very well use some fine-tuning down the road], it's going to be revolving around two questions: 1) Has multiculturalism been that much of a failure? 2) Is there a point in even pursuing multiculturalism?

The term "multiculturalism" can be quite nuanced. It can be as simple as the co-existence of individuals of diverse cultures within a given society, which is a good thing because a free society needs to be one in which can tolerate others' differences. The definition can be as complicated as various public policies that encourage a more ethnically and racially diverse society, such as immigration or labor policy. Governmental approaches to multiculturalism can vary, as Malik points out. The United Kingdom gives various ethnic communities an equal opportunity to participate in the political process. Germany encourages immigrants to live their separate lives without even pursuing citizenship or even attempting to integrate. France simply prefers assimilationist policies over multicultural ones because France has a strong sense of nationalism. The United States has a more open, integrative approach to multiculturalism, although if you look at its immigration history, it took a while to get there.

Having a multicultural frame of mind assumes that an individual's cultural background has the potential to frame one's identity and path. There is something to be said for maintaining a sense of cultural identity. Although not quite the same, I maintain my Jewish identity alongside my identity as a citizen of the United States of America. To be an American simply means to be a citizen of the USA, whether by being born here or by going through the naturalization process. In France, the standards are much higher. You pretty much have to have Christian, French ancestry dating back centuries. Otherwise, you're not considered "truly French." Jews who have French ancestry aren't considered truly French, so why would Muslim émigrés be? Regardless of the extent of nationalism in a country, nation-states have at least some cohesive element that forms a common identity, which is inevitable when people interact with one another in a society, particularly a nation-state.

Societal definitions of nationalism are also tied into the extent to which far-Right, anti-immigrant parties have clout in a given country. Ultra-nationalism never did any favors for pluralism, that much I can tell you. Race and religion play a role, but so does language. If you cannot speak the language, you already have a barrier to fully participating in society. Economic disparities and willingness to participate in society affect an immigrant's ability to integrate.

Also, there is a difference between integrating and assimilating. To assimilate means to shed one's previous culture in order to take on the culture of the country in which one currently resides. To integrate means to maintain some or all aspects of one's culture while still finding a way to function and participate in society. If you separate out your immigrant or minority population, as in Germany or Britain, then there is discontent and discord. For society to work optimally, these individuals need to be integrated so they can be active members of society.

A big issue with multiculturalism is the assumption that all cultures are equally valid, which leads to cultural relativism. This is a problem when certain cultures thrive on intolerance or like to infringe on others' lives. How could we object to honor killings, anti-Semitism, or infant sacrifice if we adopted the multiculturalist viewpoint? Plus, multiculturalism quashes individualism. Chinese culture, for instance, isn't monolithic. Europeans aren't all the same. Even members of a given nationality or group of people (e.g., women, homosexuals, libertarians, Jews) don't all think the same way.

I don't think having a pluralistic society in which individuals of different races, religions, sexual orientations, political views, and genders coexist is a bad idea. It's actually a great idea. Diversity helps advance society. However, when certain government policies get in the way and push a view that all cultures and views are equally valid, I have a problem with that. That debate should take place in the marketplace of ideas, not be forced by government decree. You can try to legislate tolerance, but acceptance is a whole different issue, and you know what? Multiculturalism tends to breed even more resentment. Pluralism breeds goodwill. I know it's not politically correct to say that, but someone has to point out yet another example of "good intentions, bad results."

Sunday, March 8, 2015

Parsha Ki Tisa: Shabbat as an Antidote to Idolatry

The following are thoughts that I had this past Shabbat on the past week's Torah portion of Ki Tisa.

I am sure that many rabbis this past Shabbat gave a d'var Torah on the Golden Calf incident. Considering that it's the highlight of the parshah, it makes sense. However, before the Golden Calf incident, G-d gives Moses instructions on constructing the Tabernacle. The beginning of the parshah (Exodus 30:16-31:17) is actually devoted to that theme. What is interesting is that these instructions conclude with the importance of observing Shabbat. You can see the Hebrew here [or here], but an English rendition would be as follows:

"And the L-rd said to Moses: Speak to the Israelite people and say, 'Keep my Sabbaths because it is a sign between Me and you throughout the ages, that you may know that I, G-d, have consecrated you. You shall keep the Sabbath because it is holy for you. Anyone who does work on the Sabbath shall be put to death; whoever does work on it shall be cut off from his kinfolk. Six days may work be done, but on the seventh day there shall be a day of complete rest, holy to G-d; whoever does work on the Sabbath shall be put to death. The Israelites shall keep the Sabbath, to observe it throughout the generations as an eternal covenant. It shall be a sigh for all time between Me and the people Israel. For in six days, G-d made the heavens and earth, and on the seventh day, He ceased from work and was refreshed." -Exodus 31:12-31:17

After that, there is a one-verse coda (Exodus 31:18) in which G-d gives Moses the two tablets. Normally, I wouldn't throw such a long passage out there, but I had to quote it in full in order to ask the following questions. Why was this decree a part of the instructions for the Tabernacle? Even more important, this passage is juxtaposed with the Golden Calf incident. Given that juxtaposition is a standard hermeneutical tool in Jewish interpretation, it begs the question: why did G-d juxtapose Shabbat with idolatry? In all sincerity, G-d had just handed Moses two tablets, one of which said "Remember the Sabbath." Plus, G-d was with Moses for forty days up at Mount Sinai. There has to be some reason why this particular decree was put in the text, so what is the connection between Shabbat and idolatry? 

First, it would be prudent to ask what the significance behind Shabbat is. The Exodus passage [above] points to the Creation story in which G-d created for six days and desisted from creation on the seventh day. In Deuteronomy 5:15, we are given a second explanation, which is that the Israelites were once slaves in the land of Egypt and that G-d freed them. We have both the Creation and Exodus narratives as bases for Shabbat observance. Even with this, the connection between Shabbat and idolatry is still unclear, so let's get at idolatry for a bit.

What is idolatry? In the case of the Golden Calf, it was prostrating oneself before a statue and worshiping a physical object either as a representation of a deity or actually as a deity. However, worshiping idols is about more than statues. We can worship money, becoming workaholics or obsessing over material consumerism. Pursuing physical pleasure in a hedonistic fashion is another form of idolatry. So is pursuing glory for the sake of one's ego. Idolatry can take many forms, even in the 21st century.

To tie Shabbat and idolatry together: a lack of Shabbat, at least for a Jew, is a form of idolatry. G-d gave the Jewish people Shabbat. Even partaking in creative acts on Shabbat (creation being closer to the Jewish definition of "work") is a form of idolatry because it shows that we, as humans, need to always remain in control. On Shabbat, we relinquish our need to control, our need to do, and instead, we just are. As R. Aryeh Kaplan brings up, "Man's act of asserting his dominance over nature makes him a slave to it. All week long, man is ruled by his need to dominate the world....but somehow, his most basic humanity is submerged by his occupation. On the Sabbath, all this is changed. Every man is a king [and every woman a queen], ruling his own destiny..."

Part of the gift of Shabbat was to make sure that we did not become our occupations because when we work all the time, that's exactly what happens. It's all too tempting to keep going and say, "I'm never tired." Even if you don't realize it, having an established day of rest actually nurtures both the body and the spirit. While some think it's slavery to not be able to use their phones or do work for a day, it's a far bigger form of slavery to think that you can't do without them for a day.

Don't take this as "anti-work" or as a spiel to justify a non-productive life. The passage in Exodus 31 still very much says "you shall work for six days." When it's not Shabbat, Jews are meant to be hard-working and productive. The point is that G-d has provided the Jewish people with a sense of work-life balance, and Jews should take Him up on that offer.

Thursday, March 5, 2015

King v. Burwell: How to Get Ready to Replace Obamacare

Yesterday, the Supreme Court heard the oral arguments in King v. Burwell, a case that could very well undo most of the Obamacare subsidies. Section 36B of the Affordable Care Act (ACA), also known as Obamacare, states that subsidies for health insurance under the ACA are acquirable "through an Exchange established by the State [emphasis added] under section 1311 of the Patient Protection and Affordable Care Act." The unambiguous nature of these words could mean that the IRS overstepped its bounds by taxing individuals without congressional authorization, thereby rendering the subsidies issued through the federally run exchanges, in states that opted out of running their own, illegal. This would mean that about 6.5 million people would lose the subsidies that allow them to have health insurance, which would undoubtedly stymie Obamacare. (As a side note, citing the rate of those insured is a red herring because being insured is not the same thing as having readily available access to high-quality health care.) Although many are already speculating on the ruling, we won't know for about another three months how this will all play out. In the meantime, we have to consider the real possibility that many who are currently receiving health subsidies through the federally run exchanges would no longer be able to do so. Rather than let these individuals fall by the wayside, something should be enacted to replace what was technically illegal for the IRS to do in the first place. But what should that something be?

I'm no Republican, but the myth that there are no alternatives out there needs to stop being perpetuated. The Burr-Upton-Hatch plan was revealed only last month. There are also the Empowering Patients First Act and the American Health Care Reform Act, proposed in 2009 and 2013, respectively. Even Senator Ted Cruz proposed the Health Care Choice Act a couple of days ago. These plans could very well have their flaws, but we should stop buying into the idea that there are no alternatives to Obamacare.

And there certainly should be alternatives to Obamacare because it's only exacerbating problems that already exist in American healthcare. Writing a piece of legislation is outside the scope of this blog entry, but I can tell you some of the facets that a "repeal and replace" bill should have:


  1. Affordability. Healthcare costs, premiums especially, have been increasing too much over the years, even when inflation is considered. Obamacare is only going to drive healthcare costs up over time. Cost containment needs to be an important factor, which brings me to my next point.
  2. More individualized health care plans. Each person is different, has their own circumstances when it comes to their health, and should be able to choose a health care plan that best reflects those circumstances. Those who are healthier and/or younger do not need extensive coverage. Plans that are excessively comprehensive can not only be cost-prohibitive, but also inefficiently use scarce resources that could be diverted to those who could actually use them. Flexibility and choice should be offered to the individual. 
  3. Greater competition. Competition is one of the factors that leads to a vibrant marketplace because it provides lower costs and higher-quality care. Constraining network choices, which is what Obamacare does, is not what people need to acquire something as vital as health care. One way to increase competition is to allow health insurance to be carried across state lines, but as I have mentioned before, that's quite tricky. If we've learned anything from Medicare Part D, it's that whatever Congress decides to enact needs to engender competition in the marketplace. 
  4. A lack of favoritism towards certain companies, along with portability. This is in reference to employer-based health insurance, a tax exemption that benefits companies that spend money on their employees' health insurance plans. This WWII-era relic of price controls has been a major obstacle to bringing health care costs down. Plus, health insurance should not be contingent upon having a certain job. I, or anybody else, should have the portability to take the same health insurance plan from job to job. Any plan that does not consider the repeal of employer-based health insurance is, quite frankly, incomplete. 

I could go on and on, but the point remains: health care reform needs to be focused more on the patient and less on how the government intrudes on the lives of the individual with little to no value in return. There needs to be a focus on affordability while maintaining accessibility. If the ruling is not in the government's favor, it would be a great opportunity for Congress to enact something that would create a healthy marketplace, something decidedly lacking with Obamacare.

Monday, March 2, 2015

How Bitter of a Pill is Medicare Part D to Swallow?

Although it's usually fun to write about the public policy topic du jour (e.g., Greece's economy, net neutrality), sometimes it's nice to go off into the more obscure aspects of public policy, such as the wonderful world of Medicare Part D. What put me in this line of thinking in the first place was a working paper from the Federal Reserve Bank of San Francisco [FRBSF] (Dunn and Hale, 2015) that tries to show that between 19,000 and 27,000 lives were saved thanks to Medicare Part D.

Before even attempting to delve further, we should ask ourselves what Medicare Part D is. Medicare Part D, also known as Medicare Prescription Drug Coverage (enacted in 2003), is the United States federal government's way of subsidizing the costs of prescription drugs for Medicare beneficiaries.

How much does such a program cost the American people? The Medicare Trustees report, the most recent of which was released back in August, shows that about $70B was spent on Medicare Part D in 2013 (p. 101). This amount currently makes up 0.44 percent of GDP and will increase to 1 percent of GDP by 2050 (p. 115). By 2020, we are expected to pay $134.1B on Medicare Part D (p. 109).

One thing that has the potential to irk me about Part D is that Medicare fraud on the whole is a major issue. Medicare has been on the Government Accountability Office's (GAO) high-risk list since 1990, and the GAO has finally given some attention to Part D in the context of fraud. The actual sticker price of Part D, the increased amount of cost-sharing since its implementation, and the money lost to fraud are one thing, but something else bothers me about the Federal Reserve study: it does not address the crowding-out effect.

In economic theory, crowding out essentially means the government spends money that would have otherwise been spent in the private sector without government intervention. This is important because if the crowding-out effect is high, the cost-benefit analysis (CBA) is altered significantly in terms of efficiency. Jonathan Gruber, the economist of Obamacare fame, actually co-authored a paper (Engelhardt and Gruber, 2010) covering a time period with considerable overlap with that of the FRBSF paper. Their finding was that the crowding-out effect was at 80 percent: four out of five subsidized enrollees would have had prescription coverage anyway, which is to say that Part D extends prescription coverage to one senior citizen for the price of five.
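
To make the "one for the price of five" arithmetic explicit, here is a back-of-the-envelope sketch; the per-enrollee cost is a hypothetical placeholder, not a figure from either paper:

```python
# Back-of-the-envelope arithmetic for an 80 percent crowd-out rate.
# The per-enrollee cost below is a hypothetical placeholder, not a
# figure from Engelhardt and Gruber (2010) or the FRBSF paper.
CROWD_OUT = 0.80  # share of subsidized enrollees who had coverage anyway

def cost_per_newly_covered(cost_per_enrollee: float) -> float:
    """Public cost of extending coverage to one person who lacked it."""
    return cost_per_enrollee / (1 - CROWD_OUT)

print(cost_per_newly_covered(1_800))  # 9000.0 -- five enrollees' worth
```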

[Figure 1 from the FRBSF paper (Dunn and Hale, 2015): the cardiovascular mortality rate over time, before and after Part D's 2006 implementation]

Taking a look at Figure 1 from the FRBSF study (see above), not only was there a downward trend in the cardiovascular mortality rate (an important distinction to make, since half of Part D expenditures go to cardiovascular-related prescriptions), but the decline was actually steeper prior to Part D being implemented in 2006. Looking at this correlation, not only do I question the FRBSF's theory, but one could argue that Part D actually slowed the decrease in the mortality rate, i.e., crowding out at such a level means the FRBSF paper overstates the benefits.
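
One simple way to check that eyeball claim would be to fit separate linear trends before and after 2006 and compare the slopes. Here is a sketch using synthetic data shaped like the figure, not the actual FRBSF series:

```python
import numpy as np

# Sketch of the pre/post-2006 slope comparison implied above, using
# synthetic data shaped like the figure (NOT the actual FRBSF series).
years = np.arange(1999, 2014)
trend = np.where(years < 2006,
                 520.0 - 14.0 * (years - 1999),  # steeper pre-2006 decline
                 422.0 - 8.0 * (years - 2006))   # shallower decline after
rng = np.random.default_rng(0)
rates = trend + rng.normal(0.0, 2.0, years.size)  # add a little noise

pre, post = years < 2006, years >= 2006
slope_pre = np.polyfit(years[pre], rates[pre], 1)[0]
slope_post = np.polyfit(years[post], rates[post], 1)[0]
print(f"pre-2006 slope:  {slope_pre:.1f} deaths per 100k per year")
print(f"post-2006 slope: {slope_post:.1f} deaths per 100k per year")
```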

Even in spite of Part D's flaws, what makes Part D more successful than the rest of Medicare is that it is a voluntary drug benefit program that delivers benefits through stand-alone prescription drug plans, and it has come in well under budget (see CBO report here). Even though the subsidies come from the government, Part D has done a better job of cost containment than the rest of Medicare. Why? In a word: competition. When you encourage competition, even with taxpayer dollars, it has this uncanny ability to make goods and services better.

Tampering with health care in such a manner is not as simple as fine-tuning an engine. Marketplaces work more like intertwined ecosystems. While I would like a liberalized health care market to be implemented, I know that the government is not going to stop intervening anytime soon (we have Obamacare, remember?), which is why Part D is preferable to some of Medicare's more interventionist approaches. Heavy-handed price regulation is not the key to healthcare success; competition is (Howard and Feyman, 2013). Until there is enough of a societal change that both the people and their politicians realize a more liberalized health care market is a better-functioning health care market, competition in Medicare is a facet we should continue to strive for in United States health care policy.