Monday, December 29, 2014

Nowhere Near 1 in 5 Women Were Raped In College: Is "Rape Culture" on Campus Really a Thing?

One in five women will be raped while attending college. It's one of those statistics that illustrates that if you repeat something enough times, people will believe it. This oft-cited statistic comes from the 2007 Campus Sexual Assault study. It might sound fancy, but it comes with some major flaws: 1) only two colleges were surveyed, 2) there was a large non-response rate, which likely inflated the figures, 3) the definition of "sexual assault" was very vague and included such actions as forced kissing, and 4) the survey questions were also vague, leaving them open to interpretations in which one could assume the worst.

Aside from the shoddy statistical analysis, I bring this up because the Bureau of Justice Statistics (BJS) released a much more thorough study earlier this month entitled "Rape and Sexual Assault Victimization Among College-Age Females, 1995-2013." The BJS uses both longitudinal and cross-sectional data to determine rates of sexual assault. What did they end up finding? Looking at 1995-2013, the rate of rape and sexual assault among college women is not 1 in 5, but rather 6.1 per 1,000 women, which works out to 0.03 out of 5 women. That's roughly a thirty-three-fold exaggeration! And it's all the more egregious when you consider that the rate of sexual assault on campus has declined overall since 1995 (Figure 2).
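For what it's worth, the arithmetic behind that comparison is easy to check. Here is a quick sketch that simply reproduces the calculation using the two rates as quoted above:

```python
# Back-of-the-envelope comparison of the two figures quoted above.
# Rates are taken from the post: BJS at 6.1 per 1,000 vs. the "1 in 5" claim.

bjs_rate = 6.1 / 1000      # BJS: 6.1 victims per 1,000 college-age women
claimed_rate = 1 / 5       # the oft-cited "1 in 5" claim

print(f"BJS rate per 5 women:     {bjs_rate * 5:.2f}")               # ~0.03
print(f"Claimed rate per 5 women: {claimed_rate * 5:.0f}")           # 1
print(f"Exaggeration factor:      {claimed_rate / bjs_rate:.0f}x")   # ~33x
```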

Is this to say that we should condone this piggish behavior? Of course not! Sexual assault is inexcusable, as are the times when campus tribunals sweep sexual assault under the rug to artificially bolster their campus safety statistics. Forcing someone to have sexual contact against their will is a blatant violation of the nonaggression axiom. "No" means "no," and that's no less relevant when we're talking about college students getting drunk at a frat party or about how a woman is dressed. Alcohol only fuels a man's propensity towards randiness, and it doesn't excuse deplorable behavior. The underreporting that the BJS points out (p. 1) makes a sad statement about the stigma attached to sexual assault, and that stigma should be addressed so more women report when they are sexually assaulted. Nevertheless, 0.03 out of 5 women being sexually assaulted is a far cry from 1 in 5 women.

If the premise behind feminism is gender equality, then colleges should be promoting responsible behavior for both sexes instead of encouraging segmented gender roles that exacerbate the issue. We should help women without knocking men down. There's a fine line between holding men responsible for their misdeeds and demonizing men in a "guilty until proven innocent" mob mentality because believing that women would never lie about something like this is "politically correct" (FYI: although it's rare, there are instances of false accusations, as was infamously illustrated by the Duke University case back in 2006). Not only is sexual assault on campuses far less prevalent than claimed, but it has experienced quite the drop since the 1980s. It would be nice to live in a world without sexual assault, but it should still be noteworthy that the problem is nowhere near as prevalent as we thought, and that it has been on the decline, much like rates of domestic violence and other violent crimes in general. This is something that we should all celebrate, but I anticipate that the hardcore feminists will still advance the idea of a "rape culture," regardless of what the statistics or even the people over at the Rape, Abuse, and Incest National Network (RAINN), the largest anti-sexual-assault organization, have to say about there not being a "rape culture." As RAINN points out, "Rape is caused not by cultural factors but by the conscious decisions of a small percentage of the community to commit a violent crime...[Blaming it on 'rape culture'] has the paradoxical effect of making it harder to stop sexual violence, since it removes the focus from the individual at fault, and seemingly mitigates personal responsibility for his or her own actions."

We should take rape and sexual assault seriously, but bemoaning "rape culture" is not the way to go. Whatever colleges decide to do, we should stop giving credence to the "rape culture" myth because, as Cathy Young over at the libertarian Reason Magazine points out, the anti-"rape culture" movement is one that has "capitalized on laudable sympathy for victims of sexual assault to promote gender warfare, misinformation, and moral panic. It's time for a reassessment."

Friday, December 26, 2014

Parsha Vayigash: Teshuvah and Forgiveness as Signs of Emotional Maturity

Although some people never grow up, many of us have this uncanny ability to handle situations more tactfully than we would have when we were younger. We find this to be the case with Joseph and his brothers in this week's Torah portion. We're at the point in the story where Judah pleads on behalf of his brother, Benjamin. Afterwards, Joseph ordered everyone except the brothers to leave the room and revealed his true identity (Genesis 45:1). Instead of throwing the book at the brothers or exacting revenge, Joseph told them not to grieve because G-d had "sent Joseph before them to preserve life" (Genesis 45:5).

Joseph's reaction was remarkable. Why? Joseph's brothers threw him in a pit and sold him into slavery. Before ascending to power, Joseph had done some hard time in prison. Joseph had been put through the wringer. He had every right to be angry, and what's more, he could have repaid his brothers in kind because in ancient times, might made right. What we see is not a vengeful Joseph, but a Joseph who was longing to reunite with his family. Not only do his actions speak to this desire, but so do his words. In Genesis 45:5, Joseph said "כי למחיה שלחני אלהים לפניכם" (G-d sent me before you to preserve life). There's one problem: it wasn't G-d that sold Joseph into slavery and caused all the subsequent events that led up to that moment. It was his brothers who sold him into slavery. The text clearly says so. So why would Joseph attribute these events to G-d? Even though Joseph's dream/prophecy was correct (Genesis 37), I would postulate that Joseph cared more about family than about being right or having prophetic powers. Not only did he miss his family, but he had realized the importance of G-d in his life. Joseph had a hard-knock life, and if it taught him anything, it's that an unfettered ego does not make for a fulfilling life. Rather than being the immature child who rubbed his conceit in his brothers' faces, he had figured out the importance of forgiveness as the beginning of healing his years of angst and frustration, which are illustrated by the loud cry he let out after revealing his identity (Genesis 45:2).

Why was he so overwhelmed with emotion? Why couldn't he keep up the charade anymore? Because if we read the text closely enough, Judah and the brothers actually went through the stages of the teshuvah process. The brothers admitted their error (Genesis 42:21-23), they confessed and accepted collective responsibility (Genesis 44:16), and they showed behavioral change by being willing to become Joseph's slaves (ibid.). This essentially is the teshuvah process (Mishneh Torah, Hilchot Teshuvah, 2). Joseph didn't hand out forgiveness for free. He realized that his brothers were truly repentant because they had shown real changes in their mentality and behavior, and for that, he was able to let them back into his life. No more grudges. No more living in the past. Joseph was able to live in the present, feel at peace, and share that peace with his loved ones. The Joseph story is not only the first instance of forgiveness in recorded history, but also a wonderful example of the power of forgiveness and reconciliation that can guide us in the relationships we have in our own lives.

Wednesday, December 24, 2014

People Tend Not to Exchange Gifts with Economic Efficiency, So What Gives?

With Chanukah ending and Christmas on its merry way, I got to thinking about the practice of gift-giving. People exchange gifts as a part of the holiday spirit, but the more I think about it, the less economic sense gift-giving makes. One of the most basic ideas in economics is that the individual knows their own consumer preferences better than anyone else. It's one of the reasons I get annoyed when the government assumes it knows what individuals want better than the consumers themselves. Whether it's the government spending money on in-kind transfers or individuals spending money to buy gifts, it creates economic inefficiencies because neither fully understands the individual's preferences. Yet people still give presents during the holiday season, so what gives?

The topic of the economics of gift-giving was heavily discussed in a 1993 study called "The Deadweight Loss of Christmas," in which economist Joel Waldfogel calculated a deadweight loss of $4-13B in 1993 dollars, roughly $7-21B in 2014 dollars. Deadweight loss takes place when a good or service is not exchanged at economic equilibrium, thereby creating economic inefficiency. The result is a loss to one party without an offsetting gain to another party. In this case, the gift-giver assumes the recipient will value the gift at the same level as its purchase price. The issue is that in many instances, the gift-giver does not have an accurate sense of what gift(s) the recipient would want, i.e., there is an information asymmetry. Since the recipient often values the gift at less than what the giver paid for it, economic welfare is lost in the process. The Secret Santa gift exchange is a classic example of the phenomenon at hand.
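To make the concept concrete, here is a toy sketch of a Waldfogel-style tally. The gifts, prices, and recipient valuations below are invented for illustration; they are not figures from the study:

```python
# Toy illustration of Waldfogel-style deadweight loss from gift-giving.
# The gifts, prices, and recipient valuations are invented, not from the study.

gifts = [
    # (price paid by the giver, value to the recipient)
    (50.00, 35.00),   # sweater the recipient would only have paid $35 for
    (30.00, 30.00),   # book the recipient wanted anyway
    (25.00, 10.00),   # novelty mug
]

total_spent = sum(price for price, _ in gifts)
total_value = sum(value for _, value in gifts)
deadweight_loss = total_spent - total_value

print(f"Spent ${total_spent:.2f} on gifts the recipients value at ${total_value:.2f}")
print(f"Deadweight loss: ${deadweight_loss:.2f} "
      f"({deadweight_loss / total_spent:.0%} of spending)")
```

The loss is simply the gap between what givers spend and what recipients would have been willing to pay, summed across gifts.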

As nice as it is to make an impassioned argument about the economic inefficiencies of gift-giving, it fails to account for what economists call utility. Utility is economic jargon for "the fulfillment one receives from a certain good or service." In layman's terms, it could be defined as "sentimental value," or the psychological joy felt as a result of receiving or giving gifts (Gneezy and List, 2006). There is also the argument that gift-giving is a "signal of intensity of effort in one's search (see video below on Valentine's Day and gift-giving)," or, to put it another way, "it's the thought that counts" (Yao, 2009). Behavioral economics also postulates that there is an allure and excitement in gift-giving that brings joy to the giver, and that it strengthens the social connection between the giver and recipient. Although one cannot objectively measure it, one has to consider the social and individual utility produced by gift-giving. And who knows? Maybe by exposing the individual to something new, they might actually like it even more (read: "more utility") than a gift that would have kept them in their comfort zone.




Even with social utility, the economic inefficiency is troubling to me. The "it stimulates the economy" argument doesn't work because while gift-giving might stimulate some consumer spending in the short run, it leaves us with fewer resources in the medium-to-long term to help build the economy of the future. Does this mean that I have an inherent problem with gift-giving or think that gift-giving should be banned? Nope! My issue isn't with gift-giving per se, but with bad gift-giving. The inefficiencies are created because the giver doesn't really know what the recipient wants. If you know the recipient well (e.g., parents buying for their children, spouses or best friends buying for each other), then economic efficiency is maintained. However, we don't have that sort of close relationship with most people, and since most gift-giving is done with people to whom we are not as close, the economic inefficiencies persist.

So my advice on more economically efficient gift-giving goes as follows. If you don't know the person that well, either get to know them better or directly ask them what they would like. If you are too uncomfortable asking outright or simply don't want to get to know the person better, then give in a way that is beneficial to the ultimate recipient. Cash is the most efficient form of gift-giving. If giving cash comes off as impersonal or you view it as socially unacceptable, a gift card can both personalize a gift and capture much of the economic efficiency (I say "much" because $45B in gift cards have gone unspent since 2005, which comes out to roughly $5B each year). Charities are also a good idea, although if you want to give to a place like a food pantry, give them $20 instead of $20 worth of food because food drives are just bad economics. Whatever method of gift-giving you decide on, I hope that 'tis the season for more economically efficient gift-giving. Happy Holidays!

Monday, December 22, 2014

Eating Cheese on Chanukah and Why Using the Story of Judith As a Basis for This Practice Has as Many Holes as Swiss Cheese

When I was in synagogue this past week, I learned about a peculiar minhag (custom) in Jewish practice: eating cheese on Chanukah. In spite of what people might think, latkes, the pancakes commonly eaten on Chanukah, were not originally made from potatoes, but from cheese. Considering that the potato was a New World crop, this makes sense. But that isn't the disturbing part. What is disturbing is how the practice of eating cheese on Chanukah began. The origin of this practice is first mentioned in the gloss on the Shulchan Aruch by the Rema, also known as R. Moshe Isserles. In it, the Rema attributes this practice to the milk that "Judith (יהודית) fed to the enemy."

This made me ask an initial but important question: who in the world is Judith? The Book of Judith is a deuterocanonical text, which is to say that it made it into Christian Scripture but never made it into the Jewish canon (Tanach). Why was this text not considered for the Jewish canon? The story itself could provide some context.

Although the text was allegedly written in Hebrew, the oldest surviving version is in ancient Greek. As the story is popularly retold, the Greeks conquered Judea, and the evil general Holofernes declared that all the Jewish virgin females had to sleep with a Greek official or be punished by death. Someone had to stop the madness, so Judith took it upon herself to do so. Essentially, Judith used her good looks to enter the Greek camp and seduce Holofernes. One night, she fed him cheese, which made him thirsty for wine. She then plied him with wine until he was thoroughly inebriated, after which she decapitated him. The decapitation eroded Greek morale, and the Greeks retreated.

Whether it's that Judith decapitated someone or that she used her sexual allure and prowess to get the job done, Judith's valor was probably not something the rabbis wanted women emulating. Is the message that religious communities want to send to their daughters that exploiting a situation by using your sexual appeal is acceptable as long as the ends justify the means? Perhaps this is why the Book of Judith never made it into the Jewish canon, or perhaps it is due to the historical anachronisms in the text or its possible Greek origin. What's even more ridiculous about using this story as a basis for a Chanukah minhag is that Holofernes wasn't Greek; he was Assyrian. The story takes place during the rule of Nebuchadnezzar (6th c. B.C.E.), centuries before the Chanukah story, so the connection between Judith and Chanukah is chronologically impossible. It's also worth noting that the earliest mention of this practice dates to the 14th century.

I don't like the fact that a practice in Judaism, even if it's a minor one, is based on an apocryphal, fictional text with historical inaccuracies and a problematic protagonist. Fortunately, I was able to find another explanation for this practice because the primary, traditional one was very perturbing. This insight comes from the Ben Ish Chai. When the Greeks occupied Judea, they banned three specific Jewish institutions: maintaining the Jewish calendar [based on the lunar cycle], Shabbat, and circumcision. The Hebrew word for "month" is חודש, which begins with ח. The second letter of the word Shabbat (שבת) is ב. The third letter in the word מילה (a ברית מילה is the Hebrew term for circumcision) is ל. These three letters spell the word חלב, which is the Hebrew word for "milk," and that gives us the basis for eating dairy on Chanukah.

It's a tenuous explanation, but let's go with it. The story of Chanukah took place during a time when the Greek rulers banned practices vital to Jewish observance. Milk is a source of sustenance. Not only does the Bible refer to Israel as the "land of milk and honey" (e.g., Exodus 3:8, 33:3; Deuteronomy 31:20), but milk symbolizes life in Judaism, as reflected in the prohibition against mixing meat and dairy. Much like milk nurtures life, Jewish rituals and practices nourish the Jewish people.

On the one hand, universalist morals and ethics are a vital part of Judaism. On the other hand, without the ritualistic, particularistic practices, there is nothing to distinguish Judaism from other world religions. If consuming dairy products on Chanukah is to remind us of anything, it is that studying Torah, keeping Shabbat, affixing mezuzot, and the plethora of other Jewish ritualistic practices engender, vitalize, and help define Jewish spirituality.

Thursday, December 18, 2014

Reading About CIA Interrogation Methods Sort of Felt Like Torture

I know, I know. I'm running a tad behind on the news. I just moved to a different part of the country and I'm still getting settled in, so please cut me some slack on catching up here. I heard about the Senate's report on the CIA's detention and interrogation methods last week, and I have wanted to comment ever since, even if briefly.

After 9/11, things haven't been the same with the way the United States approaches national security. Fortunately, we didn't become a police state (thank G-d!), but at the same time, it became easier to justify doing things in the name of national security, and what's worse is that most Americans are okay with their liberties being violated for security's sake. Didn't Benjamin Franklin say something about how those who would give up liberty for security deserve neither? I'm not just talking about starting two wars in the Middle East or passing the Patriot Act. The Senate's report shed a lot of light on what was taking place in the world of intelligence gathering. The CIA's interrogation techniques included "wallings," sleep deprivation, threatening the detainee's family with bodily harm, and the ever-infamous waterboarding.

There's the ethical question of whether we should be torturing people in the first place. There are those who are absolutely opposed to violating human rights to acquire national security intelligence. Proponents can certainly provide an extreme enough hypothetical in which one would be inclined to reluctantly acquiesce, at least from a utilitarian perspective, to the violation of international law if the situation were that dire. Torture is akin to poison: "dosage matters." Given the information I presently have, I'm not convinced that the risks were so high that we needed to use such methods. The problem with national security issues is that classified information and security clearances create such an information asymmetry that only the top echelon has adequate information to assess who is a threat and who is not. Objectively, we cannot know how deep the rabbit hole goes.

However, let's give the CIA the benefit of the doubt for a moment: let's say that using torture to obtain pertinent national security information is reasonable, and let's also assume that the detainee actually has pertinent information to divulge. The intuition behind torture as an intelligence-gathering method seems sound. You put the detainee through physical and psychological pain to get him to talk because he can't take the pain any longer. It has been done for centuries, so it's not like the intuition is anything new. Perhaps there is enough of a gradation in the quality and quantity of interrogation techniques that the CIA is justified in its actions. The problem, as the report illustrates, is that such interrogation methods are counterproductive, which makes intuitive sense, especially if detainees end up just saying whatever the interrogators want to hear. The CIA has even admitted that at least up until 2013, it had no way of assessing the effectiveness of its interrogation methods. If the interrogation methods don't provide the CIA with the information it requires in the first place, what good is torture? The lack of oversight from either the legislative or executive branches, or even the CIA's Office of Inspector General for that matter, does not help the situation, either.

I'm about ready to head to work, so although I could say more, I really need to summarize my thoughts. Unsurprisingly, people criticize these methods, while proponents point out that we're nothing like China or North Korea. While it is true that America's methods are mild in comparison to the Middle Ages, if we are going around the world trying to promote democratic values, then America needs to "walk the walk" and act upon what it preaches as a matter of policy. I'm not here to say that America shouldn't have any counterterrorism measures whatsoever. There certainly is room to have a conversation about what the CIA's role should be in providing the social good of national security. What I am trying to say is that if the CIA is to have an active role in national security, the policy alternatives to improve the situation should be crafted tactfully, with accountability, and implemented with the threat's overall risks in mind. We should expect the highest quality of governance from all bureaucratic agencies, and national security organizations like the NSA and the CIA are no exception. I hope that this report is a stepping stone to implementing some real national security reform.

Monday, December 15, 2014

Did the Minimum Wage Cause the Great Recession to Last Longer?

Economists and historians will be debating well into the future as to what caused the Great Recession. A comparably interesting debate to watch is over what caused the Great Recession to linger on as long as it has. My money has been on unemployment benefits being the primary culprit (see here and here), and yet another theory comes along to complement the "unemployment benefits" theory: minimum wage laws. Shortly before the Great Recession began, Congress passed the Fair Minimum Wage Act of 2007, which gradually raised the federal minimum wage from $5.85 to $7.25 per hour. Minimum wage proponents like to think that gradual and "minute" minimum wage increases cause negligible economic harm at worst, but recent research continues to add to the evidence that the minimum wage is nowhere near as benign as proponents would have us believe. According to Professors Jeff Clemens and Michael Wither of the University of California, San Diego, the minimum wage hikes caused a net job loss of about 1 million (Clemens and Wither, 2014).

Since some states were already paying a minimum wage higher than the new federal minimum, Clemens and Wither were able to measure the effects with a legitimate control group, which is no easy task in the world of social sciences. By doing so, the authors found that the employment-population ratio, i.e., the share of working-age adults who are employed, decreased by 0.7 percentage points, which accounts for 15 percent of the overall decrease during the Great Recession. This helps make the study more credible because plenty of other minimum wage studies focus only on certain demographics (e.g., fast food workers, teenagers) instead of the macro effects of the minimum wage legislation.
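As a rough illustration of what those two figures imply when put together (the 0.7 point and 15 percent numbers are the ones cited above; the implied overall decline is simply back-solved here):

```python
# Back-solving the overall employment decline implied by the figures above.
# The 0.7 point effect and the 15 percent share are the numbers cited in the post.

min_wage_effect_pp = 0.7   # decline in the employment-population ratio (percentage points)
share_of_total = 0.15      # share of the overall recession-era decline attributed to it

implied_total_decline_pp = min_wage_effect_pp / share_of_total
print(f"Implied overall decline in the employment-population ratio: "
      f"{implied_total_decline_pp:.1f} percentage points")   # ~4.7
```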

This research also points out significant declines in economic mobility (Clemens and Wither, Table 6), which is important because it reemphasizes the role that low-skilled work plays as a stepping stone for upward mobility: affected workers were five percentage points less likely to acquire a middle-class job. The other point that this research makes is that the minimum wage does not do nearly as good a job of targeting low-skilled workers as the earned income tax credit does (Clemens and Wither, p. 33). The disemployment effect pushed more-educated workers toward internships (p. 26), whereas less-educated workers faced increased odds of simply being unemployed (p. 27).

The fact that the minimum wage increases unemployment and decreases economic mobility does not shock me in the slightest. While it is true that some individuals enjoy an improved quality of life because of a minimum wage, let's not forget that it comes at the cost of depriving other individuals of the opportunity to gain experience and achieve higher-paid jobs in the long run, which did nothing to help ameliorate the economic conditions of the Great Recession. This will hardly be the end of the minimum wage debate because it has become such a hot-button topic over the years. Nevertheless, if we want to help the poor, we should come up with policy alternatives that actually help them, and spoiler alert: the minimum wage is not such an alternative.

Thursday, December 11, 2014

Does Income Inequality Cause Decreased Economic Growth?

The income inequality debate never seems to die. Its most recent revival was due to the Organisation for Economic Co-operation and Development (OECD) and its latest report (summary here) on "Trends in Income Inequality and its Impact on Economic Growth." Although the OECD's analysis has more variables, the essential relationship that the OECD establishes is between the Gini coefficient and the GDP growth rate.

What is the Gini coefficient? It is a measure of statistical dispersion used to represent the income distribution of a given nation, and it has become the gold standard for measuring income inequality. Although it works nicely because it's relatively easy to compare across countries, it still has some flaws. One is that it compares income, not wealth. Two countries with very different levels of wealth can have the same Gini coefficient, which also means that the Gini coefficient says nothing about the standard of living in a given country. The Gini coefficient can also be identical for two countries with different income distributions because differently shaped Lorenz curves can enclose the same area. Furthermore, the Gini coefficient does not account for utility or economic opportunity.
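To make the metric concrete, here is a minimal sketch of how a Gini coefficient can be computed from a list of individual incomes using the mean-absolute-difference definition; the two sample "societies" are invented for illustration:

```python
# Minimal sketch of computing a Gini coefficient from individual incomes,
# using the mean-absolute-difference definition. The incomes are invented.

def gini(incomes):
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of |x - y| over every ordered pair of incomes
    abs_diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return abs_diff_sum / (2 * n * n * mean)

equal_society = [40_000] * 5
unequal_society = [10_000, 20_000, 30_000, 40_000, 200_000]

print(f"Equal incomes:   {gini(equal_society):.2f}")    # 0.00
print(f"Unequal incomes: {gini(unequal_society):.2f}")  # 0.53
```

Note that a very different set of incomes could produce that same 0.53, which is exactly the limitation described above.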

Much like with GDP, until we can come up with a better metric, we have to make do with what we have. Even if the OECD uses GDP as the metric for economic success, I still take issue with the temporal comparison because over time, a country whose economy is maturing is going to experience an overall decline in its GDP growth rate for reasons having nothing to do with income inequality. Correlation has suddenly turned into causation, and the fact that the OECD recommends wealth redistribution, a policy that does more than its fair share of harm, based on a correlation that can be easily explained by other factors is most unfortunate. The OECD says that redistribution would work if the government could carry it out efficiently (OECD, p. 19), which I find to be a highly tenuous assumption.

Although there is enough reason not to jump to conclusions with the OECD's report, what did the OECD end up finding? The ratio of the income of the richest ten percent to that of the poorest ten percent increased from 7:1 in the 1980s to 9.5:1 today. As a result, the OECD's economic analysis suggests that this increased income inequality has had a statistically significant, negative impact on economic growth. What the OECD finds that is equally intriguing, however, is that "no evidence is found that those with high incomes pulling away from the rest of the population harms [economic] growth" (p. 6). This is important because the typical income inequality narrative is that the top echelon is gobbling up the resources while the "99 percent" have nothing left.

Looking at the OECD study, the issue is not with the rich getting richer per se, but rather with the poor not having the same level of access to resources with which to develop their human capital. This is especially true when looking at educational attainment for lower-income families (p. 28), which was one of the OECD's biggest kvetches in this study. If the OECD study is correct, then income inequality mainly harms those whose parents have low educational attainment; those with parents of medium to high educational attainment are not affected by it (pp. 25-26).

The OECD focuses on the bottom of the income distribution, as it well should. Anti-poverty initiatives are not enough, according to the OECD (p. 29), but perhaps that is because the current programs are not sufficient for accomplishing the task at hand. It very well could be because many anti-poverty initiatives are handled by government bureaucracies, which makes me wonder whether government intervention to reduce income inequality will actually increase economic growth. There are many ways to revive economic growth, and I honestly don't think simply redistributing wealth is going to help. The IMF actually published a report showing that, at best, the effect of redistribution is negligible, and that it can very well make things worse (Ostry et al., 2014, p. 23). There is no need to knock rich people down a peg with poor policy like a wealth tax because, by the OECD's own admission, the "one percent" isn't de facto causing the issues at hand. I've discussed education and anti-poverty initiatives in the past, but it should go without saying that we should focus on policies that help make the poor less poor and provide them with the opportunity to access the tools they need to succeed in life. Whatever those policies end up being, we should improve the quality of education and encourage entrepreneurship instead of going after the ever-intangible and elusive "income inequality."

Tuesday, December 9, 2014

The Fiscal Costs of the Death Penalty and How It Costs More Than an Arm and a Leg

The death penalty has caused much debate in this country. Does the death penalty deter crime? Should the government have the power over life and death? Is the death penalty appropriate if even one innocent person is executed? These are questions that typically surround the debate, but there is one I would like to cover: does the death penalty cost more than life in prison? This was a question the state of Nevada's Legislative Auditor seemed to answer in its audit released recently.

Looking at 28 death penalty cases in Nevada, the audit found that the average death penalty case costs $532,000 more than a case in which the death penalty is not sought (p. 10), making it nearly twice as expensive as a murder case seeking life without parole. Although incarceration costs were lower for cases that sought the death penalty (Exhibit 7), what caused the death penalty cases to exceed the non-death penalty cases was average case costs (Exhibit 5). Most of the costs are racked up before the trial even begins (Exhibit 10), which is all the more damning since most cases in which the prosecutor seeks the death penalty do not actually end in a death sentence (Exhibit 2). Death penalty cases require more lawyers, more preparation, more investigators, more special motions, more witnesses, more experts, and a longer jury selection, not to mention a longer appeals process (Exhibit 6).
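Just scaling the audit's own per-case figure across the cases it reviewed gives a sense of the aggregate sums involved; this is only a rough illustration, not a figure from the audit itself:

```python
# Rough aggregate based on the audit figures cited above: 28 cases reviewed,
# each averaging $532,000 more when the death penalty was sought.

cases_reviewed = 28
extra_cost_per_case = 532_000

print(f"Extra cost across the reviewed cases: "
      f"${cases_reviewed * extra_cost_per_case:,}")   # $14,896,000
```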

Many other states, such as California, Indiana, Maryland, Louisiana, New Jersey, Montana, Connecticut, North Carolina, Ohio, and Kansas, have attempted to capture the costs and have come to the same conclusion: the death penalty costs far more than life without parole. The money that was spent on the death penalty could have been spent on real crime control measures, such as solving, preventing, or prosecuting other crimes. The evidence is clear. If one wants to make an argument for the death penalty, trying to make it based on cost savings is not the way to go.

Friday, December 5, 2014

The FDA's Lifetime Ban on Gay Men Donating Blood Makes My Blood Boil

AIDS has been a frightening disease since HIV, the virus that causes it, was identified in 1983. Since men who have sex with men, also known as MSM, were the predominant carriers of the virus, the Food and Drug Administration (FDA) decided to ban these men from donating blood. In some countries, deferrals allow MSM to donate after a certain period of time; in the United States, however, no such deferral is allowed. This ban has been FDA policy for over thirty years, but the FDA has decided to revisit the topic and possibly change the policy so that MSM could donate one year after their most recent male-to-male sexual encounter. Part of the change of heart is because we have realized that AIDS is not a "gay disease." Part of it is because we have developed technology to better screen for HIV. Has the ban outlived its usefulness, or should it still be in force?

According to the FDA, the purpose of this ban is to provide "multiple layers of safeguards in its approach to ensuring blood safety....A history of male-to-male sex is associated with an increased risk for exposure to and transmission of certain infectious diseases, including HIV, the virus that causes AIDS. Men who have had sex with other men represent approximately 2% of the US population, yet are the population most severely affected by HIV." Essentially, the FDA's concern is with safety and with making sure that donated blood is not contaminated with HIV. Let's see how valid the FDA's concern really is.

According to CDC statistics, the most common transmission category for HIV (CDC, 2012, Table 1a) is male-to-male sexual contact, which accounted for 64 percent of the estimated 30,695 HIV diagnoses in 2012. As for the number of diagnosed individuals living with the virus, MSM account for 52 percent, or 451,656 men (Table 14a). Estimates that include undiagnosed individuals are comparable (CDC, 2011, Table 9a): 596,600 MSM out of 1,144,500 persons living with HIV, i.e., 52 percent.

So 596,600 MSM with HIV make up roughly 0.19 percent of the 316 million American populace. Even if you filter out the 23.3 percent of Americans who are under 18, these individuals are only 0.25 percent of the adult population. Assuming that gay men make up six percent of the adult male population, that makes for 7.27 million gay men over 18. Even if one makes the highly tenuous assumptions that a) only gay men are MSM, and b) all gay men are MSM, that would still mean that only about eight percent of gay men have HIV. Even taking this unreasonably high estimate at face value, is the ban justifiable on scientific grounds? In short, no.
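Here is the same back-of-the-envelope arithmetic written out; the population figures and the six percent assumption are the ones used above:

```python
# Reproducing the back-of-the-envelope arithmetic above. The population
# figures and the 6 percent assumption are taken from the post itself.

us_population = 316_000_000
share_under_18 = 0.233
msm_with_hiv = 596_600          # CDC estimate cited above

adults = us_population * (1 - share_under_18)
adult_males = adults / 2                      # rough 50/50 sex split
gay_men_over_18 = adult_males * 0.06          # assumes gay men are 6% of adult males

print(f"Share of total population:  {msm_with_hiv / us_population:.2%}")    # ~0.19%
print(f"Share of adult population:  {msm_with_hiv / adults:.2%}")           # ~0.25%
print(f"Share of gay men over 18:   {msm_with_hiv / gay_men_over_18:.1%}")  # ~8%
```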

Not only has our understanding of how HIV is transmitted changed, but treatment and detection have also developed since 1983. Nucleic acid tests can diagnose HIV within two weeks of infection (FDA, p. 3), whereas older antibody tests have a window period of three to six months. Additionally, federal laws require that donated blood be tested for diseases, including HIV. The odds of HIV infection through a blood transfusion, 1 in 2,000,000, are so small as to be almost non-existent. This is why many countries have changed their policies from a lifetime ban to relatively short deferral periods. Australia found no increased rate of HIV transmission when it switched from a five-year deferral to a one-year deferral (Seed et al., 2010). Many countries, including the UK, Sweden, and Japan, have switched to one-year deferral periods. Although a one-year deferral is an improvement over a lifetime ban, it is still arbitrary and discriminatory.

Even if the FDA switches to a one-year deferral, it still makes the mistake of targeting high-risk groups instead of high-risk behaviors. Go back to the CDC statistics (Table 1a) and you'll see that 48 percent of those newly diagnosed with HIV are African-American. Do you hear anyone clamoring for African-Americans to be barred from donating blood? No, because that would be discriminatory, and it wouldn't target the issue at hand. After all, why should a high-risk heterosexual male who has unprotected sex with multiple partners get a free pass while a homosexual male who is in a committed relationship and doesn't have anal intercourse gets punished? Looking at a potential donor's behaviors is a more accurate proxy than targeting homosexual males as a group. Italy went from a lifetime ban to an individualized risk assessment, which had no adverse impact on the incidence rate of HIV (Suligoi et al., 2013).

The American Osteopathic Association and the American Medical Association have both realized that the science does not support such prohibitions. I know the FDA is trying to be as risk-averse as humanly possible, but there's a fine line between justifiable, precautionary measures and counterproductive measures with nothing to show for them except blood banks experiencing a shortage of donated blood. If the ban were lifted, it would mean 615,300 additional pints of blood; a one-year deferral would still mean an extra 317,000 pints (Miyashita and Gates, 2014). Whatever minimal risks exist are considerably outweighed by the clear benefit of helping close the shortage of donated blood so people can receive the medical services they need and deserve. I hope the FDA realizes that its policies are causing more harm than good, and that it uses scientific evidence to overturn this ban that can only be described as bloody idiotic.

Wednesday, December 3, 2014

Focusing on Police Body Cameras and Best Practices for Law Enforcement

What has been going on in Ferguson, Missouri has had the country quite riled up about race relations in America. It has become politicized enough that, shortly after the release of a White House review of law enforcement practices on Monday, President Obama recommended appropriating $75M to purchase 50,000 police body-mounted cameras. It should be no surprise that the events in Ferguson would elicit such a response. Personally speaking, I'm more perturbed by the increased police militarization in America that the Ferguson situation exemplified, which is something the White House review addresses. Regardless, it gets me wondering whether equipping police officers with body cameras is such a good idea.

If one had to summarize the case for police body cameras in a single word, it would be "accountability." Since the shooting of Michael Brown on August 9, there has been considerable clamoring for police officers to wear body cameras to capture footage of officers on the job. Not only are these cameras supposed to hold police officers accountable for their actions in order to reduce complaints of police misconduct, but they are also supposed to protect officers from false accusations of wrongdoing. Humans tend to behave better when they think they are being watched, which is the reasoning behind body cameras and their purported efficacy.

While body cameras have the potential to alter behavior for the better, skeptics worry about how they can adversely affect law enforcement, as is illustrated by this Madison Police Department report. Do you think a confidential informant is going to want to talk to a cop with a camera streaming footage? Can a camera be turned off if a citizen requests it? Can this new technology be abused? Should police camera footage become public record? How much would these body cameras infringe upon the Fourth Amendment? Setting aside the privacy issues for citizens and police officers for a moment, there are also technological impediments.

To date, the battery life on a camera can be as short as a couple of hours or as long as twelve. The technology can always improve, but this calls into question the camera's ability to capture everything. Even if we assume that the camera never malfunctions during an entire tour of duty and the video is never tampered with, the camera is still not going to provide a completely accurate telling of events because, given the limits of the lens's field of view, it cannot capture everything. A video without context can be misinterpreted.

None of this even touches upon the dollar amount for such equipment. Obama is looking to spend $75M on 50,000 cameras, which comes out to $1,500 per camera. Considering that cameras range from $119 to $1,000 apiece, I'm not sure why Obama is asking for this much money. Even so, this amount would only cover a fraction of the nearly 630,000 law enforcement officers. The cost that looms even larger than the initial purchase, however, is video data storage. According to a recent Department of Justice study on police body cameras, the bulk of the costs goes to data storage, as the New Orleans Police Department has already discovered (p. 32). Again, technology can always improve, but considering the budget cuts that have been taking place since the Great Recession, it is going to be difficult to fund such an initiative, even with federal funding assistance.
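The hardware arithmetic, at least, is easy to check; this quick sketch uses only the figures cited above and leaves out storage, which the DOJ study identifies as the real cost driver:

```python
# Quick check on the hardware figures cited above (storage costs not included).

federal_request = 75_000_000     # proposed federal appropriation
cameras_requested = 50_000
officers_nationwide = 630_000    # approximate number of law enforcement officers

cost_per_camera = federal_request / cameras_requested
print(f"Implied cost per camera: ${cost_per_camera:,.0f}")                          # $1,500
print(f"Share of officers covered: {cameras_requested / officers_nationwide:.0%}")  # ~8%
```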

As for whether body cameras work, since they are a relatively nascent technology, the empirical evidence is scant (see the Office of Justice Programs assessment here). Aside from the Department of Justice study cited above, one case study that has shown promising success is that of the Rialto Police Department. In this case study, use of force by officers decreased by nearly two-thirds, and citizen complaints decreased by 88 percent. There are some other case studies out there, not to mention the UK Home Office's report on the topic, but there is still a lack of a causal link because it's not clear whether the citizens, the officers, or both behave better as a result of being videotaped. Additionally, implementing the cameras is still new enough that we don't have anything close to a complete cost-benefit analysis. While the cameras cost money to purchase and maintain, there is the question of how much they help prevent the costs of police misconduct. The NYPD, for instance, paid out $152M last year as a result of claims of police misconduct, which is a lot more than body cameras would have cost. Do body cameras improve or erode relations between law enforcement officers and the citizenry? Do they have the ability to intimidate victims or even suspects, thereby altering their testimony?

Aside from it being new technology, I have my ethical and legal qualms about body cameras. Even so, if the intuition behind them is correct, I have to agree with the American Civil Liberties Union (ACLU) in its 2013 report: they would be an overall improvement over not having cameras. To test that assertion, more cities, such as Washington DC and New York City, should experiment to see if body cameras work. That being said, we should not treat this as a catch-all or a silver bullet for law enforcement reform. Body cameras can help with law enforcement, but this policy would have to work in conjunction with other policies if we want to improve the overall state of local law enforcement.




10-15-2015 Addendum: The University of South Florida just released a case study showing that body cameras are indeed effective.

Monday, December 1, 2014

A Blessing On Your Head: Why Should a Jew Keep a Kippah on His Head?

For those who wear religious garb, it can be a spiritually rewarding experience. I wish I could say the same for myself when it comes to wearing the kippah. Although it has certainly had its rewarding moments, on the whole, wearing a kippah has been something I have struggled with both in theory and in practice. What I would like to do here is highlight the reasons for wearing [or not wearing] the kippah while illustrating some of my personal struggles with this Jewish practice.

The kippah (כיפה, literally meaning "dome"), or what is alternatively called the yarmulke (יארמולקע; a Yiddish word with origins in the Aramaic for "awe of the King," i.e., G-d), is a head covering that primarily observant Jewish males wear, although there are some non-Orthodox females who also wear a kippah. It is the most identifiable mark of a Jew, yet its origins are non-biblical in nature, which is where part of my frustration with wearing the kippah originates (the High Priest wore a head covering in Exodus 28:4, 7, 30, but this is a far cry from an obligation to wear a kippah). Why should that be frustrating? When you look at the history of Jewish practices, many of them were either created or evolved during the post-biblical era, which is fine. Religion is meant to evolve. Where I get frustrated is when we blur the line between custom and law, which, at least for me, is what the kippah exemplifies. The kippah is first mentioned in the Talmud (more on that momentarily), and it was pretty much a practice for the particularly pious (חסידים). Only later did it turn into a widely accepted practice. Keep in mind that the Talmud does not forbid one to walk around bareheaded. Even during the Geonic period (6-11 c.), only those participating in services wore a head covering. The best way I can find to describe the legal status is the following: the Shulchan Aruch (Orach Chayim 91:3) and Mishneh Torah (Ahava, Hilchot Tefilah 5:5) require it when praying in synagogue, studying Torah, and the like. Rabbis in pre-modern times had considerable debate as to whether there was an actual obligation (Orach Chayim 2:6 says there is one), and that ambiguity leads me to believe that it is not a de jure obligation, although it is still a highly encouraged measure of piety. Still, many observant Jews treat it as if it were a de jure obligation. Even putting that debate to the side for a moment, what are some of the explanations for wearing the kippah?

  1. Fear and awe of G-d. The Talmud does not make many references to obligatorily wearing a head covering, but one general reference to wearing a kippah is in Tractate Shabbat 156b. The parable in the tractate goes as such: the mother of Rav Nachman bar Yitzchak was told by astrologers that her son would become a thief unless he changed his ways. Upon hearing this, the mother told her son to cover his head so that he would feel the reverence of Heaven (יראת הי). When using this parable, we should consider a couple of points. First, should we base a practice on astrology? While astrology was considered science back then, we now know that it is hokum. Second, it is possible that this was an extreme case rather than the norm. Jews already had tzitzit and tefillin, amongst other signs to remind them of G-d's presence. Do we need more? Perhaps human spirituality is fragile enough that more mitzvahs to remind us of His presence are not a bad idea.
  2. Piety. According to the Talmud (Kiddushin 31a), R. Huna did not walk four amot (אמות; cubits. Four cubits was the Talmudic definition of one's personal space) without having his head covered because he was humble enough to be constantly reminded that there is always something above us. This talmudic passage could be why rabbinic authorities, including the Chida, the Magen Avraham, and the Vilna Gaon (also see here), viewed the kippah as a measure of piety instead of a halachic obligation. If the kippah inculcated piety, I would be more appreciative of how such an external action could translate to internal awareness. It's hard to consistently have that level of piety. What happens when the desired effect is no longer there, or was never there in the first place? Has it lost its meaning of piety when Jewish individuals who commit reprehensible acts wear the kippah as a façade? I do wonder from time to time how much the kippah has lost its meaning, at least in a sociological sense, when bad men commit wrongdoings while wearing it.
  3. Jewish pride. Even in the Middle Ages, Jews became more and more insistent on wearing the kippah to distinguish themselves from their bareheaded, non-Jewish neighbors because they were fed up with their Christian oppressors. Since Christians would take their hats off in reverence, Jews did the exact opposite (Taz 8:3). (This could explain why Sephardic Jews have not developed the same level of universal practice with the kippah.) Thankfully, we don't live in that world anymore. In an American context, Jews have great religious freedom, freedom that is unprecedented in the history of the Jewish diaspora. Rather than wear the kippah as a response to oppression, Jews in America, at least, can proudly wear the kippah as an expression not only of one's religiosity and Jewish identity, but also as a sign of how far American society has come in terms of religious tolerance.
  4. Jewish identity. 
    • This one I would like to point out based on a recent study session with a chevruta of mine when we were studying the Kitzur Shulchan Aruch. Jewish law dictates that Jews are supposed to keep certain particularistic practices to distinguish the Jew from the non-Jew/gentile (Leviticus 18:3) in order to prevent assimilation (Deuteronomy 12:30). The Kitzur Shulchan Aruch discusses this in the specific context of clothing. Here's an important question: which non-Jews are we not supposed to emulate? We don't live in the same world that our ancestors did. Jews don't live in ghettos or shtetls. The world is much more globalized than it used to be, and there is a lot of heterogeneity in what non-Jews wear. Given the wide variety of clothing in modern times, it's all the more difficult to say that wearing a certain style of clothing is a prima facie form of assimilation. We don't live in a world in which Jews wear distinctively Jewish clothing. Since many observant Jews buy their clothes from non-Jewish distributors, Jews have fewer ways to distinguish themselves in terms of clothing. Aside from wearing tzitzit, which a sizable number of observant Jewish men tuck in anyway, the only way the Jew has to externally identify himself as Jewish, at least in terms of apparel, is the kippah. As such, the kippah has more relevance in terms of symbolism than it did in the past.
    • Wearing a kippah in everyday situations does more than show that you are a Jew who observes Jewish laws and practices. There are enough identity politics in the type of kippah one wears that one can often determine the wearer's political or religious affiliation simply by looking at it. For instance, the black velvet kippah is worn by yeshivish types. A crocheted kippah is typically worn by Modern Orthodox Jews or religious Zionists, although there are some non-Orthodox Jews who wear them. The satin ones are worn by non-Orthodox Jews who borrow them from the synagogue on a one-time basis because they don't normally wear their own. A lot can be said about the type of Judaism one practices based on what one wears on one's head.

Postscript: Should the kippah be something worn by all Jewish males, or should it return to being a status symbol of true piety? This question gets at the true struggle I feel with the kippah because I think that the kippah should not simply symbolize one's Jewishness, but one's level of internal piety. The vast majority of us do not feel a 24-7 closeness to G-d (דבקות). Because I do not consistently have that sense of piety or humility, there are times when I feel out of place or dishonest wearing the kippah. There is that part of me that feels that the kippah is for the truly pious. Conversely, it could very well be precisely because the דבקות is not always there that I need the kippah as a reminder that G-d is above me. I will continue to have that internal conflict as to whether the kippah is meant to be a status symbol of one's inner piety or whether it is meant to engender a further sense of inner piety. Whichever approach I take, I'm just glad that I can continue having this struggle over something whose status under Jewish law isn't unambiguous.

Friday, November 28, 2014

Is Pursuing a College Education Worth It? Depends On Who You Are and the Type of Degree

Educational attainment is one of the best indicators of one's success in professional development and overall social mobility. Having a solid education, and more specifically, going to college (which for purposes of this discussion, we will define as attending a four-year college that results in [at least one] Bachelor's degree), is key to one's future. However, this conventional wisdom has been tested in recent years in light of crushing student debt. Student loan debt has not only exceeded $1T, surpassing credit card debt, but its growth also signals that something is wrong with the way we finance, price, and value a college education. Yet somehow, conventional wisdom continues to prevail. Do the facts tell us it is still worthwhile to go to college, or should we as a society start looking at alternatives to conventional wisdom?

First of all, there is a significant difference between going to college and completing a four-year program that results in a Bachelor's degree. America currently has a 41 percent college dropout rate. Many individuals pay tuition, struggle with the college experience financially or academically, and end up dropping out with nothing to show for it but student loan debt that they have to find a way to pay off as soon as possible. Those who drop out of college are four times more likely to default on their debt because of their inability to pay. Should it be surprising that those who do not finish their college education have the hardest time paying off student loan debt? As a recent Pew Research study shows, not acquiring that four-year education costs a lot (p. 16).

Access to a college education does not guarantee success, but does completing college do the trick? According to the Federal Reserve Bank of San Francisco's recent findings, as well as other academic literature (e.g., Oreopoulos and Petronijevic, 2013), the answer is "yes." Even with soaring tuition, these findings conclude that the costs of higher education can be recouped by age forty. Similarly, the Federal Reserve Bank of New York (FRBNY) recently found that the amount of time needed to recoup the costs of college has dropped from twenty years to ten, which goes to show that financing student loan debt is no more difficult than it was a generation ago (Akers and Chingos, 2014). Even after recouping those costs, the average college-educated worker still earns $800,000 more than the average high school graduate by retirement age. Investing in a four-year college education provides a higher rate of return than investing in stocks or AAA corporate bonds. As a matter of fact, the rate of return on a college education is much higher than it was in the 1970s. That being said, not all experiences of college-educated individuals are equal (Carnevale and Cheah, 2013) because there is no such thing as "the average college student."

For one, if it takes longer than four years to complete college, the rate of return is lower. When the Federal Reserve Bank of New York looked at the average wage for the bottom quarter of wage earners with a Bachelor's degree in comparison to those with just a high school diploma, there wasn't a huge difference in wages. There is also a major difference in salary based on the degree one pursues: the average engineer is going to make a lot more money than the average theater or studio arts major. While college graduates have an easier time finding employment, the FRBNY also found that 46 percent of recent college graduates are underemployed (i.e., they're not using their degree), as are 35 percent of college graduates as a whole, which is to say that landing a good job is not easy. This means that out of 100 people who attend college, 41 don't graduate, and at least a quarter of those who do graduate, or roughly 15 people, make a salary comparable to that of someone with only a high school education. For those who decide to attend a four-year college, the odds of clearly coming out ahead are not quite half. None of this, of course, factors in the networking value of college, the pursuit of learning, the social benefits of being college-educated (e.g., less likely to commit crime, longer lifespan, higher quality of life), the forgone earnings while in college, or the potential stress caused by college.
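Working through that cohort arithmetic explicitly, using the shares cited above:

```python
# Cohort arithmetic behind the paragraph above. The dropout rate and the
# low-earnings share of graduates are the figures cited in the post.

cohort = 100
dropout_rate = 0.41        # share of entrants who never finish the degree
low_wage_share = 0.25      # bottom quarter of graduates earn roughly high-school wages

dropouts = cohort * dropout_rate                # 41
graduates = cohort - dropouts                   # 59
low_wage_grads = graduates * low_wage_share     # ~15

better_off = graduates - low_wage_grads
print(f"Out of {cohort} entrants: {dropouts:.0f} drop out, "
      f"{low_wage_grads:.0f} graduate into roughly high-school wages, "
      f"and {better_off:.0f} clearly come out ahead.")
```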

There are very few investments that are foolproof. The current higher education system is far from perfect, and here are but a few ideas to improve the situation: making sure potential college students have better consumer information, stopping the government from keeping the interest rates on student aid artificially low, income share agreements (also see here), reforming the antiquated accreditation system, and promoting alternatives to the traditional four-year college, e.g., online learning, for those who could benefit from an Associate's degree or vocational certification and still earn a comparable wage premium. There are many who try to acquire a college education and do not succeed. Much like with any other investment, one has to assess the possible risks and potential rewards before pursuing it, which is important to consider given that most of the fastest-growing jobs require a postsecondary education. It is also true that a four-year college is not for everyone, and some would be better off with an alternative postsecondary certification or degree. Even so, the marketplace still values the four-year degree. If you do decide to pursue it and successfully complete a four-year education, it is one of the best investments you can make.

12-3-2014 Addendum: The American Enterprise Institute just published a report illustrating why it's difficult to have a huge payoff from a college education, particularly for students from low-income families, and what can be done to help young adults have a better future.

Wednesday, November 26, 2014

Parsha Vayetze: Did Jacob Really Try to Bargain with G-d?

I find Jacob to be one of the more peculiar and intriguing characters in the Torah because he comes off as one of those tortured souls who struggles with G-d. Jacob did not have a clean-cut path. Jacob tricked his brother, Esau, into trading his birthright for lentil soup. Jacob also deceived his father, Isaac, into giving him the blessing that technically belonged to Esau. Along the way, Jacob wrestled, worked seven years only to be tricked by Laban, and dealt with the loss of Joseph when Joseph was sold into slavery. Although he was one of the Patriarchs, his life was the most tumultuous of them all. Jacob's peculiar story is also illustrated after the famous dream-revelation of the angels ascending and descending the ladder that reached the sky. When Jacob woke up from said dream, he named the site where he lay Bethel (בית אל), or House of G-d. At this moment, he makes a seemingly odd vow:

וידר יעקב, נדר לאמר: אם הי אלהים עמדי, ושמרני בדרך הזה אשר אנכי הולך, ונתן לי לחם לאכל ובגד ללבש. ושבתי בשלום, אל בית אבי והיה הי לי לאלהים. והאבן הזאת אשר שמתי מצבה הי, בית אלהים וכל אשר תתן לי עשר אעשרנו לך.

And Jacob uttered a vow, saying, "If G-d will be with me, and He will guard me on this way upon which I am going, and He will give me bread to eat and a garment to wear. And if I return in peace to my father's house, and the L-rd will be my G-d; then this stone, which I have placed as a monument, shall be a house of G-d, and everything that You give me, I will surely tithe to You." 
-Genesis 28:20-22

This is not the only time we see this sort of conditionality in a vow (Judges 11:30-31, I Samuel 1:11, II Samuel 15:8). What makes Jacob's conditional vow unique is that G-d had already promised Jacob the very conditions for which he asked (Genesis 28:15). So what's going on?

Perhaps Jacob was wary of the dream's validity. It might have been an actual prophecy, but it could just as easily have been a mere dream. This could have been an instance in which Jacob was simply hedging his bets (Zohar 1:150b). Let's assume, though, that Jacob was not skeptical of the dream's veracity and actually considered it to be a bona fide prophecy. In that case, Jacob's conditionality doesn't make sense if G-d had already promised these provisions to him.

Some commentators, such as Rashi, assumed Jacob used the word אם (if) because he was legitimately unsure as to whether G-d would fulfill His promise. Ramban believed the word אם is used because Jacob feared that he might sin, and thus forfeit what G-d had promised. Radak pointed out that Jacob only asked for necessities, not luxuries, which is the behavior of righteous people. Although it might not seem spiritual to ask for material provisions, even if only the bare minimum needed to survive, we have to remember that it is difficult, if not impossible, to keep to G-d's ways if we do not even have the most basic amenities. That is why Sforno commented that Jacob's supplication would help ensure that he could follow G-d's will to the fullest and not falter.

Since the word אם can also mean "when," it is feasible that Jacob was expressing his faith in G-d and simply declaring what he would do once he returned in one piece. This interpretation is implicit in the Midrash (Genesis Rabbah 70:6), which discusses how Jacob's vow was meant to be an example for how future generations are to praise G-d. According to this interpretation, Jacob's vow was not one of conditionality, but of the utmost confidence in G-d. Jacob had found faith in G-d, and when I say "found faith in G-d," I don't mean the belief that G-d will literally provide for everything, but rather that we can be thankful for what we have and have enough of a sense of equanimity to know that we can adapt to whichever difficulties come our way in the future. Rather than engaging in a petty form of spiritual quid pro quo, Jacob was actually on the spiritual path that would help him come to terms with himself and transition from being Jacob to becoming Israel.

Monday, November 24, 2014

How Neutral Is Net Neutrality?: Keep the Government Out of Internet Regulation

Net neutrality has been making the news a lot lately. Both sides make it seem like if things do not go their way, it will be the end of the Internet as we know it. A couple of weeks ago, President Obama reaffirmed his support for net neutrality. Ted Cruz replied that net neutrality is like Obamacare for the Internet. In a rather amusing video from a few months back, comedian John Oliver called protecting net neutrality "preventing cable company fuckery." What is it about net neutrality that has people so worked up?

Net neutrality is the idea that both Internet service providers (ISPs) and governments should treat all data, content, platforms, and sites on the Internet equally. For proponents of net neutrality, no net neutrality means that cable companies act as "content gatekeepers" and essentially gouge consumers by demanding a toll for an "Internet fast lane." I'm no fan of Big Business, and a lot of that has to do with its collusion and rent-seeking with Big Government, but if we're griping about how companies like Comcast wield monopolistic power because monopolies are inefficient, why should we entrust the government with the same monopolistic power? Do we think that the Federal Communications Commission (FCC), the agency that censors expletives on television and hardly has a history of impartiality, is going to permit unfettered access to the Internet? Whether it is health care or education, heavy government regulation of a sector has a track record of stifling failure. A University of Michigan economics professor and one of his graduate students found that franchising reform allowing for deregulation of the cable industry resulted in lower service prices (Bagchi and Sivadasan, 2013).

Looking at the economics of net neutrality (also see here), net neutrality is tantamount to price regulation. Whether we're discussing price floors, price ceilings, or subsidies, forbidding price discrimination via price regulation has a way of distorting the market for the worse, as is shown by a study conducted by the New York Law School (Davidson and Swanson, 2010). For instance, net neutrality can impose costs of anywhere between $10 and $55 per month per customer (Stratecast, 2010).
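
As a stylized illustration of the price-regulation point, consider a toy market with linear supply and demand; the coefficients are invented for demonstration and are not drawn from any of the studies cited above:

    # Toy market: capping the price below equilibrium creates excess demand,
    # i.e., a shortage. The curves and numbers are made up for illustration.
    def quantity_demanded(price):
        return 100 - 2 * price

    def quantity_supplied(price):
        return 3 * price

    equilibrium_price = 100 / 5          # 100 - 2p = 3p  ->  p* = 20, q* = 60
    price_cap = 12                       # regulated price held below equilibrium

    shortage = quantity_demanded(price_cap) - quantity_supplied(price_cap)
    print(f"Equilibrium price: {equilibrium_price}, shortage under the cap: {shortage}")  # 40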

A tiered Internet system seems to go against the idea of those who view Internet access as a right. Rather than view the Internet as a right, how about viewing it as a good that is paid for based on the amount of bandwidth used or the number of megabytes consumed? Although all traffic ultimately travels as bits, not all uses of the Internet consume equal amounts of bandwidth, so all data on the Internet are not created equal. Netflix or Hulu should be charged more because they transfer far larger amounts of data. It's hardly unfair to pay for a good or service based on the quantity or quality consumed (see the sketch below). After all, that is how markets work.
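
To make the pay-for-what-you-use idea concrete, here is a hypothetical metered-billing sketch; the base rate, allowance, and overage rate are invented for illustration and do not reflect any actual ISP's pricing:

    # Hypothetical usage-based billing: a base rate covers an included data
    # allowance, and heavier users pay per additional gigabyte.
    BASE_RATE = 40.00        # dollars per month, includes the allowance below
    INCLUDED_GB = 300
    OVERAGE_PER_GB = 0.10    # dollars per GB beyond the allowance

    def monthly_bill(gb_used):
        overage_gb = max(0, gb_used - INCLUDED_GB)
        return BASE_RATE + overage_gb * OVERAGE_PER_GB

    for usage in (50, 300, 1200):    # light browsing vs. heavy video streaming
        print(f"{usage:>5} GB -> ${monthly_bill(usage):.2f}")
    # The heavy streamer pays $130 and the light user pays $40: payment scales
    # with consumption, just as it does for most other metered goods.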

Advocates of net neutrality present dire hypotheticals: fewer services, higher costs, limited choices, network discrimination, or the end of the Internet as we know it. The problem is that they are just that: hypotheticals. Even if ISPs have the technological capability to block certain websites, they don't, because it's bad business. Blocking certain websites would mean driving current customers to competitors. As for less competition, the FCC provides data (see Figures 1, 5b, Maps 2-3) showing not only that Internet connectivity is improving, but that most counties have access to multiple providers. When looking at Internet download speed by country, America's ranking is still above those of many developed nations, and even for the countries ranked above the United States, the OECD points out that they have virtually no open access rules. Plus, we also need to keep in mind that speed is hardly the only metric for determining the quality of broadband consumption. There is also Internet affordability in terms of access to entry-level high-speed broadband, as well as mobile accessibility, jitter, and latency.

The Internet is not a monolithic entity, but rather a decentralized network of networks. To adapt to ever-evolving technology, the Internet needs to remain as competitive as possible. For there to be sensible regulation of any kind, one would need to point to a market failure, such as ISPs restricting customer access to certain sites in order to increase their profit margins. Considering that there is a lack of consensus on whether such a market failure even exists (Hazlett and Wright, 2012), there is no need to implement net neutrality. Freezing today's business models in place with net neutrality regulation would stifle Internet innovation. The deregulated approach has served the Internet well and would continue to do so. We don't need further regulations; we need to maintain a competitive market. Repeal local franchising regulations so that they don't act as a barrier to entry into the market. Create the right climate for businesses to invest, and the broadband market will expand even more. America needs to get off the net neutrality bandwagon if it wants to continue having a thriving Internet.

5-17-2017 Addendum: The Competitive Enterprise Institute provides a nice primer on net neutrality.

8-23-2017 Addendum: The American Enterprise Institute (AEI) released a paper examining net neutrality rules in 53 countries, and found that net neutrality does not spur Internet innovation. 

Thursday, November 13, 2014

Reaching My Limit With Peak Oil Theory Predictions

Back in elementary school, I was taught that oil was a nonrenewable resource, and because of that, we should use our resources wisely. Nothing wrong with that in and of itself. It's only later down the road that you encounter more nuanced arguments about the issue, such as peak oil theory. Just to clarify, this is not a question of "when will we run out of oil?" M. King Hubbert's theory, which dates back to 1956, states that oil production will hit a certain maximum flow rate, after which it will lull into a decline (see below).
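
For those who want to see the shape behind the theory, here is a minimal sketch of a Hubbert-style curve: cumulative production follows a logistic, so the production rate is bell-shaped. The parameters are illustrative and are not fitted to actual U.S. data:

    # Hubbert-style production curve: the production rate is the derivative of
    # a logistic, peaking when half the recoverable resource has been produced.
    # All parameters are illustrative, not a fit to actual U.S. production data.
    import math

    Q_TOTAL = 200.0   # assumed ultimately recoverable oil (billions of barrels)
    B = 0.08          # assumed steepness of the curve
    T_PEAK = 1970     # assumed peak year

    def production_rate(year):
        x = math.exp(-B * (year - T_PEAK))
        return Q_TOTAL * B * x / (1 + x) ** 2

    for year in (1950, 1970, 1990, 2010):
        print(year, round(production_rate(year), 2))
    # Output rises toward 1970, peaks at Q_TOTAL * B / 4 = 4.0, and then
    # declines symmetrically -- the bell curve the theory predicts.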

Looking at US oil production, it did hit a peak in 1970, after which it went into decline. Then a funny thing happened. In 2009, the number of barrels produced per annum started going up again, and we have seen an upward trend ever since. What caused the detour from the bell curve?

In a word: technology. Peak oil theory can be likened to drinking beer. For proponents of the theory, "the glass starts full and ends empty, and the faster you drink it, the quicker it's gone." Innovations in extraction technology have given us such an advantage that we have been able to increase our production over the past few years, which means the metaphorical glass of beer can be refilled. Even better, in its 2014 International Energy Outlook, the U.S. Energy Information Administration finds that petroleum production will be increasing for the foreseeable future (Table A4), and the International Energy Agency finds that oil production will start plateauing in 2040.

For argument's sake, let's say that the simplistic theory is true, even though the International Energy Agency is not on board with it. What then? Should we care? After all, there is only so much oil, which means the world supply will eventually shrink and prices will rise. Taxes, environmental policies, geopolitical strife, corruption, and mismanagement of resources can all affect oil supply, which is all the more reason we need to diversify our energy portfolio so we can minimize risk if and when this situation arises. Diversification is simply a good investment strategy. Even so, it doesn't bother me, because one of four things will happen: we'll try harder to find additional reserves, we'll learn to live with less, we'll develop technologies that entail less oil consumption [per capita], or we'll find energy substitutes for oil, the last of which would help make alternative energy more affordable. There's no reason to freak out, even in the short-to-medium term. Yes, we want to use our resources wisely, but we shouldn't resort to alarmism over what technology and innovation can mitigate or even eliminate.

Monday, November 10, 2014

Obamacare's Mandate of Pre-Existing Condition Coverage Doesn't Fix Health Care

Prior to the enactment of the majority of Obamacare provisions on January 1, 2014, there were certain insurance policies that would not cover expenses due to pre-existing conditions. For the purpose of this discussion, a pre-existing condition is a health condition that existed prior to the writing and signing of a contract. Whether the pre-existing condition was covered after a period of time or never, proponents of Obamacare used this "common-sense policy" as part of the plan to gain sympathy and pass the bill. At first glance, it might seem heartless and cruel to use the pre-existing condition rule to deny millions of innocent people access to health care. In this scenario, it would at best be prohibitively expensive to buy insurance if you are branded with a pre-existing condition. But is that what's going on here? Why would such a rule exist in the first place? And does prohibiting such exclusions help the health care system or make it worse?

If I had to summarize the reason for excluding coverage based on pre-existing conditions in the first place, I would do so in two words: adverse selection. For insurance underwriters to most accurately figure out what your monthly premium should be, they need to assess factors such as age, gender, tobacco usage, geographic location, family history, marital status, profession, and, as much as it kills some people, one's current health. Without that information, pricing a risk pool becomes essentially arbitrary. Without allowing underwriters to do their job, all you do is shift the costs onto the young and/or healthy, which unsurprisingly has caused health care premiums to increase greatly since Obamacare was enacted into law. It's equally unsurprising that there is a youth enrollment problem with Obamacare: if Obamacare guarantees coverage of pre-existing conditions, why should healthy people purchase insurance prior to becoming sick, G-d forbid? You know that Obamacare is a raw deal for young and/or healthy people when you have to coerce them into taking it with the individual mandate.
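
Here is a toy illustration of how adverse selection plays out when premiums cannot reflect risk; the group sizes and expected costs are invented for demonstration, not drawn from any actual insurer's data:

    # Toy adverse-selection example: with one community-rated premium for
    # everyone, low-risk customers are overcharged relative to their expected
    # costs, and if they exit the pool, the break-even premium rises.
    # All numbers are invented for illustration.
    pool = [
        {"group": "young & healthy", "members": 70, "expected_cost": 1_500},
        {"group": "older / chronic conditions", "members": 30, "expected_cost": 9_000},
    ]

    def break_even_premium(groups):
        total_cost = sum(g["members"] * g["expected_cost"] for g in groups)
        total_members = sum(g["members"] for g in groups)
        return total_cost / total_members

    print(f"Community-rated premium, full pool: ${break_even_premium(pool):,.0f}")   # $3,750

    pool[0]["members"] = 35   # half the healthy group balks at the price and drops out
    print(f"Premium after healthy members leave: ${break_even_premium(pool):,.0f}")  # $4,962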

American Enterprise Institute scholar Mark Perry puts it quite succinctly as to why mandating coverage of pre-existing conditions makes no sense: "You call State Farm the day after your car has been in a major accident, and inquire about getting a quote for car insurance, hoping that your extensive 'pre-existing body work' will be covered?" Although Perry uses three other types of insurance as examples, the point remains: if pre-existing conditions aren't covered by other forms of insurance, why should health insurance be any different? Taking pre-existing conditions into account while determining insurance premiums isn't discriminating against the sick. It's simply the way risk assessment works for insurance.

Those who pushed for Obamacare wanted to use scary numbers to make you think that huge numbers of people would have been denied health care coverage. Former Health and Human Services (HHS) Secretary Kathleen Sebelius went as far as claiming that 129 million people would be deprived of health care if we didn't do something about pre-existing conditions. Although that claim can be construed as "technically true" when looking at the raw data (and even then, the Government Accountability Office report puts the number of those considered to have pre-existing conditions anywhere from 36 million to 122 million), using it is as emotionally charged as it is egregious because it didn't take into account other factors, such as the number of individuals who were actually being denied coverage because of pre-existing conditions. This is all the more true considering that government safety nets such as Medicare and Medicaid, as well as employer-based health insurance, exist. These data count conditions like asthma, hypertension, back issues, and diabetes--health issues that could theoretically trigger a denial, but in practice rarely did. The vast majority of Americans are covered either by government insurance or by employer-based health insurance, the latter of which is subject to regulations as a condition of its so-called tax break.

Plus, if this were the case, why hadn't millions upon millions been denied coverage prior to Obamacare based on their "pre-existing conditions"? You'd think we would have noticed all these people without health insurance by now, but alas, that's not the case. One in eight individuals applying for health insurance could have potentially been denied coverage based on pre-existing conditions. Considering that only about 27 million people directly purchased health insurance when this became a hullabaloo in 2009 (Census, Table C-1), we're talking about 3.4 million people. 3.4 million is a far cry from 129 million, don't you think? Plus, if pre-existing condition coverage were that big of an issue, why did enrollment in Obamacare's Pre-Existing Condition Insurance Plan peak at only 115,000 individuals prior to the prohibition of such exclusions? I guess the demand for such insurance was not nearly as large as the scaremongers wanted us to believe.
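
Spelling out the arithmetic behind that comparison, using only the figures cited above:

    # The arithmetic behind the 3.4 million figure, using the numbers cited above.
    individual_market = 27_000_000   # people directly purchasing insurance circa 2009 (Census)
    denial_rate = 1 / 8              # roughly one in eight applicants potentially deniable

    potentially_denied = individual_market * denial_rate
    print(f"Potentially denied in the individual market: {potentially_denied:,.0f}")        # ~3,375,000

    claimed = 129_000_000            # the 129 million figure cited by HHS
    print(f"Overstatement relative to that estimate: {claimed / potentially_denied:.0f}x")  # ~38x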

Not only was this never going to affect millions of Americans, but the mandate doesn't get at the heart of the problem, which is twofold: 1) Why is health insurance for the vast majority of Americans tied to one's job? 2) Why bother assessing risk when the federal government has made a de facto promise to cover any losses in the short term?

Even if leaving 3.4 million individuals "in the dark" is unacceptable, the mandate still does not address the major issues caused by employer-sponsored health insurance, most notably the lack of portability of health coverage. The problem with employer-sponsored health insurance is that being covered is very much contingent upon staying at your current job. If you lose your job or decide to quit, then you have to find another way to be insured, which not only creates the headache of having to find and acquire new health insurance, but also triggers the pre-existing condition status. If I had a health insurance plan I liked and were able to take that insurance with me once I left my current employer, any change in my health status wouldn't translate into anything "pre-existing" because I would already be covered.

Repealing the tax breaks given to employer-sponsored health insurance would be the soundest policy reform for limiting the pre-existing condition problem. The issue is that we would have to contend with the complex system of subsidies and regulations that created this pain in the first place. Until we can untangle this mess, we need to come up with some shorter-term solutions in the interim. One is to have state-funded, high-risk insurance pools. Another is to have "continuous coverage" protections for those in transition between insurers, which at least provides the portability required in a functioning marketplace. If you're going to allow for protections in employer-based health insurance, you can allow for those same protections in the individual market until the government can do something to actually reform the system. Or how about the Cato Institute's suggestion of health-status insurance?

Mandating coverage of pre-existing conditions is pretty much like any other price control: it severs the feedback loop between producers and consumers, which is a recipe for disaster. Until people in power realize that a freer health care market, and not more government regulation, is the solution, we're just going to continue to have the same intertwined, convoluted problems, with America's health care prices skyrocketing in comparison to those of other developed countries, which I can tell you hardly makes America's health care the envy of the world.

11-16-2015 Addendum: The Mercatus Center recently put out an e-book entitled The Pre-existing Condition: Market Incentives for Broader Coverage.

3-7-2017 Addendum: The Foundation for Economic Education put out a nice article on how the current pre-existing condition rules make the sick worse off.