Monday, December 29, 2014

Nowhere Near 1 in 5 Women Were Raped In College: Is "Rape Culture" on Campus Really a Thing?

One in five women will be raped while attending college. It's one of those statistics that illustrates that if you repeat something enough times, people will believe it. This oft-cited statistic comes from the 2007 Campus Sexual Assault study. It might sound fancy, but the study comes with some major flaws: 1) only two colleges were surveyed, 2) there was a large non-response rate, which likely inflated the figures, 3) the definition of "sexual assault" was very vague and included such actions as forced kissing, and 4) the survey questions were also vague, leaving them open to interpretation in which one could assume the worst.

Aside from the shoddy statistical analysis, I bring this up because the Bureau of Justice Statistics (BJS) released a much more thorough study earlier this month entitled "Rape and Sexual Assault Victimization Among College-Age Females, 1995-2013." The BJS uses both longitudinal and cross-sectional data to determine rates of sexual assault. What did they end up finding? Looking at 1995-2013, the rate of rape and sexual assault among college women is not 1 in 5, but rather 6.1 per 1,000 women, which is 0.03 out of 5 women. That is roughly a thirty-three-fold exaggeration! And it's all the more egregious when you consider that the rate of sexual assault on campus has declined overall since 1995 (Figure 2).
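
If you want to check my arithmetic, here is the back-of-the-envelope calculation spelled out (a minimal sketch in Python, using only the figures cited above):

```python
# Back-of-the-envelope check of the figures cited above.
bjs_rate_per_1000 = 6.1        # BJS: rapes/sexual assaults per 1,000 college women
claimed_rate = 1 / 5           # the oft-cited "1 in 5" figure

bjs_rate = bjs_rate_per_1000 / 1000       # 0.0061, i.e., ~0.03 out of 5 women
exaggeration = claimed_rate / bjs_rate    # ~32.8, i.e., roughly thirty-three fold

print(f"BJS rate: {bjs_rate:.4f}, or {bjs_rate * 5:.2f} out of 5 women")
print(f"Exaggeration factor: {exaggeration:.0f}x")
```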

Is this to say that we should condone this piggish behavior? Of course not! Sexual assault is inexcusable, as are the times when campus tribunals sweep sexual assault under the rug to artificially bolster their campus safety statistics. Forcing someone to have sexual contact against their will is a blatant violation of the nonaggression axiom. "No" means "no," and that's no less true when college students get drunk at a frat party or when the woman is scantily clad. Alcohol may fuel a man's propensity towards randiness, but it doesn't excuse deplorable behavior. The underreporting that the BJS points out (p. 1) makes a sad statement about the stigma attached to sexual assault, and that stigma should be addressed so more women report when they are sexually assaulted. Nevertheless, 0.03 out of 5 women being sexually assaulted is a far cry from 1 in 5 women.

If the premise behind feminism is gender equality, then colleges should be promoting responsible behavior for both sexes instead of encouraging segmented gender roles that exacerbate the issue. We should help women without knocking men down. There's a fine line between holding men responsible for their misdeeds and demonizing men in a "guilty until proven innocent" mob mentality because believing that women would never lie about something like this is "politically correct" (FYI: although it's rare, there are cases of false accusations, as was infamously illustrated by the Duke University lacrosse case back in 2006). Not only is sexual assault on campuses lower than claimed, but it has experienced quite the drop since the 1980s. It would be nice to live in a world without sexual assault, but it should still be noteworthy that the problem is nowhere near as prevalent as we thought, and that it has been on the decline, much like rates of domestic violence and other violent crimes in general. This is something that we should all celebrate, but I anticipate that the hardcore feminists will still advance the idea of a "rape culture," regardless of what statistics or even the people over at the Rape, Abuse, and Incest National Network (RAINN), the largest anti-sexual-assault organization, have to say about there not being a "rape culture." As RAINN points out, "Rape is caused not by cultural factors but by the conscious decisions, of a small percentage of the community, to commit a violent crime...[Blaming it on 'rape culture'] has the paradoxical effect of making it harder to stop sexual violence, since it removes the focus from the individual at fault, and seemingly mitigates personal responsibility for his or her own actions."

We should take rape and sexual assault seriously, but bemoaning "rape culture" is not the way to go. Whatever colleges decide to do, what we should stop doing is giving credence to the "rape culture" myth because, as Cathy Young over at the libertarian Reason Magazine points out, the anti-"rape culture" movement is one that has "capitalized on laudable sympathy for victims of sexual assault to promote gender warfare, misinformation, and moral panic. It's time for a reassessment."

Friday, December 26, 2014

Parsha Vayigash: Teshuvah and Forgiveness as Signs of Emotional Maturity

Although some people never grow up, many of us have this uncanny ability to handle situations more tactfully than we would have when we were younger. We find this to be the case with Joseph and his brothers in this week's Torah portion. We're at the point in the story where Judah pleads on behalf of his brother, Benjamin. Afterwards, Joseph orders everyone except the brothers to leave the room and reveals his true identity (Genesis 45:1). Instead of throwing the book at the brothers or exacting revenge, Joseph tells his brothers not to grieve because G-d had "sent Joseph before them to preserve life" (Genesis 45:5).

Joseph's reaction was remarkable. Why? Joseph's brothers threw him in a pit and sold him into slavery. Before ascending to power, Joseph had done some hard time in prison. Joseph had been put through the wringer. He had every right to be angry, and what's more, he could have repaid his brothers in kind because in ancient times, might made right. What we see is not a vengeful Joseph, but a Joseph who was longing to reunite with his family. Not only do his actions speak to this desire, but so do his words. In Genesis 45:5, Joseph said "כי למחיה שלחני אלהים לפניכם" (G-d sent me before you to preserve life). There's one problem: it wasn't G-d that sold Joseph into slavery and caused all the subsequent events that led up to that moment. It was his brothers who sold him into slavery. The text clearly says so. So why would Joseph attribute these events to G-d? Even though Joseph's dream/prophecy was correct (Genesis 37), I would postulate that Joseph cared more about family than about being right or having prophetic powers. Not only did he miss his family, but he had also realized the importance of G-d in his life. Joseph had a hard-knock life, and if it taught him anything, it's that an unfettered ego does not make for a fulfilling life. Rather than being the immature child who rubbed his conceit in his brothers' faces, he figured out the importance of forgiveness as the beginning of healing his years of angst and frustration, which are illustrated by the loud cry he let out after revealing his identity (Genesis 45:2).

Why was he so overwhelmed with emotion? Why couldn't he keep up the charade anymore? Because if we read the text closely enough, Judah and the brothers actually went through the stages of the teshuvah process. The brothers admitted their error (Genesis 42:21-23), they confessed and accepted collective responsibility (Genesis 44:16), and they showed behavioral change by being willing to become Joseph's slaves (ibid.). This essentially is the teshuvah process (Mishneh Torah, Hilchot Teshuvah, 2). Joseph didn't hand out forgiveness for free. He realized that his brothers were truly repentant because they had shown true changes in their mentality and behavior, and for that, he was able to let them back into his life. No more grudges. No more living in the past. Joseph was able to live in the present, feel at peace, and share that peace with his loved ones. The Joseph story is not only arguably the first recorded instance of forgiveness, but also a wonderful example of the power of forgiveness and reconciliation that can guide us in the relationships we have in our lives.

Wednesday, December 24, 2014

People Tend Not to Exchange Gifts with Economic Efficiency, So What Gives?

With Chanukah ending and Christmas on its merry way, I got to thinking about the practice of gift-giving. People exchange gifts as a part of the holiday spirit, but the more I think about it, gift-giving doesn't make economic sense. One of the most basic ideas in economics is that the individual knows their own consumer preferences better than anyone else. It's one of the reasons I get annoyed when the government assumes it knows what individuals want better than the consumers themselves. Whether it's the government spending money on in-kind transfers or individuals spending money to buy gifts, it creates economic inefficiencies because neither fully understands the individual's preferences. Yet people still give presents during the holiday season, so what gives?

The economics of gift-giving was heavily discussed in a 1993 study called "The Deadweight Loss of Christmas," in which economist Joel Waldfogel calculated a deadweight loss of $4-13B in 1993 dollars (roughly $7-21B in 2014 dollars). Deadweight loss takes place when a good or service is not exchanged at economic equilibrium, thereby creating economic inefficiency. The result is a loss to one party without an offsetting gain to another party. In this case, the assumption is that the gift-giver buys a gift that they think the recipient will value at the price paid. The issue is that in many instances, the gift-giver does not have an accurate sense of what gift(s) the recipient would want, i.e., there is an information asymmetry. Since the recipient often values the gift at less than what the gift-giver paid for it, economic welfare is lost in the process. The Secret Santa gift exchange is a classic example of the phenomenon at hand.
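
To make the concept concrete, here's a minimal Waldfogel-style sketch in Python; the gifts and valuations are made-up numbers for illustration, not data from the study:

```python
# Illustrative Waldfogel-style deadweight loss calculation (made-up numbers).
# Each tuple: (price the giver paid, value the recipient places on the gift).
gifts = [
    (50.00, 35.00),  # sweater the recipient will rarely wear
    (30.00, 30.00),  # book the recipient actually wanted
    (25.00, 10.00),  # novelty mug destined for the back of a cupboard
]

spent = sum(price for price, _ in gifts)
valued = sum(value for _, value in gifts)
deadweight_loss = spent - valued  # value destroyed by the information asymmetry

print(f"Spent: ${spent:.2f}; valued by recipients at: ${valued:.2f}")
print(f"Deadweight loss: ${deadweight_loss:.2f} ({deadweight_loss / spent:.0%} of spending)")
```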

As nice as it is to make an impassioned argument about the economic inefficiencies of gift-giving, it fails to account for what economists call utility. Utility is economic jargon for "the fulfillment one receives from a certain good or service." In layman's terms, it could be defined as "sentimental value," or the psychological joy felt as a result of receiving or giving gifts (Gneezy and List, 2006). There is also the argument that gift-giving is a "signal of intensity of effort in one's search (see video below on Valentine's Day and gift-giving)," or, another way of saying it: "it's the thought that counts" (Yao, 2009). Behavioral economics also postulates that there is an allure and excitement in gift-giving that brings joy to the giver, and it also strengthens the social connection between the giver and recipient. Although one cannot objectively measure it, one has to consider the social and individual utility produced by gift-giving. And who knows? Maybe by exposing the individual to something new, they might actually like it even more (read: "more utility") than a gift that would have kept them in their comfort zone.

[Video: Valentine's Day and gift-giving]

Even with social utility, the economic inefficiency is troubling to me. Making the argument of "stimulating the economy" doesn't work because while gift-giving might stimulate some consumer spending in the short run, it leaves us with fewer resources in the medium-to-long term to help build the economy of the future. Does this mean that I have an inherent problem with gift-giving or think that gift-giving should be banned? Nope! My issue isn't with gift-giving per se, but with bad gift-giving. The inefficiencies are created because the giver doesn't really know what the recipient wants. If you know the recipient well (e.g., parents buying for their children, spouses or best friends buying for each other), then economic efficiency is largely maintained. However, we don't have that sort of close relationship with most people, and since most gift-giving is done with people to whom we are more distant, the economic inefficiencies persist.

So my advice on more economically efficient gift-giving goes as follows. If you don't know the person that well, either get to know them better or directly ask them what they would like. If you are too uncomfortable asking outright or simply don't want to get to know the person better, then give in a way that is beneficial to the ultimate recipient. Cash is the most efficient form of gift-giving. If giving cash comes off as impersonal or you view it as socially unacceptable, a gift card can both personalize a gift and capture much of the economic efficiency (I say "much" because $45B in gift cards have gone unspent since 2005, which comes out to roughly $5B each year). Charities are also a good idea, although if you want to give to a place like a food pantry, give them $20 instead of $20 worth of food because food drives are just bad economics. Whatever method of gift-giving you decide on, I hope that 'tis the season for more economically efficient gift-giving. Happy Holidays!

Monday, December 22, 2014

Eating Cheese on Chanukah and Why Using the Story of Judith As a Basis for This Practice Has as Many Holes as Swiss Cheese

When I was in synagogue this past week, I learned about a peculiar minhag (custom) in Jewish practice: eating cheese on Chanukah. Despite what people might think, latkes, or the potato pancakes commonly eaten on Chanukah, were not originally made from potatoes, but from cheese. Considering that the potato was a New World crop, this would make sense. But that isn't the disturbing part. It's how the practice of eating cheese on Chanukah began. The origin of this practice is first mentioned in the Shulchan Aruch by the Rema, also known as R. Moshe Isserles. In it, the Rema attributes this practice to the milk that "Judith (יהודית) fed to the enemy."

This made me ask an initial, but important, question: who in the world is Judith? The Book of Judith is a deuterocanonical text, which is to say that this text made it into Christian Scripture, but never made it into the Jewish version of the Bible (Tanach). Why was this text not considered for the Jewish canon? The story itself could provide some context.

Although the text was allegedly written in Hebrew, the oldest surviving text is in ancient Greek. As the story is popularly retold, the Greeks conquered Judea, and the evil general Holofernes declared that all the Jewish virgin females had to sleep with a Greek official or be punished by death. Someone had to stop the madness, so Judith took it upon herself to do so. Essentially, Judith used her good looks to enter the Greek camp and seduce Holofernes. One night, she fed him cheese, which made him thirsty for wine. She plied him with wine until he was thoroughly inebriated, and then she decapitated him. The decapitation eroded the Greek morale, and the Greeks retreated.

Whether it's that Judith decapitated someone or that she used her sexual allure and prowess to get the job done, Judith's example of valor was probably not something that the rabbis wanted women emulating. Is the message that religious communities want to send to their daughters that exploiting a situation by using your sexual appeal is acceptable as long as the ends justify the means? Perhaps this is why the Book of Judith never made it into the Jewish canon, or perhaps it is due to the historical anachronisms in the text or its possible Greek origin. What's even more ridiculous about using this as a basis for a Chanukah minhag is that Holofernes wasn't Greek; he was Assyrian. The story takes place during the rule of Nebuchadnezzar (6th century B.C.E.), which was centuries before the Chanukah story, so the connection between Judith and Chanukah is chronologically impossible. It's also interesting to note that the earliest mention of this practice dates to the 14th century.

I don't like the fact that a practice in Judaism, even a minor one, is based on an apocryphal, fictional text with historical inaccuracies and a problematic protagonist. Fortunately, I was able to find another explanation for this practice because the primary, traditional one was very perturbing. This insight comes from the Ben Ish Chai. When the Greeks occupied Judea, they banned three specific Jewish institutions: maintaining the Jewish calendar [based on the lunar cycle], Shabbat, and circumcision. The Hebrew word for "month" is חודש, which begins with ח. The second letter of the word Shabbat (שבת) is ב. The third letter in the word מילה (a ברית מילה is the Hebrew term for circumcision) is ל. These three letters spell the word חלב, which is the Hebrew word for "milk," and that gives us the basis for eating dairy on Chanukah.

It's a tenuous explanation, but let's go with it. The story of Chanukah took place during a time when the Greek rulers banned practices vital to Jewish observance. Milk is a source of sustenance. Not only does the Bible refer to Israel as the "land of milk and honey" (e.g., Exodus 3:8, 33:3; Deuteronomy 31:20), but milk symbolizes life in Judaism, as is observed by the prohibition of mixing meat and dairy. Much like milk can nurture life, Jewish rituals and practices nourish the Jewish people.

On the one hand, universalist morals and ethics are a vital part of Judaism. On the other hand, without the ritualistic, particularistic practices, there is nothing to distinguish Judaism from other world religions. If consuming dairy products on Chanukah is to remind us of anything, it is that studying Torah, keeping Shabbat, affixing mezuzot, and the plethora of other Jewish ritualistic practices engender, vitalize, and help define Jewish spirituality.

Thursday, December 18, 2014

Reading About CIA Interrogation Methods Sort of Felt Like Torture

I know, I know. I'm running a tad behind on the news. I just moved to a different part of the country and I'm still getting settled in, so please cut me some slack on catching up here. I heard about the Senate's report on the CIA's detention and interrogation methods last week, and I have wanted to comment ever since, even if briefly.

Since 9-11, things haven't been the same with the way the United States approaches national security. Fortunately, we didn't become a police state (Thank G-d!), but at the same time, it became easier to justify doing things in the name of national security, and what's worse is that most Americans are okay with their liberties being violated for security's sake. Didn't Benjamin Franklin say something about how those who are willing to give up freedom for security deserve neither? I'm not just talking about starting two wars in the Middle East or passing the Patriot Act. The Senate's report shed a lot of light on what was taking place in the world of intelligence gathering. The CIA's interrogation techniques included "wallings," sleep deprivation, threatening the detainee's family with bodily harm, and the ever-infamous waterboarding.

There's the ethical question of whether we should be torturing people in the first place. There are those who are absolutely opposed to violating one's human rights to acquire national security intelligence. Proponents can certainly provide an extreme enough hypothetical where one would be inclined to reluctantly acquiesce, at least from a utilitarian perspective, to the violation of international law if the situation were that dire. Torture is akin to poison: "dosage matters." Given the information I presently have, I'm not quite convinced that the risks were so high that we needed to use such methods. The problem with national security issues is that classified information and security clearances cause such an information asymmetry that only the top echelon has adequate information to assess who is a threat and who is not. Objectively, we cannot know how deep the rabbit hole goes.

However, let's give the CIA the benefit of the doubt for a moment, and let's say that using torture to obtain pertinent national security information is reasonable, and let's also assume that the detainee actually has pertinent information to divulge. The intuition behind torture as an intelligence gathering method seems sound: you put the detainee through physical and psychological pain to get him to talk because he can't take the pain any longer. It has been done for centuries, so the intuition is nothing new. Perhaps there is enough of a gradation in the quality and quantity of interrogation techniques where the CIA is justified in its actions. The problem, as the report illustrates, is that such interrogation methods are counterproductive, which makes intuitive sense: detainees will say whatever the interrogators want to hear just to make the pain stop. The CIA has even admitted that at least up until 2013, it had no way of assessing the effectiveness of its interrogation methods. If the interrogation methods don't provide the CIA with the information it requires in the first place, what good is torture? The lack of oversight from either the legislative or executive branches, or even the CIA's Office of Inspector General for that matter, does not help the situation, either.

I'm about ready to head to work, so although I could say more, I need to summarize my thoughts. Unsurprisingly, people criticize these methods, while proponents point out that we're nothing like China or North Korea. While it is true that America's methods are mild in comparison to the Middle Ages, if we are going around the world trying to promote democratic values, then America needs to "walk the walk" and act upon what it preaches as a matter of policy. I'm not here to say that America shouldn't have any counterterrorism measures whatsoever. There certainly is room for a conversation on what the CIA's role should be in providing the social good of national security. What I am trying to say is that if the CIA is to have an active role in national security, the policy alternatives to improve the situation should be crafted tactfully, with accountability, and implemented with the threat's overall risks kept in context. We should expect the highest quality of governance from all bureaucratic agencies, and national security organizations like the NSA and the CIA are no exception. I hope that this report is a stepping stone to implementing some real national security reform.

Monday, December 15, 2014

Did the Minimum Wage Cause the Great Recession to Last Longer?

Economists and historians will be debating well into the future as to what caused the Great Recession. A comparably amusing debate to watch is over what caused the Great Recession to linger on as long as it has. My money has been on unemployment benefits being the primary culprit (see here and here), and yet another theory comes along to complement the "unemployment benefits" theory: minimum wage laws. Shortly before the Great Recession began, Congress passed the Fair Minimum Wage Act of 2007, which gradually raised the federal minimum wage from $5.85 to $7.25 per hour. Minimum wage proponents like to think that gradual and "minute" minimum wage increases cause negligible economic harm at worst, but recent research continues to add to the evidence that the minimum wage is nowhere near as benign as proponents would have us believe. According to Professors Jeff Clemens and Michael Wither of the University of California, San Diego, the minimum wage hikes caused a net job loss of one million (Clemens and Wither, 2014).

Since there were states that were already paying a minimum wage that was higher than the proposed federal minimum wage, Clemens and Wither were able to measure the effects with a legitimate control group, which is no easy task in the world of social sciences. By doing so, the authors found that the employment-population ratio, i.e., the share of employed, working-age adults, decreased by 0.7 percentage points, which accounts for 15 percent of the overall decrease during the Great Recession. This helps make the study more credible because plenty of other minimum wage studies like to focus only on certain demographics (e.g., fast food workers, teenagers) instead of the macro effects of the minimum wage legislation.
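
As a quick sanity check on those two figures, here's the implied size of the overall decline in the employment-population ratio (simple arithmetic in Python, using only the numbers above):

```python
# Implied overall decline in the employment-population ratio,
# derived from the two figures cited above.
drop_from_min_wage = 0.7   # percentage points attributed to the minimum wage hikes
share_of_total = 0.15      # the authors' estimate: 15% of the overall decline

total_decline = drop_from_min_wage / share_of_total  # ~4.7 percentage points
print(f"Implied overall decline: {total_decline:.1f} percentage points")
```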

This research also points out significant declines in economic mobility (Clemens and Wither, Table 6), which is important because it reemphasizes the importance of low-skilled work as a stepping stone for upward mobility: affected workers were five percentage points less likely to acquire a middle-class job. The other point that this research makes is how the minimum wage does not do nearly as good a job of targeting low-skilled workers as the earned income tax credit does (Clemens and Wither, p. 33). The disemployment effect caused more-educated workers to take on internships (p. 26), whereas less-educated workers were subject to increased odds of simply being unemployed (p. 27).

The fact that the minimum wage increases unemployment and decreases economic mobility does not shock me in the slightest. While it is true that some individuals see an improved quality of life because of a minimum wage, let's not forget that it comes at the cost of depriving other individuals of the opportunity to gain experience and achieve higher-paid jobs in the long run, which did nothing to ameliorate the economic conditions of the Great Recession. This will hardly be the end of the minimum wage debate because it has become such a hot-button topic over the years. Nevertheless, if we want to help the poor, we should come up with policy alternatives that actually help them, and spoiler alert: the minimum wage is not such an alternative.

Thursday, December 11, 2014

Does Income Inequality Cause Decreased Economic Growth?

The income inequality debate never seems to die. Its most recent revival was due to the Organisation for Economic Co-operation and Development (OECD) and its latest report (summary here) on "Trends in Income Inequality and its Impact on Economic Growth." Although the OECD's analysis has more variables, the essential relationship that the OECD establishes is between the Gini coefficient and the GDP growth rate.

What is the Gini coefficient? It is a measure of statistical dispersion used to represent the income distribution of a given nation, and it has become the gold standard for measuring income inequality. Although it works nicely because it's relatively easy to compare across countries, there are still some flaws with it. One is that it compares income, not wealth. Two countries with different amounts of wealth can have the same Gini coefficient, which also means that the Gini coefficient says nothing about the quality of life in a given country. Two countries with different income distributions can even produce the same Gini coefficient because differently shaped Lorenz curves can enclose the same area. Furthermore, the Gini coefficient does not account for utility or economic opportunity.
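
To illustrate that first flaw, here is a minimal sketch of the Gini coefficient computed via its mean-absolute-difference formulation; the two five-person "economies" are hypothetical:

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formulation:
    G = sum(|x_i - x_j|) / (2 * n^2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    abs_diffs = sum(abs(x - y) for x in incomes for y in incomes)
    return abs_diffs / (2 * n * n * mean)

country_a = [10, 20, 30, 40, 100]                  # hypothetical incomes
country_b = [income * 10 for income in country_a]  # ten times wealthier overall

# Both print 0.400: the Gini is scale-invariant, so two countries with very
# different levels of wealth can share the same coefficient.
print(f"Country A: {gini(country_a):.3f}")
print(f"Country B: {gini(country_b):.3f}")
```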

Much like with GDP, until we can come up with a better metric, we have to make do with the best we have. Even if the OECD uses GDP as the metric for economic success, I still take issue with the temporal comparison because as a country develops over time, it is going to experience an overall decline in its GDP growth rate for reasons having nothing to do with income inequality. Correlation has suddenly been turned into causation, and the fact that the OECD recommends wealth redistribution, a policy that does more than its fair share of harm, based on a correlation that can easily be explained by other factors, is most unfortunate. The OECD says that redistribution would work if the government could do so efficiently (OECD, p. 19), which I find to be a highly tenuous assumption.

Although there is enough reason not to jump to conclusions with the OECD's report, what did the OECD end up finding? The ratio of the income of the richest ten percent to the poorest ten percent increased from 7:1 in the 1980s to 9.5:1. Based on this, the OECD's economic analysis suggests that this increased income inequality has had a statistically significant, negative impact on economic growth. Conversely, what the OECD finds that is equally intriguing is that "no evidence is found that those with high incomes pulling away from the rest of the population harms [economic] growth (p. 6)." This is important because the typical income inequality narrative is that the top echelon is gobbling up the resources while the "99 percent" have nothing left.

Looking at the OECD study, the issue is not with the rich getting richer per se, but rather with the poor not having the same level of access to resources in order to develop their human capital. This is especially true when looking at educational attainment for lower-income families (p. 28), which was one of the biggest kvetches of the OECD in this study. If the OECD study is correct, then income inequality only affects those with a lower educational attainment. Those with parents who have medium to high educational attainment are not affected by income inequality (p. 25-26). 

The OECD focuses on the bottom of the income distribution, as it well should. Anti-poverty initiatives are not enough, according to the OECD (p. 29), but that might be because the current programs are not up to the task at hand. It very well could be because many anti-poverty initiatives are handled by government bureaucracies, which makes me wonder whether the government intervening to reduce income inequality would actually increase economic growth. There are many ways to revive economic growth, and I honestly don't think simply redistributing wealth is going to help. The IMF actually published a report showing that at best, redistribution's effect is negligible, and it can very well make things worse (Ostry et al., 2014, p. 23). There is no need to knock rich people down a peg with poor policy like the wealth tax because, by the OECD's own admission, the "one percent" isn't de facto causing the issues at hand. I've discussed education and anti-poverty initiatives in the past, but it should go without saying that we should focus on policies that help make the poor less poor and provide them with the opportunity to access the tools they need to succeed in life. Whatever those policies may end up being, we should improve the quality of education and encourage entrepreneurship instead of going after the ever-intangible and elusive "income inequality."

Tuesday, December 9, 2014

The Fiscal Costs of the Death Penalty and How It Costs More Than an Arm and a Leg

The death penalty has caused much debate in this country. Does the death penalty deter crime? Should the government have power over life and death? Is the death penalty appropriate if even one innocent person is executed? These are the questions that typically surround the debate, but there is one I would like to cover: does the death penalty cost more than life in prison? This is a question the state of Nevada's Legislative Auditor seemed to answer in its recently released audit.

Looking at 28 death penalty cases in Nevada, the average death penalty case costs $532,000 more than a case in which the death penalty is not sought (p. 10), which is nearly twice as much as a murder case pursuing life without parole. Although incarceration costs were lower for cases that sought the death penalty (Exhibit 7), what caused the death penalty cases to exceed the non-death penalty cases was higher average case costs (Exhibit 5). Most of the costs are racked up before the trial even begins (Exhibit 10), which is all the more damning since most cases in which the prosecutor seeks the death penalty do not actually end in a death sentence (Exhibit 2). Death penalty cases require more lawyers, more preparation, more investigators, more special motions, more witnesses, more experts, and a longer jury selection, not to mention a longer appeals process (Exhibit 6).
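
To spell out what the audit's figures imply (a rough sketch that takes "nearly twice as much" literally):

```python
# Rough algebra implied by the audit's figures.
# If a death penalty case costs $532,000 more than a non-death-penalty case,
# and that makes it nearly twice as expensive, then:
#   base + 532,000 ~= 2 * base  =>  base ~= 532,000
extra_cost = 532_000
base_case = extra_cost               # implied average non-death-penalty case
death_case = base_case + extra_cost  # ~$1.06M implied average death penalty case

print(f"Implied non-death-penalty case: ~${base_case:,}")
print(f"Implied death penalty case:     ~${death_case:,}")
```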

Many other states, such as California, Indiana, Maryland, Louisiana, New Jersey, Montana, Connecticut, North Carolina, Ohio, and Kansas, have attempted to capture the costs and have come to the same conclusion: the death penalty costs far more than life without parole. The money that was spent on the death penalty could have been spent on real crime control measures, such as solving, preventing, or prosecuting other crimes. The evidence is clear. If one wants to make an argument for the death penalty, trying to make the argument based on cost savings is not the way to go.

Friday, December 5, 2014

The FDA's Lifetime Ban on Gay Men Donating Blood Makes My Blood Boil

HIV, the virus that causes AIDS, has been frightening people since its discovery in 1983. Since men who have sex with other men, also known as MSM, were the predominant carriers, the Food and Drug Administration (FDA) decided to ban these men from donating blood. In some countries, deferral policies allow MSM to donate after a certain period of time. In the United States, however, no such deferral is allowed. This ban has been FDA policy for over thirty years, but the FDA has decided to revisit the topic and possibly change the policy so that MSM would face a one-year deferral after their latest male-to-male sexual encounter. Part of the change of heart is because we have realized that AIDS is not a "gay disease." Part of it is because we have developed better technology to screen for HIV. Has the ban outlived its usefulness, or should it still be in force?

According to the FDA, the purpose of this ban is to use "multiple layers of safeguards in its approach to ensuring blood safety....A history of male-to-male sex is associated with an increased risk for exposure to and transmission of certain infectious diseases, including HIV, the virus that causes AIDS. Men who have had sex with other men represent approximately 2% of the US population, yet are the population most severely affected by HIV." Essentially, the FDA's concern is with safety and making sure that donated blood is not contaminated with HIV. Let's see how valid the FDA's concern really is.

According to CDC statistics, the most common transmission category for HIV (CDC, 2012, Table 1a) is male-to-male sexual contact. This accounted for 64 percent of overall HIV diagnoses, totaling an estimated 30,695 MSM diagnoses in 2012. As for the number of individuals who carry the virus, MSM account for 52 percent of Americans living with a diagnosed HIV infection, totaling 451,656 men (Table 14a). Including undiagnosed individuals yields a comparable result (CDC, 2011, Table 9a): 596,600 MSM out of 1,144,500 persons living with HIV, i.e., 52 percent.

So 596,600 MSM with HIV make up roughly 0.19 percent of the 316 million American populace. Even if you filter out the 23.3 percent of Americans who are under 18, these individuals are only 0.25 percent of the adult population. Assuming that gay men make up six percent of the overall male population, that makes for 7.27 million gay men over 18. Even if one makes the highly tenuous assumptions that a) only gay men are MSM, and b) all gay men are MSM, that would still mean that only about eight percent of gay men have HIV. Even if we take this unreasonably high estimate at face value, is the ban justifiable on scientific grounds? In short, no.
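
Here's that back-of-the-envelope math spelled out; the six percent figure is the same assumption stated above:

```python
# Back-of-the-envelope prevalence figures from the CDC numbers cited above.
us_population = 316_000_000
msm_with_hiv = 596_600      # CDC 2011 estimate, diagnosed and undiagnosed
share_under_18 = 0.233
gay_share_of_males = 0.06   # assumption from the text

adults = us_population * (1 - share_under_18)
gay_men_over_18 = (adults / 2) * gay_share_of_males  # ~7.27 million

print(f"Share of total population: {msm_with_hiv / us_population:.2%}")   # ~0.19%
print(f"Share of adult population: {msm_with_hiv / adults:.2%}")          # ~0.25%
print(f"Upper bound on gay men with HIV: {msm_with_hiv / gay_men_over_18:.1%}")  # ~8%
```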

Not only has our understanding of how HIV is transmitted changed, but treatment and detection have also developed since 1983. Nucleic acid tests can detect HIV within two weeks of infection (FDA, p. 3), whereas the window period for older antibody tests lasts three to six months. Additionally, federal laws require that donated blood be tested for diseases, including HIV. The odds of HIV infection through a blood transfusion, 1 in 2,000,000, are so small as to be almost non-existent. This is why many countries have changed their policies from a lifetime ban to relatively short deferral periods. Australia found no increased rate of HIV transmission when it switched from a five-year deferral to a one-year deferral (Seed et al., 2010). Many countries, including the UK, Sweden, and Japan, have switched to one-year deferral periods. Although a one-year deferral is an improvement over a lifetime ban, it is still arbitrary and discriminatory.

Even a one-year deferral still makes the mistake of identifying high-risk groups instead of high-risk behaviors. Go back to the CDC statistics (Table 1a) and you'll see that 48 percent of those newly diagnosed with HIV are African-American. Does anyone hear clamoring for African-Americans to be barred from donating blood? No, because that would be discriminatory, and it wouldn't target the issue at hand. After all, why should a high-risk heterosexual male who has unprotected sex with multiple partners get a free pass while a homosexual male who is in a committed relationship and doesn't have anal intercourse gets punished? Looking at a potential donor's behaviors is a more accurate proxy than targeting homosexual males. Italy went from a lifetime ban to an individualized risk assessment, which had no adverse impact on the incidence rate of HIV (Suligoi et al., 2013).

The American Osteopathic Association and the American Medical Association have both realized that the science does not support such prohibitions. I know the FDA is trying to be as risk-averse as humanly possible, but there's a fine line between justifiable, precautionary measures and counterproductive measures with nothing to show for it except blood banks experiencing a shortage of donated blood. If the ban were lifted, it would mean 615,300 additional pints of blood. A one-year deferral would still mean an extra 317,000 pints of donated blood (Miyashita and Gates, 2014). Whatever minimal risks exist are considerably outweighed by the clear benefit of helping close the shortage of donated blood so people can receive the medical services they need and deserve. I hope the FDA realizes that its policies are causing more harm than good, and that it uses science-based evidence to overturn a ban that can only be described as bloody idiotic.

Wednesday, December 3, 2014

Focusing on Police Body Cameras and Best Practices for Law Enforcement

What has been going on in Ferguson, Missouri has had the country quite riled up about race relations in America. It has become politicized enough that shortly after the release of a White House review of law enforcement practices on Monday, President Obama recommended appropriating $75M to purchase 50,000 police body-mounted cameras. It should be no surprise that the events in Ferguson would elicit such a response. Personally speaking, I'm more perturbed by the increased police militarization in America that the Ferguson situation exemplified, which is something the White House review addresses. Regardless, it gets me wondering whether equipping police officers with body cameras is such a good idea.

If one had to summarize the case for police body cameras in a single word, it would be "accountability." Since the shooting of Michael Brown on August 9, there has been considerable clamoring for police officers to wear body cameras to capture footage of police officers on the job. Not only are these cameras supposed to hold police officers accountable for their actions in order to reduce complaints of police misconduct, but they are also supposed to protect officers from false accusations of wrongdoing. Humans tend to behave better when they think they are being watched, which is the reasoning behind the body cameras and their purported efficacy.

While body cameras have the potential to alter behavior for the better, skeptics worry about how they can adversely affect law enforcement, as is illustrated by this Madison Police Department report. Do you think a confidential informant is going to want to talk to a cop with a camera streaming footage? Can a camera be turned off if the citizen requests it? Can this new technology be abused? Should police camera footage become public record? How much would these body cameras infringe upon the Fourth Amendment? Setting aside issues of privacy for citizens and police officers for a moment, there are also technological impediments.

As of now, the battery life on a camera can be as short as a couple of hours, though it can be as long as twelve hours. The technology can always improve, but this calls into question the camera's ability to capture everything. Even if we assume that the camera never malfunctions during the entire tour of duty and the video is never tampered with, the camera is still not going to provide a completely accurate telling of events because, given the limits of the lens's field of view, it cannot capture everything. A video without context can be misinterpreted.

None of this even touches upon the dollar amount for such equipment. Obama is looking to spend $75M on 50,000 cameras, which comes out to $1,500 per camera. Considering that cameras range from $119 to $1,000 per unit, I'm not sure why Obama is asking for this much money. Even so, this amount would only cover a fraction of the nearly 630,000 law enforcement officers. An even bigger cost than the initial purchase of the camera is video data storage. According to a recent Department of Justice study on police body cameras, the bulk of the costs for body cameras goes to data storage, as the New Orleans Police Department has already discovered (p. 32). Again, technology can always improve, but considering the budget cuts that have been taking place since the Great Recession, it is going to be difficult to fund such an initiative, even with federal funding assistance.
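
For what it's worth, here's the per-camera arithmetic; the hardware price range is the one cited above, and the remainder plausibly reflects storage and other overhead:

```python
# Per-camera math behind the $75M proposal, using the figures cited above.
proposed_funding = 75_000_000
cameras = 50_000
per_camera = proposed_funding / cameras  # $1,500 per camera

hardware_low, hardware_high = 119, 1_000  # reported hardware price range
print(f"Funding per camera: ${per_camera:,.0f}")
print(f"Left over beyond hardware: ${per_camera - hardware_high:,.0f} to "
      f"${per_camera - hardware_low:,.0f} (plausibly storage and overhead)")
```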

As for whether body cameras work, since they are a relatively nascent technology, the empirical evidence is scant (see the Office of Justice Programs assessment here). Aside from the Department of Justice study cited above, a case study that has shown promising success is the Rialto Police Department case study. In that case study, use of force by officers decreased by nearly two-thirds, and citizen complaints decreased by 88 percent. There are some other case studies out there, not to mention the UK Home Office's report on the topic, but there is still no established causal link because it's unclear whether the citizens, the officers, or both behave better as a result of being videotaped. Additionally, implementing the cameras is still new enough that we don't have anything close to a complete cost-benefit analysis. For instance, while the cameras cost money to purchase and maintain, there is the question of how much they help prevent the costs of police misconduct; the NYPD paid out $152M last year as a result of claims of police misconduct, which is a lot more than body cameras would have cost. Do body cameras improve or erode relations between law enforcement officers and the citizenry? Do they have the ability to intimidate victims or even suspects, thereby altering their testimony?

Aside from it being new technology, I have my ethical and legal qualms about body cameras. Even so, if the intuition behind them is correct, I have to agree with the American Civil Liberties Union (ACLU) in its 2013 report that they will be an overall improvement over not having cameras. To affirm that assertion, more cities, such as Washington DC and New York City, should experiment to see if body cameras work. That being said, we should not treat this as a catch-all or a silver bullet for law enforcement reform. Body cameras can help with law enforcement, but this policy would have to work in conjunction with other policies if we want to improve upon the overall state of local law enforcement.

10-15-2015 Addendum: The University of South Florida just released a case study showing that body cameras are indeed effective.

Monday, December 1, 2014

A Blessing On Your Head: Why Should a Jew Keep a Kippah on His Head?

For those who wear religious garb, it can be a spiritually rewarding experience. I wish I could say the same for myself when I wear the kippah. Although it has certainly had its rewarding moments, on the whole, wearing a kippah has been something I have struggled with both in theory and in practice. What I would like to do here is highlight the reasons for wearing [or not wearing] the kippah while illustrating some of my personal struggles with this Jewish practice.

The kippah (כיפה, literally meaning "dome"), or what is alternatively called the yarmulke (יארמולקע; a Yiddish word with its origins in the Aramaic for "awe of the King," i.e., G-d), is a head covering that primarily observant Jewish males wear, although there are some non-Orthodox females who also wear a kippah. It is the most identifiable mark of a Jew, yet its origins are non-biblical in nature, which is where part of my frustration with wearing the kippah originates (The High Priest wore a head covering in Exodus 28:4, 7, 30, but this is a far cry from an obligation to wear a kippah). Why should that be frustrating? When you look at the history of Jewish practices, many of them were either created or evolved during the post-biblical era, which is fine. Religion is meant to evolve. Where I get frustrated is when we blur the line between custom and law, which, at least for me, is what the kippah exemplifies. The kippah is first mentioned in the Talmud (more on that momentarily), and it was pretty much a practice for the particularly pious (חסידים). Only later did it turn into a widely accepted practice. Keep in mind that the Talmud does not forbid one to walk around bareheaded. Even during the Geonic period (6th-11th c.), only those participating in services wore a head covering. The best way I can find to describe the legal status is the following: the Shulchan Aruch (Orach Chayim 91:3) and Mishneh Torah (Ahava, Hilchot Tefilah 5:5) require it when praying in synagogue, studying Torah, and the like. Rabbis in pre-modern times debated considerably as to whether there was an actual obligation (Orach Chayim 2:6 says there is an obligation), and that ambiguity leads me to believe that it is not a de jure obligation, although it is still a highly encouraged measure of piety. Still, many observant Jews treat it as if it were a de jure obligation. Even putting that debate to the side for a moment, what are some of the explanations for wearing the kippah?

  1. Fear and awe of G-d. The Talmud does not make many references to obligatorily wearing a head covering, but one general reference to wearing a kippah is in Tractate Shabbat 156b. The parable in the tractate goes as such: the mother of Rav Nachman bar Yitzchak was told by astrologers that her son would become a thief unless he changed his ways. Upon hearing this, the mother told her son to cover his head so he would feel the reverence of Heaven (יראת הי). When using this parable, we should consider a couple of points. First, should we base a practice on astrology? While astrology was considered science back then, we now know that it is hokum. Second, it is possible that this is an extreme case rather than a norm. Jews already had tzitzit and tefillin, amongst other signs, to remind them of G-d's presence. Do we need more? Perhaps human spirituality is fragile enough that more mitzvahs to remind us of His presence are not a bad idea.
  2. Piety. According to the Talmud (Kiddushin 31a), R. Huna did not walk four amot (אמות; cubits. Four cubits was the Talmudic definition of one's personal space) without having his head covered because he was humble enough to be constantly reminded that there is always something above us. This talmudic passage could be why rabbinic authorities, including the Chida, the Magen Avraham, and the Vilna Gaon (also see here), viewed the kippah as a measure of piety instead of a halachic obligation. If the kippah inculcated piety, I would be more appreciative of how such an external action could translate into internal awareness. It's hard to consistently have that level of piety. What happens when the desired effect is no longer there, or was never there in the first place? Has the kippah lost its meaning of piety when Jewish individuals who commit reprehensible acts wear it as a façade? I do wonder from time to time how much the kippah has lost its meaning, at least in a sociological sense, when bad men commit wrongdoings while wearing it.
  3. Jewish pride. Even in the Middle Ages, Jews were increasingly insistent on wearing the kippah to distinguish themselves from their bareheaded, non-Jewish neighbors because they were fed up with their Christian oppressors. Since Christians would take their hats off in reverence, Jews did the exact opposite (Taz 8:3). (This could explain why Sephardic Jews never developed the same level of universal practice with the kippah.) Thankfully, we don't live in that world anymore. In an American context, Jews have great religious freedom, freedom that is unprecedented in the history of the Jewish diaspora. Rather than wear the kippah as a response to oppression, Jews in America, at least, can proudly wear the kippah as an expression not only of one's religiosity and Jewish identity, but as a sign of how far American society has come in terms of religious tolerance.
  4. Jewish identity. 
    • This one I would like to point out based on a recent study session with a chevruta of mine when we were studying Kitzur Shulchan Aruch. Jewish law dictates that Jews are supposed to keep certain particularistic practices to distinguish the Jew from the non-Jew/gentile (Leviticus 18:3) in order to prevent assimilation (Deuteronomy 12:30). The Kitzur Shulchan Aruch discusses this in the specific context of wearing clothing. Here's an important question: which non-Jews are we not supposed to emulate? We don't live in the same world that our ancestors did. Jews don't live in ghettos or shtetls. The world is much more globalized than it used to be, and there is a lot of heterogeneity when it comes to what non-Jews wear. Given the wide variety of clothing in modern times, it's all the more difficult to say that wearing a certain style of clothing is a prima facie form of assimilation. We don't live in a world in which Jews wear distinctively Jewish clothing. Since many observant Jews buy their clothes from non-Jewish distributors, Jews have fewer ways to distinguish themselves in terms of clothing. Aside from wearing tzitzit, something which a sizable number of observant male Jews tuck in anyway, the only way the Jew has to externally identify himself as Jewish, at least in terms of apparel, is the kippah. As such, the kippah has more relevance in terms of symbolism than it did in the past.
    • Wearing a kippah in everyday situations shows more than the fact that you are a Jew who observes Jewish laws and practices. There are enough identity politics in the type of kippah that one wears that one can often determine political or religious affiliation simply by looking at it. For instance, the black velvet kippah is worn by yeshivish types. A crocheted kippah is typically worn by Modern Orthodox Jews or religious Zionists, although there are some non-Orthodox Jews who wear them. The satin ones are worn by non-Orthodox Jews who borrow them from the synagogue on a one-time basis because they don't normally wear their own. A lot can be said about the type of Judaism one practices based on what one wears on one's head.

Postscript: Should the kippah be something worn by all Jewish males, or should it return to being a status symbol of true piety? This question gets at the real struggle I feel with the kippah because I think that the kippah should not simply symbolize one's Jewishness, but one's level of internal piety. The vast majority of us do not feel a 24-7 closeness to G-d (דבקות). Because I do not consistently have that sense of piety or humility, there are times when I feel out of place or dishonest wearing the kippah. There is a part of me that feels that the kippah is for the truly pious. Conversely, it very well could be precisely when the דבקות is not there that I need the kippah as a reminder that G-d is above me. I will continue to have that internal conflict as to whether the kippah is meant to be a status symbol of one's inner piety or if it is meant to engender a further sense of inner piety. Whichever approach I take, I'm just glad that I can continue having this struggle over something whose status under Jewish law remains ambiguous.