Monday, December 22, 2014

Eating Cheese on Chanukah and Why Using the Story of Judith As a Basis for This Practice Has as Many Holes as Swiss Cheese

When I was in synagogue this past week, I learned about a peculiar minhag (custom) in Jewish practice: eating cheese on Chanukah. In spite of what people might think, latkes, or potato pancakes [commonly eaten on Chanukah], were not originally made from potatoes, but from cheese. Considering that the potato was a New World crop, this makes sense. But that isn't the disturbing part. The disturbing part is how the practice of eating cheese on Chanukah began. This practice is first mentioned in the Rema's gloss on the Shulchan Aruch; the Rema, also known as R. Moshe Isserles, attributes the practice to the milk that "Judith (יהודית) fed to the enemy."

This made me ask an initial, but important, question: who in the world is Judith? The Book of Judith is a deuterocanonical text, which is to say that it made it into Christian Scripture but never into the Jewish version of the Bible (Tanach). Why was this text not considered for the Jewish canon? The story itself provides some context.

Although the text was allegedly written in Hebrew, the oldest surviving version is in ancient Greek. The text depicts the Greeks conquering Judea, with the evil general Holofernes declaring that every Jewish virgin had to sleep with a Greek official or be put to death. Someone had to stop the madness, so Judith took it upon herself to do so. Essentially, Judith used her good looks to enter the Greek camp and seduce Holofernes. One night, she fed him cheese, which made him thirsty for wine. Once she had brought him to the point of inebriation, she decapitated him. The decapitation eroded Greek morale, and the Greeks retreated.

Whether it's that Judith decapitated someone or that she used her sexual allure and prowess to get the job done, Judith's example of valor was probably not something the rabbis wanted women emulating. Is the message that religious communities want to send their daughters that exploiting a situation by using your sexual appeal is acceptable as long as the ends justify the means? Perhaps this is why the Book of Judith never made it into the Jewish canon, or perhaps it is due to the historical anachronisms in the text or its possible Greek origin. What's even more ridiculous about using this story as a basis for a Chanukah minhag is that Holofernes wasn't Greek; he was Assyrian. The story takes place during the rule of Nebuchadnezzar (6th century B.C.E.), centuries before the Chanukah story, so the connection between Judith and Chanukah is chronologically impossible. It's also interesting to note that the earliest mention of this practice dates to the 14th century.

I don't like the fact that a practice in Judaism, even if it's a minor one, is based on an apocryphal, fictional text with historical inaccuracies and a problematic protagonist. Fortunately, I was able to find another explanation for this practice because the primary, traditional one was very perturbing. This insight comes from the Ben Ish Chai. When the Greeks occupied Judea, they banned three specific Jewish institutions: maintaining the Jewish calendar [based on the lunar cycle], Shabbat, and circumcision. The Hebrew word for "month" is חודש, which begins with ח. The second letter of the word Shabbat (שבת) is ב. The third letter in the word מילה (מילה, as in ברית מילה, is the Hebrew term for circumcision) is ל. These three letters spell the word חלב, which is the Hebrew word for "milk," which gives us the basis for eating dairy on Chanukah.

It's a tenuous explanation, but let's go with it. The story of Chanukah took place during a time when the Greek rulers banned practices vital to Jewish observance. Milk is a source of sustenance. Not only does the Bible refer to Israel as the "land of milk and honey" (e.g., Exodus 3:8, 33:3; Deuteronomy 31:20), but milk symbolizes life in Judaism, as is observed by the prohibition of mixing meat and dairy. Much like milk can nurture life, Jewish rituals and practices nourish the Jewish people.

On the one hand, universalist morals and ethics are a vital part of Judaism. On the other hand, without its ritualistic, particularistic practices, there is nothing to distinguish Judaism from other world religions. If consuming dairy products on Chanukah is to remind us of anything, it is that studying Torah, keeping Shabbat, affixing mezuzot, and the plethora of other Jewish ritual practices engender, vitalize, and help define Jewish spirituality.

Thursday, December 18, 2014

Reading About CIA Interrogation Methods Sort of Felt Like Torture

I know, I know. I'm running a tad behind on the news. I just moved to a different part of the country and I'm still getting settled in, so please cut me some slack on catching up here. I heard about the Senate's report on the CIA's detention and interrogation methods last week, and I have wanted to comment ever since, even if briefly.

Since 9/11, the way the United States approaches national security hasn't been the same. Fortunately, we didn't become a police state (Thank G-d!), but it became easier to justify actions in the name of national security, and what's worse is that most Americans are okay with their liberties being violated for security's sake. Didn't Benjamin Franklin say something about those who would give up liberty for security deserving neither? I'm not just talking about starting two wars in the Middle East or passing the Patriot Act. The Senate's report shed a lot of light on what was taking place in the world of intelligence gathering. The CIA's interrogation techniques included "wallings," sleep deprivation, threatening the detainee's family with bodily harm, and the ever-infamous waterboarding.

There's the ethical question of whether we should be torturing people in the first place. There are those who are absolutely opposed to violating anyone's human rights to acquire national security intelligence. Proponents can certainly construct a hypothetical extreme enough that one would reluctantly acquiesce, at least from a utilitarian perspective, to a violation of international law if the situation were that dire. Torture is akin to poison: "dosage matters." Given the information I presently have, I'm not convinced the risks were so high that we needed to use such methods. The problem with national security issues is that classified information and security clearances create such an information asymmetry that only the top echelon has adequate information to assess who is a threat and who is not. Objectively, we cannot know how deep the rabbit hole goes.

However, let's give the CIA the benefit of the doubt for a moment: let's say that using torture to obtain pertinent national security information is reasonable, and let's also assume that the detainee actually has pertinent information to divulge. The intuition behind torture as an intelligence gathering method seems sound. You put the detainee through physical and psychological pain to get him to talk because he can't take the pain any longer. It has been done for centuries, so the intuition is nothing new. Perhaps there is enough of a gradation in the quality and quantity of interrogation techniques that the CIA is justified in its actions. The problem is that the report illustrates that such interrogation methods are counterproductive, which makes intuitive sense, especially if detainees are simply saying whatever their interrogators want to hear. The CIA has even admitted that, at least up until 2013, it had no way of assessing the effectiveness of its interrogation methods. If the interrogation methods don't provide the CIA with the information it requires in the first place, what good is torture? The lack of oversight from the legislative or executive branches, or even the CIA's own Office of Inspector General for that matter, does not help the situation, either.

I'm about ready to head to work, so although I could say more, I need to summarize my thoughts. Unsurprisingly, people criticize these methods. Proponents point out that we're nothing like China or North Korea. While it is true that America's methods are mild in comparison to the Middle Ages, if we are going around the world trying to promote democratic values, then America needs to "walk the walk" and act upon what it preaches as a matter of policy. I'm not here to say that America shouldn't have any counterterrorism measures whatsoever. There certainly is room for a conversation on what the CIA's role should be in providing the social good of national security. What I am trying to say is that if the CIA is to have an active role in national security, the policy alternatives to improve the situation should be pursued tactfully, with accountability, and with the threat's overall risks in mind. We should expect the highest quality of governance from all bureaucratic agencies, and national security organizations like the NSA or the CIA are no exception. I hope that this report is a stepping stone to implementing some real national security reform.

Monday, December 15, 2014

Did the Minimum Wage Cause the Great Recession to Last Longer?

Economists and historians will be debating well into the future as to what caused the Great Recession. A comparably amusing debate to watch is what caused the Great Recession to linger on as long as it did. My money has been on unemployment benefits being the primary culprit (see here and here), and yet another theory comes along to complement the "unemployment benefits" theory: minimum wage laws. Shortly before the Great Recession began, Congress passed the Fair Minimum Wage Act of 2007, which gradually raised the federal minimum wage from $5.85 to $7.25 per hour. Minimum wage proponents like to think that gradual and "minute" minimum wage increases cause, at worst, negligible economic harm, but recent research continues to add to the evidence that the minimum wage is nowhere near as benign as proponents would have us believe. According to Professors Jeff Clemens and Michael Wither of the University of California, San Diego, the minimum wage hikes caused a net job loss of about one million jobs (Clemens and Wither, 2014).

Since some states already had minimum wages above the proposed federal minimum, Clemens and Wither were able to measure the effects with a legitimate control group, which is no easy task in the social sciences. In doing so, the authors found that the employment-to-population ratio, i.e., the share of working-age adults who are employed, decreased by 0.7 percentage points, which accounts for 15 percent of the overall decrease during the Great Recession. This helps make the study more credible because plenty of other minimum wage studies focus only on certain demographics (e.g., fast food workers, teenagers) instead of the macro effects of minimum wage legislation.
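As a quick sketch of what those two figures imply (using only the numbers cited above, nothing from the underlying paper): if the 0.7 percentage point drop attributed to the minimum wage is 15 percent of the overall decline, then the implied total decline in the employment-to-population ratio during the Great Recession works out to roughly 4.7 percentage points.

```python
# Back-of-envelope check on the Clemens and Wither figures cited above.
drop_attributed = 0.7   # percentage-point drop attributed to the minimum wage
share_of_total = 0.15   # that drop's share of the overall recession decline

# Implied total decline in the employment-to-population ratio
implied_total_decline = drop_attributed / share_of_total
print(round(implied_total_decline, 1))  # 4.7 percentage points
```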

This research also points out significant declines in economic mobility (Clemens and Wither, Table 6), which matters because it reemphasizes the importance of low-skilled work as a stepping stone for upward mobility: affected workers were five percentage points less likely to reach a middle-class job. The other point this research makes is that the minimum wage does not do nearly as good a job of targeting low-skilled workers as the earned income tax credit does (Clemens and Wither, p. 33). The disemployment effect caused more-educated workers to take on internships (p. 26), whereas less-educated workers faced increased odds of simply being unemployed (p. 27).

The fact that the minimum wage increases unemployment and decreases economic mobility does not shock me in the slightest. While it is true that some individuals enjoy an improved quality of life because of the minimum wage, let's not forget that it comes at the cost of depriving other individuals of the opportunity to gain experience and achieve higher-paid jobs in the long run, which did nothing to ameliorate the economic conditions of the Great Recession. This will hardly be the end of the minimum wage debate because it has become such a hot-button topic over the years. Nevertheless, if we want to help the poor, we should come up with policy alternatives that actually help them, and spoiler alert: the minimum wage is not such an alternative.

Thursday, December 11, 2014

Does Income Inequality Cause Decreased Economic Growth?

The income inequality debate never seems to die. Its most recent revival was due to the Organisation for Economic Co-operation and Development (OECD) and its latest report (summary here) on "Trends in Income Inequality and its Impact on Economic Growth." Although the OECD's analysis has more variables, the essential relationship that the OECD establishes is between the Gini coefficient and the GDP growth rate.

What is the Gini coefficient? It is a measure of statistical dispersion used to represent the income distribution of a given nation, and it has become the gold standard for measuring income inequality. Although it works nicely because it's relatively easy to compare across countries, it still has some flaws. One is that it compares income, not wealth: two countries with very different amounts of wealth can have the same Gini coefficient. It can also produce the same coefficient for two countries with different income distributions, because the underlying Lorenz curves can have different curvatures. Furthermore, the Gini coefficient does not account for utility or economic opportunity.
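To make the first flaw concrete, here is a minimal sketch (with made-up numbers) showing how a poor country and a rich country can share the exact same Gini coefficient. The `gini` function uses the standard mean-absolute-difference formula: the average absolute income gap between all pairs of people, divided by twice the mean income.

```python
def gini(incomes):
    """Gini coefficient: mean absolute difference between all pairs
    of incomes, divided by twice the mean income."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

# Two hypothetical countries: one far wealthier than the other,
# but with identically shaped income distributions.
country_a = [10, 20, 30, 40]            # poorer country
country_b = [1000, 2000, 3000, 4000]    # richer country

print(gini(country_a))  # 0.25
print(gini(country_b))  # 0.25 -- same inequality, very different wealth
```

Because the measure is scale-free, it captures relative spread only, which is exactly why it says nothing about absolute wealth or living standards.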

Much like with GDP, until we can come up with a better metric, we have to make do with what we have. Even if the OECD uses GDP as the metric for economic success, I still take issue with the temporal comparison, because over time a developing country is going to experience an overall decline in its GDP growth rate for reasons having nothing to do with income inequality. Correlation has suddenly turned into causation, and the fact that the OECD recommends wealth redistribution, a policy that does more than its fair share of harm, based on a correlation that can easily be explained by other factors is most unfortunate. The OECD says that redistribution would work if the government could do so efficiently (OECD, p. 19), which I find to be a highly tenuous assumption.

Although there is enough reason not to jump to conclusions with the OECD's report, what did the OECD end up finding? The ratio of the income of the richest ten percent to that of the poorest ten percent increased from 7:1 in the 1980s to 9.5:1. The OECD's economic analysis suggests that this increased income inequality has had a statistically significant, negative impact on economic growth. Conversely, what the OECD finds that is equally intriguing is that "no evidence is found that those with high incomes pulling away from the rest of the population harms [economic] growth" (p. 6). This is important because the typical income inequality narrative is that the top echelon is gobbling up the resources while the "99 percent" have nothing left.

Looking at the OECD study, the issue is not with the rich getting richer per se, but rather with the poor not having the same level of access to resources in order to develop their human capital. This is especially true when looking at educational attainment for lower-income families (p. 28), which was one of the biggest kvetches of the OECD in this study. If the OECD study is correct, then income inequality only affects those with a lower educational attainment. Those with parents who have medium to high educational attainment are not affected by income inequality (p. 25-26). 

The OECD focuses on the bottom of the income distribution, as it well should. Anti-poverty initiatives are not enough, according to the OECD (p. 29), but perhaps they are not enough because the current programs are insufficient for accomplishing the task at hand. It very well could be because many anti-poverty initiatives are handled by government bureaucracies, which makes me wonder whether government intervention to reduce income inequality would actually increase economic growth. There are many ways to revive economic growth, and I honestly don't think simply redistributing wealth is going to help. The IMF actually published a report showing that redistribution's effect is at best negligible, and that it can very well make things worse (Ostry et al., 2014, p. 23). There is no need to knock rich people down a peg with poor policy like a wealth tax because, by the OECD's own admission, the "one percent" isn't de facto causing the issues at hand. I've discussed education and anti-poverty initiatives in the past, but it should go without saying that we should focus on policies that help make the poor less poor and provide them with the opportunity to access the tools they need to succeed in life. Whatever those policies end up being, we should improve the quality of education and encourage entrepreneurship instead of going after the ever-intangible and elusive "income inequality."

Monday, December 8, 2014

The Fiscal Costs of the Death Penalty and How It Costs More Than an Arm and a Leg

The death penalty has caused much debate in this country. Does the death penalty deter crime? Should the government have the power over life and death? Is the death penalty appropriate if even one innocent person is executed? These are questions that typically surround the debate, but there is one I would like to cover: does the death penalty cost more than life in prison? This was a question the state of Nevada's Legislative Auditor seemed to answer in its audit released recently.

Looking at 28 death penalty cases in Nevada, the audit found that the average death penalty case costs $532,000 more than a case in which the death penalty is not sought (p. 10), which is nearly twice as much as a murder case seeking life without parole. Although incarceration costs were lower for cases that sought the death penalty (Exhibit 7), what caused the death penalty cases to exceed the non-death penalty cases was average case costs (Exhibit 5). Most of the costs are racked up before the trial even begins (Exhibit 10), which is all the more damning since most cases in which the prosecutor seeks the death penalty do not actually end in a death sentence (Exhibit 2). Death penalty cases require more lawyers, more preparation, more investigators, more special motions, more witnesses, more experts, and a longer jury selection, not to mention a longer appeals process (Exhibit 6).

Many other states, such as California, Indiana, Maryland, Louisiana, New Jersey, Montana, Connecticut, North Carolina, Ohio, and Kansas, have attempted to capture the costs and have come to the same conclusion: the death penalty costs way more than life without parole. The money that was spent on the death penalty could have been spent on real crime control measures, such as solving, preventing, or prosecuting other crimes. The evidence is clear. If one wants to make an argument for the death penalty, trying to make the argument based on cost savings is not the way to go.

Friday, December 5, 2014

The FDA's Lifetime Ban on Gay Men Donating Blood Makes My Blood Boil

AIDS has been a frightening disease since HIV, the virus that causes it, was identified in 1983. Since men who have sex with other men, also known as MSM, were the predominant carriers, the Food and Drug Administration (FDA) decided to ban these men from donating blood. In some countries, deferral policies allow MSM to donate after a certain period of time. In the United States, however, no such deferral is allowed. This ban has been FDA policy for over thirty years, but the FDA has decided to revisit the topic and possibly change the policy so that MSM could donate one year after their most recent male-to-male sexual encounter. Part of the change of heart is that we have realized AIDS is not a "gay disease." Part of it is that we have developed technology to better screen for HIV. Has the ban outlived its usefulness, or should it still be in force?

According to the FDA, the purpose of this ban is to use "multiple layers of safeguards in its approach to ensuring blood safety....A history of male-to-male sex is associated with an increased risk for exposure to and transmission of certain infectious diseases, including HIV, the virus that causes AIDS. Men who have had sex with other men represent approximately 2% of the US population, yet are the population most severely affected by HIV." Essentially, the FDA's concern is with safety and making sure that donated blood is not contaminated with HIV. Let's see how valid the FDA's concern really is.

According to CDC statistics, male-to-male sexual contact is the most common transmission category of HIV (CDC, 2012, Table 1a): it accounted for 64 percent of overall diagnoses of HIV, totaling 30,695 estimated diagnoses in 2012. As for the number of individuals who carry the virus, MSM account for 52 percent of that population, totaling 451,656 men (Table 14a). Including undiagnosed individuals yields a comparable result (CDC, 2011, Table 9a): 596,600 MSM out of 1,144,500 persons living with HIV, i.e., 52 percent.

So the 596,600 MSM with HIV make up roughly 0.19 percent of the 316 million American populace. Even after filtering out the 23.3 percent of Americans who are under 18, these individuals are only about 0.25 percent of the adult population. Assuming that gay men make up six percent of the overall male population, that makes for 7.27 million gay men over 18. Even if one makes the highly tenuous assumptions that a) only gay men are MSM, and b) all gay men are MSM, that would still mean that only about eight percent of gay men have HIV. Even taking this unreasonably high estimate at face value, is the ban justified by the science? In short, no.
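For readers who want to follow the arithmetic, here is the back-of-envelope calculation reproduced in Python. The inputs are the figures used above: the CDC's 596,600 estimate, a 316 million population, 23.3 percent under 18, and the six percent share of gay men among adult males, which is this post's assumption rather than a CDC figure.

```python
# Inputs: figures cited in the post (6% gay share is an assumption).
msm_with_hiv = 596_600          # MSM living with HIV (CDC 2011 estimate)
us_population = 316_000_000     # approximate US population
share_under_18 = 0.233          # share of Americans under 18
gay_share_of_males = 0.06       # assumed share of adult males who are gay

adults = us_population * (1 - share_under_18)
adult_males = adults / 2
gay_adult_males = adult_males * gay_share_of_males

print(f"{msm_with_hiv / us_population:.2%}")    # 0.19% of all Americans
print(f"{msm_with_hiv / adults:.2%}")           # 0.25% of adults
print(f"{gay_adult_males / 1e6:.2f} million")   # 7.27 million gay men over 18
print(f"{msm_with_hiv / gay_adult_males:.0%}")  # 8% of gay men, at most
```

Even under the most pessimistic assumptions, the upper bound is about eight percent, which is the figure the argument above relies on.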

Not only has our understanding of how HIV is transmitted changed, but treatment and detection have also developed since 1983. Nucleic acid tests can diagnose HIV within two weeks of infection (FDA, p. 3), whereas the window period once lasted from three to six months. Additionally, federal laws require that donated blood be tested for diseases, including HIV. The odds of HIV infection through a blood transfusion, 1 in 2,000,000, are so small as to be almost non-existent. This is why many countries have changed their policies from a lifetime ban to relatively short deferral periods. Australia found no increased rate of HIV transmission when it switched from a five-year deferral to a one-year deferral (Seed et al., 2010). Many countries, including the UK, Sweden, and Japan, have switched to one-year deferral periods. Although a one-year deferral is an improvement over a lifetime ban, it is still arbitrary and discriminatory.

Even if the FDA switched to a one-year deferral, it would still make the mistake of targeting high-risk groups instead of high-risk behaviors. Go back to the CDC statistics (Table 1a) and you'll see that 48 percent of those newly diagnosed with HIV are African-American. Do you hear anyone clamoring for African-Americans to be barred from donating blood? No, because that would be discriminatory, and it wouldn't target the issue at hand. After all, why should a high-risk heterosexual male who has unprotected sex with multiple partners get a free pass while a homosexual male who is in a committed relationship and doesn't have anal intercourse gets punished? A potential donor's behaviors are a more accurate proxy for risk than his sexual orientation. Italy went from a lifetime ban to an individualized risk assessment, which had no adverse impact on the incidence rate of HIV (Suligoi et al., 2013).

The American Red Cross, the American Osteopathic Association, and the American Medical Association have all realized that the science does not support such prohibitions. I know the FDA is trying to be as risk-averse as humanly possible, but there's a fine line between justifiable precautionary measures and counterproductive measures with nothing to show for them except blood banks experiencing a shortage of donated blood. If the ban were lifted, it would mean 615,300 additional pints of blood. A one-year deferral would still mean an extra 317,000 pints of donated blood (Miyashita and Gates, 2014). Whatever minimal risks exist are considerably outweighed by the clear benefit of helping close the shortage of donated blood so people can receive the medical services they need and deserve. I hope the FDA realizes that its policies are causing more harm than good, and that it uses science-based evidence to overturn this ban, which can only be described as bloody idiotic.

Wednesday, December 3, 2014

Focusing on Police Body Cameras and Best Practices for Law Enforcement

What has been going on in Ferguson, Missouri has the country quite riled up about race relations in America. It has become politicized enough that, shortly after the release of a White House review of law enforcement practices on Monday, President Obama recommended appropriating $75M to purchase 50,000 police body-mounted cameras. It should be no surprise that the events in Ferguson would elicit such a response. Personally speaking, I'm more perturbed by the increased police militarization in America that the Ferguson situation exemplified, which is something the White House review addresses. Regardless, it gets me wondering whether equipping police officers with body cameras is such a good idea.

If one had to summarize the case for police body cameras in a single word, it would be "accountability." Since the shooting of Michael Brown on August 9, there has been considerable clamoring for police officers to wear body cameras to capture footage of officers on the job. Not only are these cameras supposed to hold police officers accountable for their actions in order to reduce complaints of police misconduct, but they are also supposed to protect officers from false accusations of wrongdoing. Humans tend to behave better when they think they are being watched, which is the reasoning behind the body cameras and their efficacy.

While body cameras have the potential to alter behavior for the better, skeptics worry about how they can adversely affect law enforcement, as this Madison Police Department report illustrates. Do you think a confidential informant is going to want to talk to a cop with a camera streaming footage? Can a camera be turned off if a citizen requests it? Can this new technology be abused? Should police camera footage become public record? How much would these body cameras infringe upon the Fourth Amendment? Setting aside issues of privacy for citizens and police officers for a moment, there are also technological impediments.

To date, the battery life on a camera can be as short as a couple of hours, though it can be as long as twelve. The technology can always improve, but battery life calls into question a camera's ability to capture everything. Even if we assume that the camera never malfunctions during an entire tour of duty and the video is never tampered with, the camera is still not going to provide a completely accurate telling of events: given the limits of the lens's scope, it cannot capture everything. A video without context can be misinterpreted.

None of this even touches upon the dollar amount for such equipment. Obama is looking to spend $75M on 50,000 cameras, which comes out to $1,500 per camera. Considering that cameras range from $119 to $1,000 apiece, I'm not sure why Obama is asking for this much money. Even so, this amount would only cover a fraction of the nearly 630,000 law enforcement officers in the country. An even bigger cost than the initial purchase of the camera is video data storage. According to a recent Department of Justice study on police body cameras, the bulk of the cost of body cameras goes to data storage, as the New Orleans Police Department has already discovered (p. 32). Again, technology can always improve, but considering the budget cuts that have been taking place since the Great Recession, it is going to be difficult to fund such an initiative, even with federal funding assistance.
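The arithmetic behind those two claims is simple enough to lay out explicitly, as a quick sketch using only the figures cited above (the 630,000 officer count is the approximation from this post):

```python
# Back-of-envelope arithmetic on the proposed body camera purchase.
proposal = 75_000_000   # proposed federal spending, dollars
cameras = 50_000        # cameras to be purchased
officers = 630_000      # approximate number of US law enforcement officers

per_camera = proposal / cameras   # implied cost per camera
coverage = cameras / officers     # share of officers covered

print(per_camera)           # 1500.0 dollars per camera
print(f"{coverage:.0%}")    # 8% of officers covered
```

So the proposal prices each camera above the top of the quoted market range, and even then reaches fewer than one in ten officers, before any data storage costs are counted.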

As for whether body cameras work: since they are a relatively nascent technology, the empirical evidence is scant (see the Office of Justice Programs assessment here). Aside from the Department of Justice study cited above, one case study that has shown promising success is the Rialto Police Department case study. In that case study, use of force by officers decreased by nearly two-thirds, and citizen complaints decreased by 88 percent. There are some other case studies out there, not to mention the UK Home Office's report on the topic, but there is still a lack of a causal link because it is not clear whether the citizens, the officers, or both behave better as a result of being videotaped. Additionally, the cameras are still new enough that we don't have anything close to a complete cost-benefit analysis. For instance, while the cameras cost money to purchase and maintain, there is the question of how much they help prevent the costs of police misconduct: the NYPD paid out $152M last year as a result of claims of police misconduct, which is a lot more than body cameras would have cost. Do body cameras improve or erode relations between law enforcement officers and the citizenry? Do they have the ability to intimidate victims or even suspects, thereby altering their testimony?

Aside from its being new technology, I have my ethical and legal qualms about body cameras. Even so, if the intuition behind them is correct, I have to agree with the American Civil Liberties Union (ACLU) in its 2013 report that they would be an overall improvement over not having cameras. To test that assertion, more cities, such as Washington DC and New York City, should experiment to see whether body cameras work. That being said, we should not treat this as a catch-all or a silver bullet for law enforcement reform. Body cameras can help with law enforcement, but this policy would have to work in conjunction with other policies if we want to improve the overall state of local law enforcement.