No one with even a remote semblance of a moral compass likes to see starving people, all the more so when the ones dealing with malnutrition are children. Ensuring that low-income children have adequate nutrition is the sort of argument that makes the case for a National School Lunch Program (NSLP) an easy sell. Although the NSLP started off as a way to deal with "surplus" food during the Great Depression and evolved into a WWII-era national security concern to make sure people were fit for military service, it has become a program that provides school meals to millions of children across the country. I learned many other facts about child nutrition programs from the Congressional Budget Office's (CBO) latest report on the subject. The "think of the children" argument has been used in everything from trying to ban video games to banning same-sex adoption to funding universal preschool. Arguing that we should provide children with better nutrition via the National School Lunch Program is a politically easy position to maintain, but it doesn't answer the underlying question of whether the federal government's role as the nation's lunch lady does children any favors.
These federal programs provide lunch for about 30 million children through the NSLP, and breakfast for 14 million children through the School Breakfast Program (SBP). Children from households at or below 130 percent of the federal poverty line receive meals for free, children from households between 130 and 185 percent of the poverty line receive heavily subsidized meals, and children above that line receive a more modest subsidy (CBO, p. 1). While the Census Bureau has found that these programs have potentially pulled about 1.5 million children out of poverty, it also admits that its figures are most probably an overestimate, given that they assume the recipient derives the full value of a school lunch (Census, p. 19).
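To make that tiering concrete, here is a minimal Python sketch of the eligibility logic as I read it from the CBO report; the poverty-guideline figure and the function name are my own illustrative assumptions (actual guidelines vary by household size and year).

```python
# Illustrative sketch of the NSLP subsidy tiers. The guideline below is a
# stand-in (actual HHS poverty guidelines vary by household size and year);
# the cutoffs are the 130/185 percent lines described in the CBO report.

POVERTY_GUIDELINE = 24_250  # hypothetical annual guideline for a family of four

def lunch_tier(household_income: float, guideline: float = POVERTY_GUIDELINE) -> str:
    """Return the meal-subsidy tier for a given household income."""
    ratio = household_income / guideline
    if ratio <= 1.30:
        return "free meals"
    elif ratio <= 1.85:
        return "reduced-price (heavily subsidized) meals"
    else:
        return "modestly subsidized meals"

for income in (20_000, 35_000, 60_000):
    print(f"${income:,}: {lunch_tier(income)}")
```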
The federal government spends quite a bit of money on food subsidies for children in school. According to the CBO report, $12.7 billion was spent on the NSLP in 2014, $3.7 billion on the SBP, and another $3.6 billion on nutritional assistance outside of schools and on school programs during the summer (CBO, p. 1). These programs total about $20 billion per annum, which is about 0.11 percent of the country's GDP and 0.56 percent of the federal government's budget. These figures are nowhere near as big as Social Security, Medicare, or military spending, but smaller figures like these add up, and they make an impact on the country's debt-to-GDP ratio. Also, let's keep in mind that spending on these programs has doubled since the 1990s (CBO, p. 2), while K-12 enrollment has only increased about 17 percent.
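As a quick sanity check on those shares, here is the arithmetic in a few lines of Python; the GDP and federal-outlay denominators are rough 2014 ballpark figures I am assuming for illustration, not numbers taken from the CBO report.

```python
# Back-of-the-envelope check on the budget shares quoted above.
nslp, sbp, other = 12.7e9, 3.7e9, 3.6e9   # CBO, p. 1
total = nslp + sbp + other                 # ~$20 billion

gdp = 17.4e12             # assumed 2014 US GDP, roughly $17.4 trillion
federal_outlays = 3.5e12  # assumed 2014 federal outlays, roughly $3.5 trillion

print(f"total: ${total / 1e9:.1f}B")
print(f"share of GDP: {total / gdp:.2%}")               # ~0.11%
print(f"share of outlays: {total / federal_outlays:.2%}")  # ~0.57%
```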
The price tag for the taxpayer, the unsafe amounts of the chemical bisphenol A (BPA) found in school meals, and the fact that $12.5 million was spent on ineligible candidates should not be the only things we consider when looking at the effectiveness of these food subsidy programs. Some 1,400 school districts have opted out of the USDA School Lunch Program since the Healthy Hunger-Free Kids Act of 2010, which was the USDA's attempt to regulate school lunches into being healthier. The number of lunches served since 2010 has also been on the decline, and that is without accounting for increased enrollment.
The Government Accountability Office (GAO) published a damning report about the NSLP back in 2014. According to the GAO report, there was a decline in program participation of 1.6 million students, an increase in wasted food, issues complying with the stricter nutritional standards in the Healthy Hunger-Free Kids Act, and price increases on top of the already-existing subsidies. These standards have resulted in increased food costs [for students who are not beneficiaries], insufficient food storage, and the need for new kitchen equipment. Look at the fun regulations and standards set out by the United States Department of Agriculture on school lunches, and it should be no mystery why this has been an issue. Some of the crazier regulations include requiring students to take foods from at least three of the five major food groups (one being produce), prohibiting students from taking food out of the cafeteria to eat later, and making it nigh impossible to hide vegetables in more palatable foods. The School Nutrition Association (SNA), an organization of 55,000 school nutrition professionals, finds the regulations so cumbersome that it is calling for more flexibility within them.
The GAO is not the only one with intriguing results. According to a National School Boards Association survey conducted last year, 81.8 percent of schools saw significant increases in food costs, 83.7 percent of schools saw an increase in food waste, and 71.6 percent of schools noticed a decrease in student participation in the school lunch programs.
A few more related studies show similar results. One estimates that $1.2 billion a year is wasted (Cohen et al., 2014). While preliminary research suggests that there has been a slight uptick in fruit and vegetable consumption, researchers from the Harvard School of Public Health also found that the federal programs were not decreasing food waste, and that 60 to 75 percent of vegetables and 45 percent of fruits are still being discarded. Johns Hopkins University also found that although the Healthy Hunger-Free Kids Act provides healthier options, only about a quarter of the children are actually eating the healthier food. Another study, which compares produce consumption before and after the Healthy Hunger-Free Kids Act, shows that while there was a 29 percent increase in students taking fruits and vegetables, there was a 13 percent decrease in consumption because the children did not want to eat the healthier foods (Amin et al., 2015).
What should we do to bring healthier food options to our children? The CBO makes a few suggestions in its report. Two of those policy alternatives are to increase the income limit for free school meals and to increase the reimbursement rate by 10¢, neither of which deals with the underlying issues. Removing the subsidies for higher-income households would eliminate at least some of the distortionary effects of the subsidy, which could be good. The CBO also mentions the possibility of replacing the program with a smaller block grant because it would make spending more predictable and provide states with more freedom to spend the money. However, block grants would a) reduce programming for some, and b) make it more difficult to automatically adjust to economic hardship (CBO, p. 1).
Ideally, we would live in a world in which the government does not act as a micromanaging interferer, and parents capably monitor what their children eat for lunch. Parents play a much larger role in a child's life than a school cafeteria does, so it should be no surprise that parenting styles play a vital role in a child's food consumption patterns. This is why we should encourage not only students to opt out of this program, but also families to work with religious organizations, businesses, and social enterprises, e.g., Revolution Foods, to help create healthier meals for our children. It would be great to see parents parent, or at least collaborate with local organizations to provide healthier meals, but it might be too much to ask in our rat-race world in which parents feel pressed for time.
At a minimum, it would be nice not only to limit the federal government's scope, but also to leave standardization to more local authorities. The federal government shouldn't be treating children as if they were all obese individuals who can only handle very few calories lest they die of cardiovascular disease in five years. Each child's nutritional scenario is different, and school food authorities (or even better, parents) should have more flexibility instead of being constrained by a one-size-fits-all model that clearly isn't working. Parents and school food authorities have a better idea of a given child's nutrition than some bureaucrat in Washington, D.C. By creating program flexibility and simplification, we can start to focus on providing healthy nutrition to children with less interference from stifling government regulations.
Thursday, September 24, 2015
Americans Aren't As Poor As Latest Census Poverty Data Suggest
Late last week, the Census Bureau released its latest data on poverty statistics (see full report here). According to the Census Bureau, the poverty rate is at 14.8 percent, or 46.7 million people in poverty. This poverty rate is 2.7 percentage points higher than it was in 2007, the year before the Great Recession. Based on these numbers, the current average income is roughly the same as inflation-adjusted income for 1999. Per the Census Bureau's table [below], I also see that the poverty rate has hovered around 15 percent since the War on Poverty began in the mid-1960s, but I'll leave that one alone for now. Based on these numbers, it seems like the American government has barely made a dent in its poverty rate. 46.7 million people dealing with poverty sounds like an awful lot, but I have to ask whether the number is really that high so that, at the very least, we can have a contextualized conversation about poverty in the United States.
A good place to start is with how the Census Bureau measures poverty. The primary metrics the Census Bureau uses actually overstate the extent of poverty in this country. Let's take a look at what those are:
- Pre-tax income. If the government only provided a minimal, temporary safety net, then I could understand there being a negligible difference between measuring total income pre-tax and post-tax. However, that is not the world in which we live. There are many welfare programs that act as non-cash benefits for families in the United States, which brings me to my second point...
- Treatment of non-cash benefits. It is nice to see that unemployment benefits and Social Security are factored into the equation because they have a sizable impact on a family's budget. However, the Census Bureau does not include non-cash benefits in its figures. The average food stamp benefit in this country is about $125 per month. Medicaid spending, which is health care spending for low-income and/or disabled individuals, is not a small amount. The child tax credit can be up to $1,000. The average Earned Income Tax Credit (EITC) refund was about $2,500 in 2014. Housing subsidies, school lunch subsidies, the list goes on. When we start to total these non-cash, post-tax benefits, they add up to a very different amount (much like I alluded to back in 2012); I sketch that arithmetic in code after this list. If we adjust for post-tax income, much like economist Scott Winship does, households' incomes are actually higher than they have ever been.
- Poverty threshold statistics. While it's interesting to see that the Census Bureau has 48 different types of thresholds based on age and family size, does anyone find it peculiar that there is no geographic variation whatsoever? Cost of living varies from state to state, as well as between urban, suburban, and rural areas. The fact that geographic variation is not considered by the Census Bureau should give us reason to pause.
- Inflation. The official poverty thresholds are updated by the Consumer Price Index (CPI-U). The CPI-U overstates inflation in a few ways. The first is substitution bias. If the price of a good, let's say oranges, increases substantially [relative to substitute goods], then consumers are more likely to choose the lower-priced alternative; because the CPI's fixed basket does not capture that substitution, it overstates the increase in the cost of living. The second issue with the CPI-U is a quality bias. Over time, technological progress tends to increase the longevity and usefulness of a product. As an example, a computer in 1982 had far fewer capabilities and cost more than one does now. Third, there is a new product bias. Products are not introduced to the CPI's basket of goods until they become commonplace, which means the CPI misses the dramatic price drops of new technological products. Overstating inflation has implications not just for poverty statistics, but also for rates of return on investments and real GDP growth figures.
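To make the measurement point concrete, here is a minimal Python sketch of how counting post-tax and non-cash benefits can move a household across the poverty line; the income, threshold, and benefit amounts are illustrative stand-ins in the ballpark of the figures cited above, not actual Census data.

```python
# Illustrative only: benefit amounts echo the ballpark figures in the text,
# and the threshold is a stand-in for a Census poverty threshold.
THRESHOLD = 24_250  # hypothetical poverty threshold, family of four

pre_tax_income = 22_000   # the only thing the official measure counts
snap = 125 * 12           # food stamps, ~$125/month
eitc = 2_500              # average EITC refund, 2014
child_tax_credit = 1_000  # up to $1,000 per child

official_measure = pre_tax_income
adjusted_measure = pre_tax_income + snap + eitc + child_tax_credit

print(f"official: ${official_measure:,} -> poor? {official_measure < THRESHOLD}")
print(f"adjusted: ${adjusted_measure:,} -> poor? {adjusted_measure < THRESHOLD}")
```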
I don't mind having a conversation about the pervasiveness of poverty (particularly in comparison to other countries) and what we can do to mitigate it, but can we at least use statistics that tell a more accurate story so we can properly diagnose the problem?
Tuesday, September 22, 2015
What Modern-Day Jews Can Learn From the Scapegoat Ritual
Judaism is not immune from that which is seemingly bizarre. Since we're approaching Yom Kippur, the practice that I have in mind is the scapegoat ritual. In the colloquial sense, a scapegoat is someone who is blamed for the faults or wrongdoings of others. The modern-day concept of the scapegoat has some basis in the original scapegoat ritual in Leviticus 16. Essentially, the scapegoat ritual entailed using two goats: one for G-d and the other "for Azazel" (לעזאזל). The goat for G-d was sacrificed in a purification offering (Leviticus 16:9). What happened with the goat "for Azazel"? The High Priest would lay his hands on that goat and confess over it the sins and transgressions of Israel, thereby [symbolically] transferring the Jewish people's sins to the goat. The goat was then set free into the wilderness (Leviticus 16:20-22).
There hasn't been a Temple for over two millennia. Even when there was a Temple, the passage comes with a bigger issue: vicarious atonement. The fact that one can sin all year and have their sins transferred to a goat is, in all honesty, un-Jewish. I'm not saying that simply because Christians like using this passage to make some unfounded parallel between the scapegoat and Jesus, although if the sacrifice were that powerful, why would it need to be done every year? But I digress.
The scapegoat ritual is perturbing upon first glance because if taken literally, it means we don't have to care about the moral or ethical implications of our actions. "Why be good if the goat can absolve me of my sins?" It's the same reason I have an issue with the "Jewish" practice of kapparot (כפרות). Much like I do with other practices, I have to wonder whether the goat was literally supposed to atone for our sins or whether the ritual is an action-based meditation (e.g., tashlich) in which we are supposed to be motivated to something else.
Maybe the ritual symbolizes that instead of life being dictated by fate or random occurrences, we are meant to live a life with meaning and moral imperative. Maybe it symbolizes that while we have to pay a price for our sins (i.e., the first goat), we still need to confess our sins and do our best to uproot them from our lives. Interestingly enough, that is why most of the steps of teshuvah (repentance) are in the scapegoat ritual. In the scapegoat ritual, we realize we have sinned. We have to offer something to [do our best to] reverse the effects of the sin. We also have to confess our sins. The tricky part is the last step of teshuvah, which is making sure we don't commit the same sin again. Perhaps the fact that the scapegoat ritual was done every year, instead of as a one-time ritual, is to remind us that keeping ourselves in check and not screwing up is not a task for the fainthearted. Whether we revert back to a sin or shortcoming is up to the individual, not some goat dispatched into the wilderness. Even if the goat literally was meant to absolve us of sin for the previous year, we decide our own fate in terms of whether we choose to err in a certain fashion.
The great rabbi Maimonides (Rambam) viewed the scapegoat figuratively (Guide for the Perplexed, III, xlvi). In the Guide for the Perplexed, Rambam says that it's literally "not a sacrifice to Azazel, G-d forbid." For Rambam, the purpose of the ritual is to be a powerful allegory that impresses upon the mind of the individual that sins lead him to a wasteland. Ultimately, it was to be a spiritual wake-up call and turn back to G-d in teshuvah. Although we don't have a scapegoat ritual anymore, I hope we can all find that something that can wake us up and engender real change to make us more spiritual, more morally upright, and better human beings.
גמר חתימה טובה
Thursday, September 17, 2015
China's One-Child Policy at 35: Successful Family Planning or Demographic Nightmare?
Family planning policy can be quite the hot-button issue because, at least in the context of the Western world, it ranges from whether abortion should legally be a choice to whether employers should be mandated to provide birth control to their employees. It wouldn't cross the minds of most of us in the Western world to debate the merits of the government mandating abortions. However, if we were looking at this from a Sinocentric perspective, that is exactly the sort of public policy debate one would be having. Tomorrow will be the thirty-fifth anniversary of the Chinese government enacting the One-Child Policy (计划生育政策). The reason for such a draconian policy was social engineering: the Chinese government was worried about the economic and social impacts of overpopulation, and thought that limiting childbearing to one child per family was a good idea.
While the One-Child Policy is administered by the Ministry of Health, it is enforced at the provincial level with varying degrees of rigor. The Chinese government has used heavy fines, firing people from their jobs, and forced sterilizations and abortions to keep population growth at a minimum. At this juncture, the laws have been relaxed to the point where the law has de facto become a Two-Child Policy in many of the provinces. Regardless of the extent to which the laws have been relaxed, the law has been enforced with enough vigor to have an impact. According to the Chinese government, up to 400 million births have been prevented by the One-Child Policy. Setting the moral implications aside for a moment (as hard as that might be), let's ask ourselves whether it was good public policy.
Let's start with China's fertility rate. After all, it was the fear of excess population growth that triggered the One-Child Policy in the first place. Per the diagram from Pew Research [below], what we see is that the fertility rate peaked at about 6.0 in the mid-1960s and took a sharp decline to less than 3.0 by the late 1970s, which was when Chinese government officials started to consider the One-Child Policy.
Source: Pew Research
One of the major issues with the 400 million figure is that the Chinese government counts births prevented during the 1970s, a decade before the One-Child Policy was enacted. Peer countries also saw similar fertility rate declines in the latter twentieth century. As such, the vast majority of births were averted by a naturally declining fertility rate, not by the One-Child Policy (Wang et al., 2013, p. 121). Even if the One-Child Policy had never been enacted, China's fertility rate probably would have been around 1.5 by 2010 (ibid., p. 122). There ended up being 50 million families with only one child (ibid., p. 124). While it would be tenuous to assume that every single one of those 50 million families opted for one child only because of the policy, it does at least bring the number of averted births down considerably from the commonly touted 400 million.
While the impact of the One-Child Policy was much smaller than the Chinese government's estimate, the policy still skewed the sex ratio in China (ibid., p. 123). While part of the preference for males is due to historical patriarchal views (e.g., ancestral worship, property inheritance), it has more to do with the fact that sons are preferred to help maintain farmland in the rural areas. This de facto gendercide is perturbing, to say the least. The United Nations Population Fund points out that while the sex ratio ticked up a little bit between 1964 and 1982 [to 109:100, which is slightly above the natural sex ratio of 105:100], the ratio really became exacerbated after the One-Child Policy took effect. When looking at the difference between rural and urban areas, the sex ratio is less skewed in urban areas (i.e., the ratio is currently 112:100 in urban areas versus 119:100 in rural areas) since the incentive for son preference is smaller. I also have to wonder whether the One-Child Policy has been responsible for an insanely high household savings rate [to the point where people won't spend enough to help boost the economy], and whether fewer married men resulted in higher crime rates.
Then there is the matter of the dependency ratio. By 2050, there will only be about two workers to support each retiree (Howden and Zhou, 2014, p. 19). Chinese families face the "4-2-1 dilemma" in which four grandparents are supported by two parents, and two parents supported by one child. This imbalance puts quite the pressure on younger generations to support their elders. China is facing an acute worker shortage. By 2050, it is projected by the United Nations that 13.7 percent of the population will be over 65 years of age, and only 48 percent of the population will be of working age (ibid., p. 21). This downward pressure will be felt in China's GDP growth, which the Chinese government regrettably uses as its sole metric of economic success. None of this gets into the distinct possibility that the One-Child Policy has exacerbated the female suicide rate.
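For intuition on what those dependency figures imply, here is the support-ratio arithmetic in a few lines of Python, using the UN projections cited above; note that the "two workers per retiree" figure from Howden and Zhou presumably counts actual workers rather than raw population shares, which is why it comes out lower than the raw-share ratio below.

```python
# Dependency arithmetic from the UN projections cited above (ibid., p. 21).
over_65 = 0.137      # projected share of China's population over 65 in 2050
working_age = 0.48   # projected share of working-age population in 2050

print(f"elderly per working-age person: {over_65 / working_age:.2f}")  # ~0.29
print(f"total dependents per working-age person: "
      f"{(1 - working_age) / working_age:.2f}")                        # ~1.08
```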
Increasing fertility rates will face the uphill battles of economic development, urbanization, and cultural shifts, all of which disincentivize Chinese women from procreating. The problem is that China's fertility rate is at 1.5, which is well below the replacement rate. Since countries with fertility rates below 1.5 have not been able to recover, I have to wonder if it's too late for China.
Population-boosting policies require time for their results to take effect, which is just another way of saying that China is a ticking demographic time bomb. Demographically speaking, anything China does to ameliorate the situation (and that includes repealing the One-Child Policy this very second) will take at least three decades to show results.
Postscript: Even though its effects have been exaggerated, the One-Child Policy has still caused damage to Chinese society. It violated women's choices over their own bodies, changed kin relations in Chinese culture, reduced the propensity to conceive (Howden and Zhou, p. 22), created a long-term labor shortage, and adversely altered sex and dependency ratios. In terms of demographics, China's numbers are going down and will continue going down to the point where China's population might actually decrease. China's focus should not be on a population boom, but rather on what to do with its lower fertility rate. China can change its sex ratio, but much like South Korea, such change takes time (Chung and Gupta, 2007).
Sunday, September 13, 2015
The Nazarite as a Metaphor of Change on the High Holy Days
The vow of the nazir (נזיר), also known as the nazarite, is one of those more peculiar practices in the Jewish tradition. This vow, which has its origins in Numbers 6, entails three main prohibitions: drinking wine and grape juice (which was later extended to all alcoholic beverages), cutting one's hair, and coming into contact with corpses. After the nazarite completes his term as a nazarite, the individual, whether male or female, is obligated to bring offerings to the Temple. Interestingly enough, while it is possible to be a nazarite today, it is highly discouraged for two reasons. First, the vow of the nazarite would have to be a permanent vow. The second reason is that the nazarite would need to be confined to the land of Israel.
Upon first glance, this vow does not seem to be particularly spiritual. The nazarite vow does not entail a monastic lifestyle. It does not stop one from enjoying life or socializing with others. There is only one nazarite mentioned by name in the entirety of Hebrew Scripture: Samson. Upon reading the Book of Judges, we find that in spite of being an ancient rock star, Samson was violent, short-tempered, and promiscuous. None of those character traits would be considered lofty, and they could very well suggest that the nazarite vow is inefficient at inculcating change. Yet there is an entire tractate of the Talmud (Tractate Nazir), as well as a chapter of the Mishneh Torah, devoted to the particulars of the nazarite vow. What is it about the nazarite vow that is so edifying, and more to the point, what can we glean from it for the High Holy Days?
The Hebrew word nazir comes from the root נ-ז-ר, which can mean either "crown" (e.g., II Samuel 1:10) or "separation" (e.g., Leviticus 22:2). Etymologically speaking, the nazarite vow is used both to distinguish and to distance oneself from certain acts. Why does one who takes the nazarite vow feel the need to distance oneself? Based on the Talmud (Sotah 2a), Rashi asks why the case of the adulteress and the nazarite are juxtaposed. Rashi's conclusion was that whoever sees an adulteress in her disgrace should vow to abstain from wine, for such inebriation leads to lewd behavior. From Rashi's point of view, the nazarite vow can be viewed either as a preventative measure or as a measure to change one's current behavior into something holier.
The way I view the nazarite, however, is something more of a mixed bag, to the point where I consider the nazarite to be a holy sinner. On the one hand, the nazarite is willing to make a change in life and is willing to go beyond the minimal requirement to acquire holiness. Abstaining from alcohol provides one with a greater clarity of mind, letting one's hair grow allows one to be less distracted with one's physical appearance (Numbers Rabbah 10:10), and the separation from dead bodies is a step of ritual purity that goes beyond the laws of ritual purity for priests. The fact that the nazarite is able to make these changes and ascend to such holiness is why Ramban is of the opinion that the sin offering is brought because the vow has ended. Rambam also believes that the nazarite is praiseworthy (Mishneh Torah, Nezirut 10:14) because by abstention, one avoids worse evils.
On the other hand, the individual had to go to such an extreme that they were willing to prevent themselves from elevating the physical world by consuming that which is technically permitted, which is why Rambam views the fact that one had to take on the nazarite vow in the first place as a sin (Mishneh Torah, Hilchot De'ot, 3:1). The Kli Yakar opined that one who could maintain self-control did not need to take the nazarite vow. As the Jerusalem Talmud states (Kiddushin 4:12), one will have to account for all the good food and drink that G-d put in the world and that one refused to consume. Shimon HaTzaddik made it a point not to eat the offerings of a nazarite because his view was that the nazarite made the vow either in excessive guilt or excessive enthusiasm (Numbers Rabbah 10:7). R. Shmuly Yanklowitz brings another thought as to why the nazarite vow ends in a sin offering: instead of dealing with the problem head-on, we are avoiding the problem, which makes for incomplete personal change. This extreme should not normally be permitted, but perhaps it is a lesson showing us how the changes we make in our lives, even holy and lofty ones, can come with mixed results. The Mei Shiloach teaches that Tractate Nazir comes before Tractate Sotah in the ordering of the Talmud to teach us that we should have enough foresight to make sure we don't have to learn the hard way. It is edifying to learn what we should not emulate, but what more can we learn from the nazarite about how we should change ourselves?
The first thing is recognizing that one has to make the change. The Gemara (Nazir 11b) points out that if a vow is made unintentionally, e.g., the individual had been mistaken about the facts of the situation when making the vow, one can approach a halachic authority to annul the vow. For change to work, we need to have awareness of what needs to change before we make that change.
While knowing is half the battle, it is not enough to be cognizant of one's need to change. One can be aware and still continue in the undesired behavior. That could explain why, if a nazarite vow were made in a cemetery, the period of the vow cannot begin until one is no longer within the cemetery (Nazir 16b). Coming back to the metaphor, one cannot begin to change until one is in the proper frame of mind and takes that first step to start the change.
The first step towards change might not be the largest step one will take in self-transformation, but it is the most important one because it is the step to start the momentum. That fact is illustrated by the nazarite. As R. Shmuel Herzfeld points out in his recently published book Renewal, the nazarite is making a limited and realistic goal (p. 77).
Not only are the prohibitions limited in nature, but so is the timeframe of the nazarite vow. The default length of a nazarite vow is thirty days (Nazir 5a). Yes, one technically can take on a nazarite vow for longer, and even take on the nazarite vow for life. The fact that one can lengthen the vow shows that its length is to be determined through introspection into one's own particular situation. When I see the default length of the nazarite vow, I cannot help but conclude that in spite of the fact that we should have a long-term sense of where we would like to go in life, we should put greater emphasis and focus on goal-setting in the here and now. As I learned in a recent shiur with R. Haim Ovadia, we make a mistake in this sort of goal-setting by asking ourselves what our purpose is in life. R. Ovadia quoted Viktor Frankl and got us to think about the question of what our purpose in life is right now. The nazarite shows that limiting goals is important because it helps us put extra effort into not erring. That ability to devote total focus and dedication to certain aspects of our life that truly need fixing is why the root נ-ז-ר also means "crown." We crown ourselves with a realistic way to make ourselves into better human beings.
We hear about how goals need to be SMART: Specific, Measurable, Achievable, Relevant, and Time-Bound. As much as George T. Doran is credited with the SMART paradigm for setting goals, it was really G-d who provided us with the paradigm for optimal goal-setting. Only 8 percent of people are able to keep New Year's resolutions, which can be a depressing figure for those of us who want to bring about true change in our life. However, if we follow in the footsteps of the nazarite and keep our goals SMART, we can experience a sense of renewal that the High Holy Days are meant to bring in our lives.
Thursday, September 10, 2015
Are Immigrants Using So Many Welfare Services That It Is Causing a Fiscal Drain?
Immigration reform has been receiving quite a bit of attention since the presidential debates began. Certain presidential hopefuls want to scapegoat immigrants for our country's woes. We see this sentiment in the Republican primaries when Donald Trump proposes the idea of mass deportation. Fortunately, most Americans want comprehensive immigration reform that provides immigrants a path towards citizenship. The bad news is that the Center for Immigration Studies (CIS) put out a recent study on immigrants' welfare usage, whose key finding was that 51 percent of immigrant households benefit from at least one welfare program, in spite of restrictions on immigrants receiving welfare. I call this bad news because it can solidify confirmation bias against immigrants. Looking at the CIS study, can the real takeaway be that immigrants use way too much welfare?
I do like that the study used Survey of Income and Program Participation (SIPP) data from 2008-2012 to come to its conclusions because the dataset is comprehensive. However, I do have to question the study's overall methodology.
For one, the study groups people by household. This might sound great, but the issue is that Steven Camarota, the author of the report, defines a household "as using welfare if any one of its members used welfare during 2012." It doesn't matter if it is an American-born spouse using welfare or a United States-born child receiving benefits (e.g., school lunches); they still all count toward an "immigrant household" using welfare. This ambiguous definition of household allowed CIS to exaggerate the number of immigrants on welfare.
Second, the welfare programs that the study looks at are SSI (Supplemental Security Income), Temporary Assistance for Needy Families (TANF), school lunches, WIC (the Special Supplemental Nutrition Program for Women, Infants, and Children), Medicaid, and housing subsidies. As the Government Accountability Office points out, there are over 80 types of welfare programs. Most notably, CIS does not include Social Security and Medicare. How can this study be considered an understanding of welfare usage when two large programs are not even included in the analysis? CIS says that Social Security and Medicare are different because just about everyone uses them, whereas only some use welfare. Yet these are programs in which the welfare of many is determined, or at least guided, by the government. CIS wants to skirt the issue by saying that "immigrant households are creating a significant fiscal drain in a way that is not true for natives." More on fiscal drain in a moment.
Even with all of these methodological flaws, let's assume for a moment that 51 percent of immigrant households are consuming at least one form of welfare. Even so, this study suffers from an omitted-variable bias similar to the one we see when someone capriciously throws out the misleading statistic that women make 77¢ for every dollar a man earns. How so? Saying that women make 77¢ for every dollar a man earns aggregates the salaries of all men and the salaries of all women to make the comparison. Essentially, it's a false analogy because it assumes that women work the same types of jobs for the same hours at the same skill level. The statistic does not filter out differences between the groups. When one factors in the variables that differentiate the wage-earning of women and men, women earn 94¢ for every dollar a man earns. It's not a 1:1 ratio, but it's interesting to see that the raw statistic exaggerates the problem by about fourfold (a 23¢ gap versus a 6¢ gap).
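To see how aggregation alone can manufacture a gap, here is a toy Python example with invented numbers: within each job type, men and women are paid identically, yet the aggregate ratio shows women earning only about two-thirds of what men earn because the groups differ in job composition.

```python
# Toy illustration of omitted-variable bias in an aggregate wage ratio.
# All numbers are invented: within each job, pay is identical by sex.
jobs = {
    # job: (wage, number of men, number of women)
    "engineering": (100_000, 80, 20),
    "teaching":    (50_000, 20, 80),
}

men_total = sum(wage * men for wage, men, _ in jobs.values())
men_count = sum(men for _, men, _ in jobs.values())
women_total = sum(wage * women for wage, _, women in jobs.values())
women_count = sum(women for _, _, women in jobs.values())

avg_man = men_total / men_count        # $90,000
avg_woman = women_total / women_count  # $60,000
print(f"aggregate ratio: {avg_woman / avg_man:.0%}")  # ~67%, despite equal pay within jobs
```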
There is a similar "apples to oranges" comparison in the CIS study on immigration. The study assumes that the demographic compositions of immigrant families and native families are identical, and the truth is that they're not. As CIS itself points out, immigrant households are more likely to be on welfare than native households because they are poorer and less educated. If that is the case, shouldn't we compare households at similar income levels (e.g., those at 300 or 400 percent of the federal poverty line) so we can determine whether it is poverty or immigration status that makes one more likely to consume welfare? CIS also doesn't filter out illegal immigrants (something it says it will do in a forthcoming study), and it does not factor in the fact that immigrant households are larger than native households. At the very least, illegal immigrants/undocumented workers work in the underground market, which means they are less likely to pay income or payroll taxes (CIS, Figure 9), which simply illustrates the non-analogous nature of this study.
What is even more glaring is that this study only provides the percentage of households in each category (i.e., immigrant versus native) using welfare, which misleads the reader into assuming that every instance of welfare consumption, regardless of household or program, costs the same. For instance, TANF spending does not have the same fiscal impact on the American economy as Medicaid. More to the point, in order to prove fiscal drain, there need to be revenues and expenditures for each program. CIS' study falls woefully short because it does not provide the dollar amounts that immigrant households consume versus native households. How can CIS make the claim that "significantly higher welfare use associated with immigrants means that it is very likely immigration is a drain on public coffers" without providing figures on cost?
Setting that glaring omission aside for a moment, in order to determine if immigrants are indeed a drain, you need to calculate net fiscal costs, which means that you need to also add in the economic benefit that they are creating. If the economic benefit exceeds the cost, then immigrants are an asset to the United States economy. If cost exceeds benefit, then and only then could immigrants be considered a net drain. Which is it then: economic boon or drain? After looking through the available fiscal data and empirical research, not only do immigrants consume less welfare in dollar terms, but the conclusion that I came to is that immigrants, even the low-income households, are a net benefit to the United States economy.
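As a sketch of the accounting CIS skipped, here is the structure of a net fiscal impact calculation in Python; every dollar figure below is invented purely to illustrate the bookkeeping, and a fuller accounting would also credit indirect economic contributions.

```python
# Structure of a net fiscal impact calculation; all figures are invented.
def net_fiscal_impact(taxes_paid: float, benefits_consumed: float) -> float:
    """Positive means net contributor; negative means net fiscal drain."""
    return taxes_paid - benefits_consumed

# Hypothetical per-household annual figures, in dollars:
household = {"taxes_paid": 9_000, "benefits_consumed": 7_500}

impact = net_fiscal_impact(**household)
print(f"net fiscal impact: ${impact:+,.0f}")  # +$1,500 -> net contributor
```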
If determining fiscal drain were the primary purpose for CIS to release this study, then it should have provided fiscal data to affirm that claim. Some will use this study to confirm their biases against immigration, but I hope that you, the reader, decide to review the fiscal data yourself to determine the fiscal effects of immigrants on the American economy. What I will end with is this: providing non-analogous percentages of ambiguously defined households and their welfare consumption without fiscal data is specious, to say the least.
Monday, September 7, 2015
Thank Capitalism, and Not Unions, for the 40-Hour Work Week
Today being Labor Day, we are supposed to appreciate the contributions of the workers' rights movements of the early twentieth century. The best one-liner I have heard from union proponents goes along the lines of "you should thank unions because without the unions, we wouldn't have the 40-hour work week." Should we be groveling at the feet of the AFL-CIO and thank our lucky stars that unions brought the American people such a wonderful gift?
A bit of history on the matter: yes, it is true that unions had been clamoring for a forty-hour work week. In 1912, the Federal Public Works Act mandated an eight-hour day, but only for workers on federal public works. It was not until June 1938 that the Fair Labor Standards Act (FLSA) was enacted. The FLSA established a standard workweek for covered workers (44 hours at enactment, stepped down to 40 by 1940), beyond which they must be paid overtime. The most objective way to determine the unions' role in this matter is to look at data on the number of hours worked over time and determine whether unions caused the decline in work hours or whether something else did. The St. Louis Federal Reserve Bank has average weekly work-hours data from 1939 to the present, which does not help us determine the trend prior to the FLSA. To know what effect the FLSA may or may not have had on the average work week, we need to be able to look further back than 1939.
With the available datasets, what's the verdict? The Economic History Association shows that the average work week was 58.5 hours in 1900 and had decreased to 40.6 hours by 1934, four years before the FLSA's enactment; the total decrease from 1900 to 1940 was 35 percent. Here is another year-by-year breakdown, covering 1900 to 1975, that shows the same downward trend. A study from the London School of Economics (Huberman and Minns, 2007, p. 542) shows that this general downward trend had been occurring in multiple countries since 1870: the work week decreased from 63 to 49 weekly hours between 1870 and 1913 (Huberman, 2002, p. 19). Section D of the Historical Census Data also confirms this downward trend.
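As a quick arithmetic check, the declines implied by the endpoints cited above can be computed directly (the 35 percent figure uses 1940 as its endpoint, which is why it is larger than the 1934 figure below):

```python
# Percentage decline in the average work week between two cited endpoints.
def decline(start_hours: float, end_hours: float) -> float:
    return (start_hours - end_hours) / start_hours

print(f"1900 -> 1934: {decline(58.5, 40.6):.1%}")  # ~30.6%, before the FLSA existed
print(f"1870 -> 1913: {decline(63.0, 49.0):.1%}")  # ~22.2%, across multiple countries
```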
Most economists and historians reject the idea that labor unions played a primary role in reducing the number of hours worked in a given week; neither unionization nor work-hours legislation accounts for the decline (Costa, 1998, pp. 14-15). If this trend was well underway before the passage of the FLSA, what could possibly have been driving the decline if not unions?
Much like I argued when discussing sweatshops, the economy was predominantly agricultural in the nineteenth century. The productivity of labor was so low that a) people had to work 70 to 80 hours a week, and b) in many instances, child labor was necessary to keep food on the table. It was not some piece of legislation but technological development (e.g., steam power in the nineteenth century) and growth in factor productivity that enabled people to work fewer hours. By the time the FLSA was passed, such legislation was unnecessary because average work hours in the late 1930s and early 1940s were already hovering around the 40-hour mark.
We didn't need a mandate to provide us with a 40-hour work week. Market forces were already doing that for us, and they should continue doing so. Employers should be able to experiment with various work-hour structures to see whether they improve productivity and employees' well-being, and instead of being impeded by maximum-workweek laws, employees should have the freedom to work more than forty hours to increase their income if they so desire. The first Monday of every September should not be a celebration of something unions factually did not accomplish. It should be a celebration of capitalism as the single greatest engine of wealth and prosperity we have ever had. Happy Labor Day!