The Conversation: Where Academics and Researchers Publish
Name: The Conversation
Type: Academic News Outlet
Best Website For: Academic News and Commentary
Reason it's on The Best Sites:
The Conversation is a not-for-profit media outlet that publishes news analysis and commentary written by academics and researchers affiliated with universities, edited by professional journalists. The site is featured in MakeUseOf's "Best Websites on the Internet" article.
Americans have blamed many culprits, from mental illness to inadequate security, for the tragic mass shootings that are occurring with increasing frequency in schools, offices and theaters across the U.S.
Yet in our nation’s ongoing conversation about the root causes of gun violence, the makers of guns are hardly ever mentioned. As a public health researcher, I find this odd, because evidence shows that the culture around guns contributes significantly to gun violence. And firearm manufacturers have played a major role in influencing American gun culture.
To help spur this much-needed discussion, I’d like to share some critical facts about the firearm industry that I’ve learned from my recent research on gun violence prevention.
Surging handgun sales
The U.S. is saturated with guns – and has become a lot more so over the past decade. In 2016 alone, U.S. gun manufacturers produced 10.6 million firearms for entry into the market, up from 3.6 million in 2006. Pistols and rifles made up about 85 percent of the total.
In addition, only a small number of gunmakers dominate the market. The top five pistol manufacturers alone controlled half of all production in 2016: Sturm, Ruger & Co., Sig Sauer, Glock, Kimber Manufacturing and SCCY Industries. Similarly, the biggest rifle manufacturers – Remington Arms, Sturm Ruger, Anderson Manufacturing, Smith & Wesson and Savage Arms – controlled 62.3 percent of that market.
But that only tells part of the story. A look at the caliber of pistols manufactured over the past decade reveals a significant change in demand that has reshaped the industry.
The number of large-caliber pistols manufactured – those able to fire rounds of 9 mm or greater – increased six-fold from 2005 to 2016, rising from just over half a million to more than 3 million. The number of .380 caliber pistols – small pistols designed specifically for concealed carry – jumped to over 1.1 million from just over 100,000 during the same period.
This indicates a growing demand for more lethal guns designed specifically for self-defense and concealed carry.
Production of rifles has also increased, rising from 1.4 million in 2005 to 4.2 million in 2016. This is driven primarily by a higher demand for semi-automatic weapons, including assault rifles.
Explaining the stats
So what can explain the jump in sales of high-caliber handguns and semi-automatic rifles?
One answer is a shift in industry marketing toward self-defense. In 2005, for example, Smith & Wesson announced a major new marketing campaign focused on “safety, security, protection and sport.” The number of guns the company sold soared after the switch, climbing 30 percent in 2005 and 50 percent in 2006, led by strong growth in pistol sales. By comparison, the number of firearms sold in 2004 rose 11 percent over the previous year.
There’s strong survey evidence that gun owners have become less likely to cite hunting or sport as a reason for their ownership, instead pointing to personal security. The percentage of gun owners who told Gallup the reason they possessed a firearm was for hunting fell to 36 percent in 2013 from almost 60 percent in 2000. The share that cited “sport” as their reason fell even more.
‘Stand-your-ground’ laws flourish
Another possible explanation for the uptick in handguns could be the widespread adoption of state “stand-your-ground laws” in recent years. These laws explicitly allow people to use guns as a first resort for self-defense in the face of a threat.
Utah enacted the first stand-your-ground law in 1994. The second adoption did not take place until 2005, in Florida. A year later, stand-your-ground laws took off, with 11 states enacting one in 2006 alone. Another dozen states have passed such laws since then, bringing the total to half of all states.
These laws were the result of a concerted National Rifle Association lobbying campaign. For example, Florida’s law, which figured in George Zimmerman’s 2013 acquittal in the killing of Trayvon Martin, was crafted by former NRA President Marion Hammer.
The American Legislative Exchange Council, an association of state legislators dedicated to limited government, of which the NRA was a member, has helped push the laws around the country using a model drafted by another NRA official.
It’s not clear whether the campaign to promote stand-your-ground laws fueled the surge in handgun production. But it’s possible that it’s part of a larger effort to normalize firearms for self-defense.
This overall picture suggests that a change in firearm industry marketing fueled an increased demand for more lethal weapons. This, in turn, appears to have fostered a change in gun culture, which has shifted away from an appreciation of the use of guns for hunting, sport and recreation and toward a view that guns are a necessity to protect oneself from criminals.
How and whether this change in gun culture is influencing rates of firearms violence is a question I’m currently researching.
Michael Siegel receives funding from the Robert Wood Johnson Foundation and the National Institute of Justice to support research on firearm violence prevention.
Can President Donald Trump’s recent repudiation of domestic violence actually help prevent it?
Rob Porter, a high-level aide to Trump, was accused of serial domestic violence by his two ex-wives. The controversy dominated news coverage earlier this month. Trump publicly denounced domestic violence one week after Porter resigned, saying “I’m totally opposed to domestic violence of any kind.”
Those who called for such a statement by the president may be motivated by the belief that when powerful men convincingly call out abusers, society’s acceptance of domestic violence can be diminished.
There is not a lot of research to support this commonsense idea. But there is a growing body of work with men and boys that research shows can be effective in diminishing domestic violence.
This is a significant evolution in the field. Since the establishment of the first domestic violence shelters in the 1970s, domestic violence policies and services have rightly focused most attention on survivors and meeting their needs for safety and healing.
Increasingly, though, domestic violence organizations are adopting approaches that involve men and boys in domestic violence prevention. The idea is that by addressing the root causes, these programs can stop domestic violence from occurring in the first place.
I am a professor who studies how to intervene in and prevent men’s violence against women. Our research team has studied the effectiveness of programs that involve boys and men in domestic violence prevention efforts.
Much of the work being done with men and boys is not well-known, despite the fact that this is a thriving movement. Here is a snapshot of some of those efforts, both global and local, and what we know about their effectiveness.
1. Sports and prevention
Some efforts to involve men focus on sports because, well, men like sports. They venerate sports figures and identify with teams. Reports of domestic violence perpetrated by athletes have grown more common. This has led to visible efforts by sports organizations to respond with sanctions for perpetrators and to join prevention efforts, like the NFL’s support for the No More campaign. The seriousness and effectiveness of these efforts are yet to be determined.
But sports have also been the site of innovative and effective interventions for youth. Coaching Boys Into Men provides high school athletic coaches with the resources they need to help prevent relationship abuse, harassment and sexual assault by their players.
The program’s curriculum includes coach-to-athlete trainings that model respect and promote healthy relationships. It also includes a card series to help coaches incorporate themes of teamwork, integrity, fair play and respect into their daily practice and routine.
2. Transition to fatherhood
A prime risk factor for future abuse is the exposure of children to violence. Preventing abuse in new families would reduce children’s exposure to violence and thus the potential for future violence.
One promising strategy is to involve men in prevention efforts as they move into fatherhood. Research shows that a caring and supportive relationship with their fathers reduces the risk that the next generation of parents – both fathers and mothers – will use harsh physical discipline. Positive fathering predicts warmer and more positive parenting by adult sons when they become fathers.
Strategies that could be used in this area include engaging men at prenatal visits such as ultrasound appointments, which the vast majority of men in the U.S. attend, and at in-home visits during pregnancy and after the birth of a child.
Global campaigns such as MenCare seek to improve caregiving by fathers and address partner violence. MenCare’s programs ask men to become more equitable partners and provide them with opportunities to learn and practice parenting skills. They promote policy change, like paid parental leave. And they conduct media campaigns to inspire men and their communities to support men’s caregiving.
3. Preventing dating abuse
Studies document high levels of dating violence beginning in middle school. When it comes to prevention, one could argue that programs must intervene early or the effort will be wasted, because stopping abuse before it becomes an entrenched pattern is more likely to be effective in preventing relationship violence.
School-based abuse prevention programs like Safe Dates and Fourth R have shown some success in changing attitudes and behavior. The Safe Dates program, for example, uses nine 50-minute sessions, a student-performed 45-minute play and a poster contest to explore topics such as how to cultivate caring relationships, overcome gender stereotypes and help friends.
4. Bystander programs
“Bystander” prevention programs, increasingly commonplace on college campuses, build skills to recognize, respond to, and disrupt behavior that might lead to sexual assault or intimate partner violence.
Some examples of bystander behavior include telling a man who is saying disrespectful things about women to stop or helping a woman who is being harassed to get away from a situation in which she could be harmed. Teens involved in bystander interventions are more likely to intervene to prevent victimization of their peers.
5. Motivating men to be allies
Our research group surveyed men around the world who have been involved in efforts to prevent violence against women. The survey revealed that many men who get involved have had a personal experience with violence – as childhood witnesses to abuse or as survivors of child abuse themselves. Still others find their way to prevention efforts through a commitment to social justice.
Importantly, we found that many men are receptive to violence prevention efforts when they tune in to survivors’ experiences.
Given that men are moved by learning about survivors’ experiences, the visibility and emotional power of the #MeToo movement and the remarkable, vivid accounts of White House aide Rob Porter’s ex-wives could lead men to get involved in ending violence against women.
Whatever the pathway, men’s involvement in preventing domestic violence in their families, workplaces and communities can be part of the global effort to promote safety and equality for women, and to end victimization in all its forms.
Richard Tolman has received funding in the past from the Rauner Family Foundation, the Templeton Foundation, the National Institutes of Health, the Robert Wood Johnson Foundation and the Mott Foundation.
During his unsuccessful campaign for the Republican presidential nomination, Marco Rubio made the dubious (and grammatically unsound) assertion that “we need more welders and less philosophers.”
Bill Miller clearly disagrees with the Florida senator.
Miller, a prominent investor who spent three years studying philosophy at Johns Hopkins University as a graduate student, recently gave that school US$75 million to support its philosophy department. The famed stock picker’s donation, the largest ever by far to any philosophy program and among the biggest for a specific humanities field, stands out for a good reason. Generosity on that scale to support science, technology, engineering and math, the so-called STEM disciplines, is far more common.
As the director of the Baker-Nord Center for the Humanities at Case Western Reserve University, a STEM-steeped institution, I spend part of my job countering the pressure from politicians, high school counselors and parents that drives students into those fields and away from subjects like history, literature or philosophy, based on the belief that studying the humanities cannot lead to professional success. Only 25 first-year students in a class of 1,200 at my school said they intended to declare a humanities subject as their primary major.
I believe gifts like Miller’s will help scholars like me make our case that more students should embrace the humanities.
Humanities vs. STEM giving
Miller just doesn’t buy the conventional wisdom about the impracticality of studying the humanities.
“I attribute much of my business success to the analytical training and habits of mind that were developed when I was a graduate student at Johns Hopkins,” said the investor, who took a single philosophy class while majoring in economics as an undergraduate at Washington and Lee University.
His gift is part of a broader trend in academic giving, which has bounced back from the sharp decline brought about by the Great Recession.
Higher-education donations totaled $43.6 billion in 2017, a 6.3 percent increase over the previous year. Johns Hopkins, which just concluded a $5 billion capital campaign, racked up $637 million of that total.
While relatively little of this money for colleges and universities targeted the humanities – the liberal arts fields devoted to the study of human culture – giving to the humanities has grown over the past decade. These donations rose 26 percent between 2005 and 2015, less than half the 57 percent climb for higher education overall. Total donations for all colleges and universities increased from $25.6 billion in 2005 to $40.3 billion in 2015.
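As a quick check of those figures: the rise from $25.6 billion to $40.3 billion is an increase of $14.7 billion, and $14.7 billion ÷ $25.6 billion ≈ 57 percent.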
Though smaller than the 14 biggest higher-ed gifts of 2017, Miller’s donation is the largest to support the humanities at a university since 2006.
Studying the humanities vs. STEM
Hopkins will use Miller’s gift to increase its philosophy faculty from 13 to 22 and expand programs for graduate and undergraduate students at a time when many universities are downsizing humanities departments.
His bet on philosophy comes as students are losing interest in the humanities.
The share of bachelor’s degrees awarded in humanities disciplines peaked at 15 percent in 2005, according to the American Academy of Arts and Sciences. It fell to 12 percent by 2015, the lowest since 1987, and the total number of students getting these degrees also declined.
Most commentators see these trends as a sign that students are responding to market needs in the wake of the Great Recession.
STEM’s questionable crisis
At the same time, there are scholars who question the arguments driving STEM philanthropy.
Corporate and political leaders have sounded alarms over a supposed shortage of young workers equipped with STEM degrees five times since the end of World War II, demographer Michael Teitelbaum observes.
For example, the National Research Council claimed in 2005 that inadequate numbers of scientists and engineers constituted a “creeping crisis” threatening U.S. economic prosperity and security. Teitelbaum finds these claims to be unsubstantiated. He notes that STEM enrollment surges after every such alarm, precipitating a surplus of graduates.
Oh, the humanities
Given the recurring glut of scientists and engineers, I believe that Miller is responding to the workforce’s need for the soft skills of the humanities, such as critical thinking, communication and cultural awareness. After all, as Apple’s Steve Jobs once said,
“It’s in Apple’s DNA that technology alone is not enough – that it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our hearts sing.”
For all the fretting by the likes of Marco Rubio, employers covet these skills. The American Academy of Arts and Sciences finds that while humanities graduates do earn less than their STEM counterparts, they are in fact employed and making a living. The median salary for a humanities graduate in 2015 was $52,000. That was less than the $82,000 for an engineering graduate, but equal to that of graduates in the life sciences. And earlier studies show that the income difference between the humanities and other disciplines narrows over time.
What’s more, at 4.3 percent, the unemployment rate for humanities graduates in 2015 was only slightly higher than the 3 percent rate for graduates in all fields. And humanities graduates are every bit as satisfied with their jobs.
Perhaps advocates for the humanities don’t need to be defensive about the declining interest in pursuing degrees in these fields, which can lead to a meaningful as well as lucrative career.
For, as investor Bill Miller has said with his big gift, the humanities are just as important for the fabric of our society and our economy as science, technology, engineering and math.
Peter E. Knox works for Case Western Reserve University. He has previously received funding from the National Endowment for the Humanities and the Loeb Classical Library Foundation. He is a representative to the National Humanities Alliance.
White women in the U.S. are slightly more likely to develop breast cancer than black women – but less likely to die of it. The breast cancer mortality rate fell 35 percent from 1990 to 2012. The breakdown by race over this period, however, tells a different story: Death rates for black women decreased by 23 percent, while death rates for white women declined by 42 percent.
A big, but not the only, reason for this is that white women more frequently get two subtypes of breast cancer, called ER-positive and HER2-positive, for which we now have very effective targeted treatments.
Black women, however, are two to three times more likely than white women to get an aggressive type of breast cancer called triple negative breast cancer, for which there are still no approved targeted treatments. Researchers do not yet know all the reasons why this is so, but are looking for answers.
Research has vastly improved breast cancer treatments and survival rates over the years, with a five-year survival rate for localized breast cancer at 98.9 percent, but the gap in mortality rates between black and white women has stubbornly persisted.
We study breast cancer, with a special emphasis on health disparities. Here are some of the trends we see.
First, some statistics that lay out the extent of the problem. About 1 in 8 American non-Hispanic white women and about 1 in 9 African-American women will develop breast cancer in their lifetimes.
While breast cancer is slightly less prevalent in African-American women, it is much more likely to be diagnosed at a later stage. About 37 percent of white patients and about 47 percent of black patients will have cancers that have spread from the breast to nearby lymph nodes at diagnosis. When the disease has spread, it typically presents a greater treatment challenge. In fact, the five-year survival rate for breast cancer patients with distant metastasis – disease that has traveled to another organ such as the liver or bone – is 26.9 percent, compared to 98.9 percent for those with localized disease.
In addition, the aggressive triple negative type of breast cancer accounts for 12-20 percent of tumors in white women, but about 20-40 percent in black women. Triple negative breast cancer is particularly hard to treat because it does not respond to targeted treatments that have proven to be effective in treating breast cancers that test positive for certain receptors on cancer cell surfaces.
Beyond triple negative cancer itself, there also seem to be racial differences in what we call the tumor microenvironment of the cancer cells. Tumor microenvironment is the immediate cellular environment of the cancer cells, including surrounding blood vessels, immune cells, signaling molecules and the tissue matrix that surrounds tumor cells (i.e., the extracellular matrix). Since the tumor microenvironment can affect behavior of the tumor cells and their response to treatments, these racial differences could impact tumor biology and disease progression. Studies have also uncovered racial differences in gene expression patterns of cancer cells, in which genes are over-expressed or under-expressed in the tumor cells of black versus white women.
One of the common abnormalities found in cancer cells proliferating within tumors is that they often gain or lose stretches of DNA, which could include multiple genes, or even whole chromosomes that carry hundreds of genes. As a result, cancer cells may harbor higher-than-normal or lower-than-normal copies of genes compared to healthy cells. Daughter cells that arise from such cancer cells form a “clone of cells” that could be genetically different from other such clones within the tumor.
When this gain or loss occurs at a fast rate, it results in a tumor with astounding clonal diversity. Such tumors are more likely to harbor clones that can spread very efficiently through the body or resist treatments very staunchly, resulting in a higher risk of death for the patient. Scientists have discovered that breast tumors in black women tend to be more clonally diverse, and therefore harder to treat, than those in white women. The discovery of these biological factors is fairly recent, and research is still ongoing.
Beyond tumor biology
Research has shown that having other diseases, such as diabetes, can be a risk factor not only for developing breast cancer but also for poorer outcomes.
Some statistics point to problems outside of the sphere of medicine, however.
In the U.S., about 23.1 percent of black women live in poverty, compared to 9.6 percent of white women. Studies have shown that a lack of resources makes a huge difference in survival rates, treatment responses, and progression of disease. Poor women are less likely to have good quality health insurance, to get as much information on early detection and screening, and to have access to the best health care and latest treatments.
Another factor, one that is both biological and environmental, is obesity. According to the National Cancer Institute, fat tissue actually makes the hormone estrogen. Exposure to high levels of estrogen over a lifetime increases the risk of breast cancer.
Further, in the U.S., obesity is strongly linked to poverty, according to the National Institutes of Health. In other words, since black women are more likely to be poor, they are more likely to be obese – which makes them more likely to develop breast cancer.
The higher incidence of poverty among African-Americans also limits their access to high-quality, timely care compared with white women.
The search for advances
In future years, we hope we will find specific mechanisms that explain the observed racial differences in breast cancer mortality. Eventually, we believe it will be possible to give each patient customized targeted treatments based on their genetic profile and other factors.
There are many factors that will need to be addressed to create racial equity in breast cancer outcomes. Bridging the gap will require a wide range of experts: clinicians, bioinformaticians, diagnosticians and epidemiologists from the science side, but also social scientists and public health experts. Only by joining together can we make sure that all breast cancer patients get the treatment that is best for them.
I am affiliated with Novazoi Theranostics, Inc.
Ritu Aneja receives funding from NIH.
Twenty years ago, images of staggering cattle and descriptions of brains resembling Swiss cheese became associated with one of the most popular television programs of the day when Texas Panhandle cattlemen sued “The Oprah Winfrey Show” for defamation under Texas’ “veggie libel law.” They claimed the program’s negative portrayal of their business caused a steep decline of beef prices.
On the surface, this conflict looked like a battle between an industry and the TV producers who portrayed it negatively. But at its heart was some complicated science that had the potential to scare the public and be sensationalized by the media.
Today’s practitioners of science communication grapple with the difficulty of transmitting science information via the media to a lay audience. This 1998 trial serves as a rare public case study documenting the media’s imperfect attempts to clarify the science of mad cow disease in the midst of a celebrity spectacle.
Ultimately Oprah won the legal case. But how did the public’s understanding of the science fare?
Facts of the case
A year and a half earlier, rancher-turned-animal-rights activist Howard Lyman appeared on Winfrey’s program. He claimed the American beef industry was giving cattle feed that contained remains of processed cattle. This practice, no longer legal in the U.S., had been banned by the British government in 1996 due to the belief it had led to the 1980s outbreak in Great Britain of bovine spongiform encephalopathy.
BSE is a fatal nervous system disease in cattle; a human form of the disease, variant Creutzfeldt-Jakob disease, was subsequently diagnosed in England, causing the deaths of 178 people in the U.K. through 2017. Medical researchers believed this form of CJD was caused by eating the meat of cattle infected with BSE.
Upon hearing these revelations Winfrey proclaimed on-air, “It has just stopped me cold from eating another burger!” The “Oprah effect” kicked into gear and the term “mad cow disease” rose in the public consciousness.
The resulting lawsuit initially focused on the science of BSE and the extent of the danger to beef consumers. However, the judge’s ruling ultimately hinged on legal questions of freedom of speech, rather than whether “The Oprah Winfrey Show” broadcast scientifically valid findings.
Science from lawyers, via media, to public
The verdict itself doesn’t provide a clear reflection of how effectively the science of BSE had been communicated during the trial to the jury. But the case was also tried, as they say, in the court of public opinion.
U.S. District Judge Mary Lou Robinson imposed a gag order on the attorneys, prohibiting them from talking about the case outside of court. She did, however, provide permanent seats in the Amarillo courtroom for local media. One of us (Larry Lemmons) was the lead reporter for the local CBS affiliate during the trial.
Celebrity sightings around the courthouse were common. PETA protesters traded insults with local restaurant employees grilling burgers for the crowd. Presumably because I was one of the primary local media reporters, my reports were followed by attorneys from both sides. When I personally met Oprah Winfrey she remarked, “So you’re Larry Lemmons.” I never figured out precisely what that meant.
My media colleagues and I struggled to understand and communicate the specifics of BSE. We listened to the attorneys present the science to the jury, and then communicated those details to the public, who tended to be more interested in the spectacle.
In Amarillo in 1998, although access to the internet was growing more common, we reporters tended to regard it with suspicion. We gathered news the old-fashioned way, via in-person or phone interviews. For BSE research, I went to the library and a local college where a science professor provided me with some background. Part of my job as a reporter was to get the complicated scientific facts straight, and I couldn’t ask any of the trial participants for clarification.
Looking back over two decades, I wondered if my challenges communicating the science were shared by colleagues and other important players in the trial. Now, as a doctoral student of media and communication (working with Dr. Landrum and others at Texas Tech), I contacted some of them to discuss how attorneys related the science of BSE to the jury and how the media subsequently reported on information presented in the courtroom.
Thinking back to the trial
As expected, there are conflicting perspectives on how effectively the science was communicated.
Howard Lyman’s defense attorney, Barry Peterson, said that “to prevail I had to inform the jury that there was reasonable scientific evidence to support Howard’s opinions.” But he also had to consider the political environment: “We were more concerned about our ability to successfully defend Howard and Harpo Productions because we are in beef country.”
Despite representing the losing side, one of the plaintiffs’ attorneys, Vince Nowak, said the trial was a success for the cattle industry because it convinced the public that BSE was not a serious threat to American livestock. Though he presented extensively on the science during the trial, he acknowledged that “science played a very small factor” in the subsequent ruling by the judge.
Nevertheless, some reporters said the local cattle industry, who were not affected by the judge’s gag order, should have been more eager to clarify to the media the relative risks of Texas cattle becoming infected with BSE. At the time, Kay Ledbetter worked for the Amarillo Globe-News. She said obtaining scientific information was frustrating and limited to what was discussed in the courtroom:
“There was nobody reliable to discuss what the disease – bovine spongiform encephalopathy – really was … We were left with the catch phrase Mad Cow Disease, and our imagination.”
On the other hand, Stacy Yates, who covered the trial for local news radio station KGNC, thought both the defense and plaintiffs did a reasonable job communicating the science and that “if you were a person who wanted to understand the science, the coverage was there.”
Ultimately the media covering this trial were left to muddle through as best we could – and the public relied on our efforts.
The name matters
My own notes from the trial are rich with legal and scientific explanations that accompanied courtroom observations. Notes for one report included this passage:
“[Winfrey’s] attorney Charles Babcock tried to establish links between what’s called ‘new variant CJD’ in humans and mad cow disease in cattle. [Primary plaintiff Paul] Engler insisted upon precise scientific answers while Babcock tried to put the issue in layman’s terms.”
However, recent research on how to most effectively communicate science has found that sometimes putting a scientific issue into less accurate layman’s terms can add to confusion and heighten controversy.
Ledbetter is now an agriculture science communicator for Texas A&M AgriLife, a statewide agricultural research institution. She said that by using the term “mad cow disease,” the media misrepresented the issue:
“It’s not Mad Cow Disease, it’s bovine spongiform encephalopathy or BSE. And if agriculture would have taken the same stance on this issue as they did on Swine Flu, trying to educate on what it really was and asking the media to call it by its real name, H1N1, many people wouldn’t have had the same concerns.”
Ledbetter’s point of view is supported by science communication research. In one study, researchers investigating a subsequent mad cow outbreak in France determined that the framing of the issue influences public perception. When people were confronted with the term “mad cow,” they reacted more emotionally than they did to a scientific label, such as BSE. It’s an open question, though, how opinion would have changed with the use of a more deliberative description of the disease during the Oprah Winfrey lawsuit.
Today the CDC considers the risks to Americans from BSE to be “extremely low.” Since 1993 there have been a total of only 25 cases of BSE in North American cattle, the majority of those in Canada. In “A Comparative Study of Communication About Food Safety Before, During, and After the ‘Mad Cow’ Crisis,” food law scholar Matteo Ferrari concluded the public decides whom to trust regarding the message by how government, industry or advocates frame it.
In this case, the jury determined the media’s First Amendment protections outweighed the defamation concerns presented by the plaintiffs. Ironically, because of the media focus on the trial, the perspectives of the cattle industry were also highlighted.
The public got the message that there was little evidence that BSE threatened American livestock in a substantial way. Two decades of hindsight suggest that lawyers and media – in perhaps a piecemeal, stumbling way – did transmit relatively accurate science information. The cattlemen may have lost the case, but U.S. media consumers were left with the understanding that U.S. beef was safe.
Media professionals still struggle with how best to explain and condense complex science and public health issues in ways that won’t inappropriately trigger defensiveness, denial or fear. Research in the science of science communication has made great strides in exploring these issues, but there is still much work to be done.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
More than 42,000 people died in 2016 from an opioid overdose. Forty percent of these deaths involved a prescription opioid. Overall, deaths from opioid overdoses have contributed to a decrease in American life expectancy for the second year in a row. The last time that happened was in 1962 and 1963.
In light of such statistics, it’s not surprising that a recent article explaining how opioids aren’t always necessary after surgery made a bit of a splash. Firoozeh Dumas, who wrote the article, underwent a laparoscopic hysterectomy in Germany. She was told that ibuprofen – the nonprescription medication found in Motrin and Advil – would be sufficient. What she would really need was rest. Dumas, who had moved there from California, worried that her pain would be undertreated. But it turned out that her worry was misplaced. She recovered well, with the help of very little ibuprofen and lots of tea and rest.
There are many interesting lessons in this story, but one main point that the reader is invited to take away is that part of America’s opioid problem has to do with patient expectations. Dumas attributed her focus on pills to her time in the U.S. Abroad, she ended up finding something valuable in her experience of being forced to slow down and heal.
So are those of us who want, expect, or even request opioid medications doing something wrong? Should we see each medical encounter for pain as an opportunity to be part of the solution to the opioid crisis?
As an academic who wrestles with the ethics of pain management both professionally and personally, I think stories like Dumas’ are important, but that inferring too much from them is dangerous. Yes, America probably needs a culture change regarding pain medicine, but we have to be careful how we frame that challenge.
The public health dilemma
Opioids are certainly powerful analgesics, but a consensus is beginning to emerge in the medical literature that they are not as good as initially thought. Most of us now know that the risks are very serious, including the development of opioid use disorder and even fatal overdose.
Given such serious risks, their benefits would need to be significant to warrant use. But the literature shows that this often isn’t the case. For many pains, a combination of non-opioid therapies — acetaminophen and ibuprofen, as well as nonpharmaceutical alternatives like physical therapy, exercise and cognitive behavioral therapy — works quite well, and carries less serious risks.
As a result, the Centers for Disease Control and Prevention recommends using opioids sparingly for severe, acute pain, and only under special circumstances for chronic pain.
If opioids aren’t a panacea for pain, then doctors need to be careful how they use them. Patient demands, however, can make that difficult. In a culture of patient as consumer, changing physician behavior may require changing patient expectations.
Don’t stigmatize patients
The above considerations seem to support the view that, as patients, perhaps each of us has an obligation not to request or demand opioid therapy, and to resist if offered.
However, I would urge caution. Stories aren’t data, and they don’t generalize. Firoozeh Dumas was fortunate to have experienced as little pain as she did after her surgery in Germany. It’s important to recognize, though, that her procedure was laparoscopic. As a result, she had only small incisions and was able to leave the hospital on the same day. This is certainly not to minimize her pain, but only to point out that many other patients will be in very different circumstances. Some surgeries are brutally painful, as are trauma and many other medical conditions, such as sickle cell anemia.
Patients are individuals, and they react uniquely to pain and medication. What works for one person may not work for others. Some people are unable to take ibuprofen or are limited in their ability to exercise.
In addition, following a full pain regimen that utilizes both pharmaceutical and nonpharmaceutical therapies is expensive, while generic opioids are quite cheap. As a result, insurance companies in the U.S. that readily pay for morphine may not pick up the tab for pricier medications, or for nonpharmaceutical therapies.
Narratives that emphasize not taking opioids are valuable for making clear that not all pain requires a pill. But they could also be risky: Pain patients and opioid therapies are already deeply stigmatized. Those who take opioids for pain are often treated with suspicion and taken to be drug-seeking.
Here’s what patients can do
So is there behavior that we, as patients, ought to adopt in order to help combat the opioid epidemic?
Like most ethical issues, the answer, I think, is less clear and more nuanced than most people might like it to be. I do believe that opioids should not be viewed as evil, and that those for whom they make life more manageable should not be shamed. But we surely can make some changes.
The medical community is getting better at understanding what kinds of surgical interventions are likely to require little or no opioid therapy. So, if you are undergoing surgery, talk to your doctor about whether she thinks you will really need opioid therapy, and if so, for how long.
Tell her that you don’t want a prescription for more pills than are likely to be necessary. There is emerging evidence that many surgeons overprescribe opioids, and that patients often need far fewer pills than are prescribed.
If you end up with unused medication anyway, dispose of it properly. Half-used or even untouched bottles of pills can be found and used by family members or stolen to be misused or sold on the street.
Finally, many of us do need to understand that zero pain is often both an inappropriate and unrealistic goal. So we need to wrestle with the fact that injury, trauma, surgery, and just aging often hurt, and while medicine may help us improve our quality of life, we shouldn’t expect or demand magic.
None of this means, however, that we should accept further marginalization of pain and opioid therapy patients. Some pain is devastating and life-limiting, and sometimes such pain responds well to opioids. We should not, in the name of public health, cultivate an attitude that makes it more difficult for those suffering to access what relief they have been able to find.
Editor’s note: This piece is part of our series on ethical questions arising from everyday life. We would welcome your suggestions. Please email us at firstname.lastname@example.org.
Travis N. Rieder does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
For nearly a decade, I believed I was helping improve victims’ lives by prosecuting people who committed domestic violence in Seattle, Washington.
I aimed to advance the goals of the criminal justice system: Stop the violence, hold the defendant accountable and enhance the survivor’s safety.
Then, in 1996, I met a woman whose case would eventually shatter my view of what prosecution means for domestic violence victims. I’ll call her Marina, a pseudonym to protect her identity.
Over the past several years, I have spent hours reconnecting with Marina and a few other victims from the cases I prosecuted. I wanted to learn from victims how to reform the criminal justice system to better serve them.
Here’s what I learned.
The case that changed everything
Like many victims I met, Marina suffered gruesome violence at the hands of her boyfriend. Marina testified at trial that her boyfriend tried to rip out her tongue and that she believed his threats to kill her and her 10-year-old daughter. The jury agreed and convicted him.
At sentencing, I asked for prison time. Marina told the judge that she still loved her boyfriend, he was a good dad, and the violence was her fault. She begged that he not be incarcerated. The judge sentenced her boyfriend to five years and prohibited him from any contact with Marina or her daughter. To me, the prosecution was a complete success. Marina viewed it differently. Furious, she screamed at me, saying I had ruined her life.
Believing I was doing good, I brushed off Marina’s accusation. Later, it haunted me. Had I ruined her life, or had I helped her? I did not know because I did not see Marina again after sentencing. Twenty years later, I sat down with her and later with her daughter.
Today, Marina is afraid and alone. She is still dealing with her trauma and still hiding from her boyfriend, who has since been released. She shuns intimate relationships for fear of more violence. Her daughter, now 31, lives in another state with two small children and has no contact with Marina. The daughter has been in many abusive relationships, became addicted to meth and has tried to kill herself. Marina and her daughter are both alive, but the prosecution hardly looks like a success to me now.
Dumped by the system
Talking to Marina led me to contact a handful of other victims, and several themes emerged from those conversations.
Yes, I had done my job: Often the violence stopped and the victims survived. However, for all of these victims, trauma – not love – endured. Worse yet, every victim felt abandoned by the prosecution and left to manage that trauma on their own. For example, one victim told me she felt “dumped” after the trial.
Across the country, domestic violence survivors have access to a wide variety of services. The majority of the victims I spoke with accessed services after the prosecution and felt those services helped them. Marina, for example, found temporary housing for her and her daughter through a domestic violence organization. Other victims got counseling or mental health medications.
Unfortunately, these services are provided by a network of underfunded nonprofit organizations that the victim needs help to navigate. Meanwhile, courts only have control over defendants – they cannot order victims into treatment. That means current criminal justice reform efforts focus primarily on improving services for defendants.
The victims I spoke with were only able to access a portion of the services they needed and only through their own initiative. Often crucial needs, such as civil legal assistance, went unmet. Helping victims recover after the trial was never part of my job as a prosecutor. After each trial, I immediately moved on to the next case, the next defendant and the next victim.
Marina, and all the victims I talked to, told me that what they wanted but did not get was an ongoing relationship with the prosecution to help them heal.
Another victim I interviewed, someone I’ll call Steve to protect his identity, showed me how things might change.
Steve was one of my more challenging cases. His boyfriend smashed his head with a bottle. Several months later, the boyfriend slit Steve’s throat. Even though Steve nearly died, he tried everything to avoid testifying, including committing himself to the psych ward during the trial – in response to his boyfriend’s manipulative mix of threats and expressions of love.
When his evasion efforts failed, Steve testified that he had cut his own throat. Fortunately, the jury saw through Steve’s ruse and convicted his boyfriend. I wasn’t sure Steve – a disabled alcoholic with AIDS, left homeless after his boyfriend’s violence caused his eviction – would have survived the years between his court appearance and when I tracked him down for an interview. Instead, I learned Steve’s AIDS actually saved him.
When he sought treatment for his condition after the trial, the AIDS clinic assigned Steve a doctor and a social worker who identified his needs. Over the next 15 years, they helped him with meals, medication, counseling, job training, substance abuse treatment, housing assistance and finding community. This case management approach is considered a best-practice response to the AIDS epidemic and is largely funded by the government. These are precisely the resources every domestic violence survivor deserves.
Prosecution and victim services should consider a case management model for victims after trial, in which each victim would be assigned to a team. The team would include a prosecutor, a victim advocate and a social worker to assess needs and coordinate services. This could fulfill victims’ desire for continued contact with the prosecution and help them recover from trauma. Family justice centers, which locate a variety of service providers in a single location, are a step in this direction. What they lack is a specific team of individuals responsible for each victim’s care during and after the criminal case concludes.
This model will also have another benefit. Prosecutors must consider what happens after the gavel drops, not just for victims of domestic violence, but for their own sake.
Through this process, I discovered that I too carried the victims’ trauma. Because I had no contact with victims post-trial, they remained forever ingrained in my memory as they were in the courtroom. Reconnecting with the victims and learning about their lives after the prosecutions allowed me to relinquish some of their trauma. It helped me heal as well.
Andrew King-Ries was a deputy prosecuting attorney with the King County Prosecutor's Office in Seattle, Washington. He is a member of the ABA Commission on Domestic and Sexual Violence.
After handing them their suicide capsules, Norwegian Royal Army Colonel Leif Tronstad informed his soldiers, “I cannot tell you why this mission is so important, but if you succeed, it will live in Norway’s memory for a hundred years.”
These commandos did know, however, that an earlier attempt at the same mission by British soldiers had been a complete failure. Two gliders transporting the men had both crashed while en route to their target. The survivors were quickly captured by German soldiers, tortured and executed. If similarly captured, these Norwegians could expect the same fate as their British counterparts, hence the suicide pills.
Feb. 28 marks the 75th anniversary of Operation Gunnerside, and though it hasn’t yet been 100 years, the memory of this successful Norwegian mission remains strong both within Norway and beyond. Memorialized in movies, books and TV mini-series, the winter sabotage of the Vemork chemical plant in Telemark County of Nazi-occupied Norway was one of the most dramatic and important military missions of World War II. It put the German nuclear scientists months behind and allowed the United States to overtake the Germans in the quest to produce the first atomic bomb.
While people tend to associate the United States’ atomic bomb efforts with Japan and the war in the Pacific, the Manhattan Project – the American program to produce an atomic bomb – was actually undertaken in reaction to Allied suspicions that the Germans were actively pursuing such a weapon. Yet the fighting in Europe ended before either side had a working atomic bomb. In fact, a rehearsal for Trinity – America’s first atomic bomb test detonation – was conducted on May 7, 1945, the very day that Germany surrendered.
So the U.S. atomic bomb arrived weeks too late for use against Germany. Nevertheless, had the Germans developed their own bomb just a few months earlier, the outcome of the war in Europe might have been completely different. The months of setback caused by the Norwegians’ sabotage of the Vemork chemical plant may very well have prevented a German victory.
Nazi bomb effort relied on heavy water
What Colonel Tronstad, himself a prewar chemistry professor, was able to tell his men was that the Vemork chemical plant made “heavy water,” an important ingredient for the Germans’ weapons research. Beyond that, the Norwegian troops knew nothing of atomic bombs or how the heavy water was used. Even today, when many people have at least a rudimentary understanding of atomic bombs and know that the source of their vast energy is the splitting of atoms, few have any idea what heavy water is or its role in splitting those atoms. Still fewer know why the German nuclear scientists needed it, while the Americans didn’t.
“Heavy water” is just that: water with a molecular weight of 20 rather than the normal 18 atomic mass units, or amu. It’s heavier than normal because each of the two hydrogen atoms in heavy H2O weighs two rather than one amu. (The one oxygen atom in H2O weighs 16 amu.) While the nucleus of a normal hydrogen atom has a single subatomic particle called a proton, the nuclei of the hydrogen atoms in heavy water have both a proton and a neutron – another type of subatomic particle that weighs the same as a proton. Water molecules with heavy hydrogen atoms are extremely rare in nature (less than one in a billion natural water molecules are heavy), so the Germans had to artificially produce all the heavy water that they needed.
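To see where those figures come from, add up the rounded atomic masses given above: normal water is (2 × 1 amu of hydrogen) + 16 amu of oxygen = 18 amu, while heavy water is (2 × 2 amu of heavy hydrogen) + 16 amu of oxygen = 20 amu.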
In terms of their chemistries, heavy water and normal water behave very similarly, and you wouldn’t detect any differences in your own cooking, drinking or bathing if heavy water were to suddenly start coming out of your tap. But you would notice that ice cubes made from heavy water sink rather than float when you put them in a glass of normal drinking water, because of their increased density.
Those differences are subtle, but there is something heavy water does that normal water can’t. When fast neutrons released by the splitting of atoms (that is, nuclear fission) pass through heavy water, interactions with the heavy water molecules cause those neutrons to slow down, or moderate. This is important because slowly moving neutrons are more efficient at splitting uranium atoms than fast moving neutrons. Since neutrons traveling through heavy water split atoms more efficiently, less uranium should be needed to achieve a critical mass; that’s the minimum amount of uranium required to start a spontaneous chain reaction of atoms splitting in rapid succession. It is this chain reaction, within the critical mass, that releases the explosive energy of the bomb. That’s why the Germans needed the heavy water; their strategy for producing an atomic explosion depended upon it.
The American scientists, in contrast, had chosen a different approach to achieve a critical mass. As I explain in my book, “Strange Glow: The Story of Radiation,” the U.S. atomic bomb effort used enriched uranium – uranium that has an increased concentration of the easily split uranium-235 – while the Germans used unenriched uranium. And the Americans chose to slow the neutrons emitted from their enriched uranium with more readily available graphite, rather than heavy water. Each approach had its technological trade-offs, but the U.S. approach did not rely on having to synthesize the extremely scarce heavy water. Its rarity made heavy water the Achilles’ heel of the German nuclear bomb program.
Stealthy approach by the Norwegians
Rather than repeating the British strategy – dozens of men in gliders, carrying heavy weapons and equipment (including bicycles!) to traverse the snow-covered roads, and a direct assault at the plant’s front gates – the Norwegians would parachute a small group of expert skiers into the wilderness surrounding the plant. The lightly armed skiers would then quickly ski their way to the plant and use stealth rather than force to gain entry to the heavy water production room and destroy it with explosives.
Six Norwegian soldiers were dropped in to meet up with four others already on location. (The four had parachuted in weeks earlier to set up a lighted runway on a lake for the British gliders that never arrived.) On the ground, they were joined by a Norwegian spy. The 11-man group was initially slowed by severe weather conditions, but once the weather finally cleared, the men made rapid progress toward their target across the snow-covered countryside.
The Vemork plant clung to a steep hillside. Upon arriving at the ravine that served as a kind of protective moat, the soldiers could see that attempting to cross the heavily guarded bridge would be futile. So under the cover of darkness they descended to the bottom of the ravine, crossed the frozen stream, and climbed up the steep cliffs to the plant, completely bypassing the bridge. The Germans had thought the ravine impassable, so they hadn’t guarded against such an approach.
The Norwegians were then able to sneak past sentries and find their way to the heavy water production room, relying on maps of the plant provided by Norwegian resistance workers. Upon entering the heavy water room, they quickly set their timed explosives and left. They escaped the scene during the chaotic aftermath of the explosion. No lives were lost, and not a single shot was fired by either side.
Outside the plant, the men backtracked through the ravine and then split into small groups that independently skied eastward toward the safety of neutral Sweden. Eventually, each made his way back to his Norwegian unit stationed in Britain.
The Germans were later able to rebuild their plant and resume making heavy water. Subsequent Allied bomber raids on the plant were not effective in stopping production due to the plant’s heavy walls. But the damage had already been done. The German atomic bomb effort had been slowed to the point that it would never be finished in time to influence the outcome of the war.
Today, we don’t hear much about heavy water. Modern nuclear bomb technology has taken other routes. But it was once one of the most rare and dangerous substances in the world, and brave soldiers – both British and Norwegian – fought courageously to stop its production.
Timothy J. Jorgensen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
For more than a decade, I documented one man’s deportation, the impact on his family and his eventual return to the U.S.
I did this as part of my work studying the migration of indigenous Mayan refugees from Guatemala to Mexico and the U.S. My telling of the story of this man, whom I’ll call Alex to protect his identity, is forthcoming in the journal Representations. I believe it can help shed light on the human consequences of deportations and family separations – and the enormous risks deportees are willing to take, irrespective of walls, fences and the danger of reuniting with their families.
Here is Alex’s story.
An immigration raid
Alex was born in a refugee settlement in Chiapas, Mexico. His family is one of more than 200,000 Guatemalans who fled a protracted war, supported by the U.S. and its allies, that largely targeted the indigenous people of Guatemala. In Mexico, barriers to legal status barred refugees from formal employment. To improve his family’s situation, Alex entered the U.S. in 2000 with the help of a human smuggler, often referred to as a “coyote.” He worked at a meatpacking plant and sent money to support his parents in Mexico.
Two years later, Alex fell in love with the woman who would become his wife, whom I’ll call Grace, a Guatemalan national. Grace arrived in the U.S. in 1999, also with the help of a coyote, to join her parents, who enrolled her in school. Grace too was smitten, and in 2002 she decided to forgo her high school senior year to be with Alex. One year later, Grace gave birth to their first child.
In December 2006, Alex was deported without a hearing – a violation of international human rights law – in one of the largest immigration raids in U.S. history. He was separated from his wife and 3-year-old child. Family separation due to deportation is common. It is also a violation of the American Convention on Human Rights. Article 17 of the Convention, signed by the U.S. in 1969, states: “The family is the natural and fundamental group unit of society and is entitled to protection by society and the state.” Deportation thus violates one of the fundamental rights the Convention protects – the rights of the family.
Back in Mexico, Alex faced a lack of job opportunities, compounded by racial discrimination against indigenous peoples. Meanwhile, Grace faced economic instability in the U.S. without Alex’s income. Even so, she advised Alex to stay in Mexico, fearing more raids and the increasing criminalization of unauthorized migration. Enforcement priorities initiated under George W. Bush treated unlawful entry and re-entry as crimes, ushering in a new era in the criminalization of migration that created new classes of “felonies” applying only to noncitizens.
Six months later, Alex’s wife and their son joined him in Chiapas, Mexico – becoming what are often referred to as “de facto” deportees. Studies have identified how deportees face stigma and discrimination upon return. National origins, language abilities and presumptions of why one was deported can become significant factors in the reintegration to home countries or the countries of spouses or parents. For example, Grace is a Guatemalan national and could not seek legal employment in Mexico.
Despite their reunification, Alex’s family continued to face significant hardships in Mexico. Alex’s friend, a coyote, informed him that he could make a livable wage working as a coyote and stay close to his family. Alex decided to explore this alternative.
Alex’s participation in a human smuggling network came with enormous risks. One day, for example, armed bandits surrounded a group of migrants he was leading across the U.S.-Mexico border and confiscated their belongings. Alex ran into the desert bush and escaped to the U.S. side of the border.
After a night in the desert without food or water, Alex returned to Mexico. He later learned that the bandits had taken the migrants hostage, but that they were released after relatives in the U.S. paid an undisclosed ransom. Alex returned home and informed his wife and parents of the incident. All insisted that he no longer continue to work as a coyote.
A large body of research documents how the militarization of the U.S.-Mexico border has led to a militarization and reorganization of drug trafficking organizations. Drug cartels can interfere with smuggling networks, making migrants even more vulnerable to assault or being forced to transport drugs. Scholar Laura Ortiz argues that the increased participation of imposter coyotes, who recruit migrants only to extort them, has helped reinforce dominant perceptions of smuggling as intertwined with drug trafficking. Despite this dominant perception, scholar Simón Pedro Izcara Palacios argues that human smuggling and drug trafficking are operated by different groups. Drug cartels are not directly involved with human smuggling, but instead extort fees from human smugglers. Indeed, Alex identified how cartels pressured smugglers to pay a user’s fee for crossing the Sonora-Arizona border. Failure to pay can result in violence.
The risk of violence in coyote work, along with an absence of viable job opportunities in Chiapas, prompted Alex and his wife, now with four children, to make arrangements to join other family in the U.S. in 2015. Although Grace’s father is a U.S. permanent resident, he could not sponsor them through what politicians refer to as “chain migration” because he failed to meet the annual income requirement – no less than 125 percent of the federal poverty level. Without legal options, Alex paid US$3,500 to a fellow coyote to take his wife and children over the border. This coyote successfully led them to a location on the Sonora-Arizona border where U.S. immigration officials at the time captured and later released migrants.
Grace and her children turned themselves in to U.S. Border Patrol. Because of new legal protections for families and children in the U.S., they were released after one night in detention. The U.S. government filed removal proceedings, and the family received temporary deportation relief so that Grace could appear before an immigration judge to adjudicate her case.
To help finance his family’s crossing, Alex decided to make a final clandestine crossing with eight migrants in the Sonora–Arizona corridor. Hours after beginning their journey, heavily armed men captured and held them hostage in a secluded building. A ransom was requested from U.S. kin of every migrant. Despite Grace’s offer to pay the ransom for his release, his captors refused. Instead, Alex remained captive for several weeks and was physically tortured.
Eventually, he escaped his captors, but was stuck in the desert again. Without food or water, he turned himself in to U.S. Border Patrol agents who documented the physical wounds he sustained from his prolonged torture and placed him in detention.
Alex’s previous unauthorized entry prompted what’s called a “reinstatement of removal,” a provision under the 1996 immigration reform that re-established his 2006 order of removal. At the request of his family, I located and covered the cost of an attorney. Because Alex expressed a well-founded fear of persecution or torture upon return to Mexico, his attorney advised him to apply for either a “withholding of removal” or protection under the Convention Against Torture. Unlike asylum, neither of these remedies offers a path to permanent resident status.
Additionally, the application would have required that Alex remain in detention – potentially six months to a year – during the adjudication of his case. Scholars have noted how prolonged detention can exacerbate post-traumatic stress and other harms that asylum seekers and their families may have suffered in their own countries. Others have identified how prolonged detention compromises due process protections that violate international human rights law and coerce migrants into surrendering to deportation. Several have identified how deportations can be tantamount to a death sentence.
His family feared that he, too, might be killed following deportation and begged him to let the attorney pursue his case. Alex weighed his options and declined legal counsel. Following the end of his sentence and his deportation, he returned to Chiapas. Within a month he made arrangements with another coyote and paid $7,000 to cross via the Chihuahua-Texas border. Like most who attempt re-entry following an apprehension, he succeeded, and is now reunited with his wife and children in the U.S.
Alex says he never wanted to be a coyote. His story provides an opportunity to understand the complex motivations that fuel unauthorized re-entry by deported parents with family in the U.S. A 2009 Department of Homeland Security report found that 21 percent of those who re-enter have no U.S.-born child, while more than one-third are parents of U.S. citizen children. Scholars have also shown that deportees like Alex, who are separated from families in the U.S., are more likely to migrate again than those without family ties.
As the nation considers reforming immigration policy, it is important to remember that deterrence strategies are ineffective in reducing the intention to migrate, particularly among those with family in the U.S. Walls or even detention cells are no match for those with direct experience with crime and violence who have credible fear claims and those separated from their families in the U.S.
In 2017, two years after Alex’s family’s reunification in the U.S., Grace appeared in court for her immigration hearing. The immigration judge issued an order of removal. Like other returnees and recent unauthorized arrivals, both now face the threat of deportation. Until international protocols on the protection of migrants and their families are upheld, the U.S. will continue producing unauthorized persons and families at risk of deportation for years to come.
Oscar Gil-Garcia received funding from the University of California Institute for Mexico and the United States and Chancellor's Postdoctoral Fellowship, University of California, Los Angeles.
For years, incensed Mexicans have demanded that President Enrique Peña Nieto – now in the final stretch of his six-year term – take action. Recently, lawmakers from his Institutional Revolutionary Party proposed a controversial solution: Put Mexico’s military on the streets to fight crime.
A military history of massacres
I’ve been studying the violence in my home country for decades. While something must be done to stem the bloodshed, history shows that militarizing law enforcement will hurt rather than help.
Mexico’s military has actually been fighting crime informally for over a decade. In 2006, former President Felipe Calderón sent 6,500 soldiers to battle cartels in the state of Michoacán. And they never really stopped.
The consequences have been grave. Between 2012 and 2016, Mexico’s attorney general launched 505 investigations into alleged human rights abuses – including torture and forced disappearances – committed by the military.
In 2014, soldiers shot 22 unarmed citizens in the town of Tlatlaya. Later that year, the army was allegedly involved in the unsolved kidnapping of 43 students from a teachers college in southern Mexico.
Much of the military’s extrajudicial violence is undocumented and investigations move slowly, so crimes by the armed forces have been difficult to prosecute. In 11 years, only 16 soldiers have been convicted of human rights abuses in civilian courts.
Supporters of the resulting Internal Security Law, including Secretary of Defense Gen. Salvador Cienfuegos, say the new law will right this wrong. By providing a legal framework for the armed forces to take on law enforcement duties, they argue, it ensures stricter regulation and more oversight.
Security experts, on the other hand, call the Internal Security Law dangerous, saying it delays much-needed police reforms and violates the Mexican Constitution, which prohibits using the military for public security.
The authoritarian connection
The idea of “internal security” has a dark genealogy in Mexican law. It first appeared just after the country’s independence from Spain, in 1822. According to the short-lived Emperor Agustín de Iturbide, his government had the right to protect “the internal order and the external security” of the fledgling nation.
In practice, that meant persecuting those who had opposed Iturbide’s dissolution of Congress and proclamation of himself as Mexico’s new emperor.
Authoritarian regimes have since invoked “internal security” – which made its way into the country’s 1917 constitution – to fight all sorts of rebels, from revolutionaries to student liberals to indigenous discontents.
The new Internal Security Law continues this tradition, giving the president the right to order federal authorities, including the army and the navy, to intervene when other federal and local forces cannot handle certain “threats to internal security.”
Built-in safeguards are supposed to prevent the government from abusing this power. Within 72 hours of such a threat emerging, the president must publish a “designation of protection” that details the specific place and limited time frame of military occupation.
In practice, though, these requirements are optional. In cases of “grave danger,” the law says, the president can take “immediate action.”
The new law contains other concerning contradictions. One article states that peaceful protests do not constitute a threat to Mexico’s internal security. That provision should help prevent a repeat of the 1968 Tlatelolco massacre, in which soldiers in Mexico City gunned down hundreds of student demonstrators.
But another article of the law may undermine that provision by deeming “controlling, repelling or neutralizing acts of resistance” to be a legitimate use of military force.
The most challenged law
Peña Nieto approved the law, but he declared that it would not be enforced until the Supreme Court reviews its constitutionality.
The Supreme Court has now received thousands of legal challenges to the Internal Security Law. Suits alleging that the law encroaches on Mexicans’ basic rights were filed by Mexico’s National Human Rights Commission, 188 congressmen and 43 senators. More than 12,000 citizens have also submitted individual complaints on similar grounds. On Feb. 12, the hugely popular governor of Chihuahua, Javier Corral, traveled to Mexico City to personally file a claim in the name of the people of his state.
No date has yet been set for the 11 Supreme Court justices to hear arguments.
The problem with the police
Another consequence of the Internal Security Law, in my analysis, is that it will further weaken Mexico’s already troubled police force.
According to a December 2017 government report, Mexico has just 0.8 police officers per 1,000 inhabitants – less than half what the U.N. recommends.
The report also notes that just 1 in 4 officers has received sufficient training. And out of 39 police academies, only 6 satisfy the minimum conditions – for example, dormitories, medical services or training infrastructure – to be considered fully functional.
Mexico’s police are also widely perceived as corrupt and ineffective. In part, that’s due to their low salaries. Currently, officers in poor states like Chiapas and Tabasco earn about half the federally recommended minimum monthly salary of 9,993 pesos, or US$500.
To supplement their poverty wages, as Mexicans well know, many police officers have traditionally turned to petty bribery. More recently, some police have gotten involved in more lucrative criminal activity, working with the same drug cartels they’re supposed to be fighting.
Successive Mexican governments have used the shortcomings in the police force to justify sending in soldiers and marines, claiming it’s a provisional measure to get crime under control while the police are professionalized. The new law has turned this temporary solution into national policy.
A spectacular failure
The military is not exempt from corruption, either. And the claim that it can keep Mexicans safe was recently put to its first test. In January, President Peña Nieto had to cancel a trip to the city of Reynosa, in Tamaulipas state, where criminal groups have been violently clashing. The army said it could not guarantee his safety there.
If the military cannot even protect the president, Mexicans ask, what hope do the people have?
Luis Gómez Romero does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
What should a self-driving car do when a nearby vehicle is swerving unpredictably back and forth on the road, as if its driver were drunk? What about encountering a vehicle driving the wrong way? Before autonomous cars are on the road, everyone should know how they’ll respond in unexpected situations.
I develop, test and deploy autonomous shuttles, identifying methods to ensure self-driving vehicles are safe and reliable. But there’s no testing track like the country’s actual roads, and no way to test these new machines as thoroughly as modern human-driven cars have been, with trillions of miles driven every year for decades. When self-driving cars do hit the road, they crash in ways both serious and minor. Yet all their decisions are made electronically, so how can people be confident they’re driving safely?
Fortunately, there’s a common, popular and well-studied method to ensure new technologies are safe and effective for public use: The testing system for new medications. The basic approach involves ensuring these systems do what they’re intended to, without any serious negative side effects – even if researchers don’t fully understand how they work.
The regulations that are created for self-driving cars will have massive effects that ripple throughout the economy and society. The rules are likely to come from some combination of the two current automotive regulators, the federal National Highway Traffic Safety Administration and state departments of transportation.
Federal rules focus primarily on safety standards for structural, mechanical and electrical components of the vehicles, like airbags and seat belts. States can enforce their own safety rules – for example, regulating emissions and handling driver licensing and vehicle registration, which often also includes requiring insurance coverage.
Today’s state and federal rules treat drivers and cars as separate entities. But self-driving cars, by definition, combine the two. Without consistency between those regulations, confusion will reign.
The Obama administration came up with 116 pages of regulations with lots of details, but little understanding of how self-driving cars worked. For example, they called for each car to have human-readable permanent labels listing its specific self-driving capabilities, including limits on speeds, specific highways and weather conditions, all of which would be extremely confusing for users. The regulations also called for ethical decisions to be made “consciously and intentionally” – which is questionable, if not impossible, for a machine.
The Trump administration pared down the rules to 26 pages but has not yet addressed the important issue of testing self-driving cars.
Testing algorithms is much like testing medications. In both cases, researchers can’t always tell exactly why something works (especially in the case of machine learning algorithms), but it is nevertheless possible to evaluate the outcome: Does a sick person get well after taking a medication?
The U.S. Food and Drug Administration requires medicines be tested not for their mechanisms of treatment, but for the results. The two main criteria are effectiveness – how well the medicine treats the condition it’s intended to – and safety – how severe any side effects or other problems are. With this method, it’s possible to prove a medication is safe and effective without knowing how it works.
Similarly, federal regulations could – and should – require testing for self-driving cars’ algorithms. To date, governments have tested cars as machines, ensuring steering, brakes and other functions work properly. Of course, there are also government tests for human drivers.
A machine that does both should have to pass both types of tests – particularly for vehicles that don’t allow for human drivers.
In my view, before allowing any specific self-driving car on the road, NHTSA should require test results from the car and its driving algorithms to demonstrate they are safe and reliable. The closest standard at the moment is California’s requirement that all manufacturers of self-driving cars submit annual reports of how many times a human driver had to take control of its vehicles when the algorithms failed to function properly.
That’s a good first step, but it doesn’t tell regulators or the public anything about what the vehicles were doing or what was happening around them when the humans took over. Tests should examine what the algorithms direct the car to do on freeways with trucks and in neighborhoods with animals, kids, pedestrians and cyclists. Testing should also look at what the algorithms do when vehicle performance and sensor input are compromised by rain, snow or other weather conditions. Cars should run through scenarios with temporary construction zones, four-way intersections, wrong-way vehicles and police officers giving directions that contradict traffic lights, among other situations.
Human driving tests include some evaluations of a driver’s judgment and decision-making, but tests for self-driving cars should be more rigorous because there’s no way to rely on human-centered concepts like instinct, reflex or self-preservation. Any action a machine takes is a choice, and the public should be clear on how likely it is that those choices will be safe ones.
Comparing with humans
Self-driving cars’ algorithms constantly calculate probabilities. How likely is it that a particular shape is a person? How likely is it that the sensor data means the person is walking toward the road? How likely is it that the person will step into the street? How likely is it that the car can stop before hitting her? This is in fact similar to how the human brain works.
That presents a straightforward opportunity for testing autonomous cars and any software updates a manufacturer might distribute to vehicles already on the road: They could present human test drivers and self-driving algorithms with the same scenarios and monitor their performance over many trials. Any self-driving car that does as well as, or better than, people, can be certified as safe for the road. This is very much like the method used in drug testing, in which a new medication’s performance is rated against existing therapies and methods known to be ineffective, like the typical placebo sugar pill.
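To make that parallel concrete, here is a minimal sketch, in Python, of what scenario-based comparison testing might look like. Everything in it is an illustrative assumption – the scenario names, the human baseline rates, the trial counts and the pass criterion are hypothetical, not drawn from any actual certification program or manufacturer’s test suite.

```python
# A hypothetical certification check: in each scenario, the algorithm must
# demonstrate a safe-handling rate that statistically meets or exceeds an
# assumed human baseline. All numbers below are illustrative assumptions.
import random
from statistics import NormalDist

# Assumed probability that an average human driver handles one trial of
# each scenario safely (hypothetical figures, not real crash statistics).
HUMAN_BASELINES = {
    "four_way_intersection": 0.990,
    "temporary_construction_zone": 0.985,
    "wrong_way_vehicle": 0.950,
    "officer_contradicts_light": 0.940,
}

def run_trials(safe_probability: float, n_trials: int) -> int:
    """Simulate n_trials of one scenario; return how many were handled safely."""
    return sum(random.random() < safe_probability for _ in range(n_trials))

def lower_bound(successes: int, n: int, confidence: float = 0.95) -> float:
    """One-sided normal-approximation lower bound on the true safe-handling rate."""
    p_hat = successes / n
    z = NormalDist().inv_cdf(confidence)
    return p_hat - z * (p_hat * (1 - p_hat) / n) ** 0.5

def certify(algorithm_rates: dict, n_trials: int = 10_000) -> bool:
    """Pass only if the algorithm's lower confidence bound meets or beats
    the human baseline in every scenario."""
    passed = True
    for scenario, human_rate in HUMAN_BASELINES.items():
        bound = lower_bound(run_trials(algorithm_rates[scenario], n_trials), n_trials)
        ok = bound >= human_rate
        passed = passed and ok
        print(f"{scenario:30s} human={human_rate:.3f} "
              f"algo_lower_bound={bound:.3f} {'PASS' if ok else 'FAIL'}")
    return passed

if __name__ == "__main__":
    # A hypothetical algorithm: better than the human baseline everywhere
    # except when a police officer contradicts the traffic lights.
    algo = {
        "four_way_intersection": 0.995,
        "temporary_construction_zone": 0.992,
        "wrong_way_vehicle": 0.970,
        "officer_contradicts_light": 0.930,
    }
    print("Certified:", certify(algo))
```

As in a drug trial, the regulator never needs to see inside the algorithm: it judges only the outcome, scenario by scenario, against the established “treatment” – the human driver.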
Companies should be free to test any innovations they want on their closed tracks, and even on public roads with human safety drivers ready to take the wheel. But before self-driving cars become regular products available for anyone to purchase, the public should be shown clear proof of their safety, reliability and effectiveness.
Srikanth Saripalli does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
“I did not invent penicillin. Nature did that. I only discovered it by accident.” - Alexander Fleming
Natural products used in drug therapies are complex, diverse, highly specialized compounds produced by living things. Many evolved as defense mechanisms against other organisms. Certain microbes, for example, spew out potent antibacterial toxins that kill competing microbe species. Streptomycin, chloramphenicol and tetracycline – three of the most widely used antibiotics – were all discovered in soil bacteria. Nature is the grand architect behind a major proportion of modern drugs.
At a time when antibiotic-resistant infections are running rampant, the need for effective new drugs is acute. Every year, drug-resistant bacteria cause over 2 million infections and 23,000 deaths in the United States alone. And yet, despite their effectiveness, pharmaceutical companies often overlook natural compounds, instead focusing on subpar synthetic ones. Using current technologies to revisit natural products could help researchers identify badly needed new drugs, particularly antibiotics.
The first ‘golden age’ of antibiotics
Fleming’s discovery of penicillin in 1928 launched the antibiotic “golden age.” In the years surrounding World War II, the pharmaceutical industry churned out dozens of new antibiotics in over 20 unique categories. A few were engineered in the lab, but most were discovered in microbes. These new drugs led to a dramatic decrease in bacterial infections worldwide, increasing the average life expectancy by several decades. Things were looking good.
Sadly, it couldn’t last. In the 1960s, the discovery of new antibiotic categories, or classes, came to a screeching halt. Since then, only two new classes have come to market. After years of blasting infections with the same classes of antibiotics, organisms evolved resistance mechanisms, and many existing antibiotics stopped working. Researchers had picked the low-hanging fruit of antibacterials, and the arsenal was drying up. Bacteria are developing resistance faster than we’re coming up with new weapons.
Rise of high-throughput screening
Researchers in the 1980s started to focus on a rising technology called high-throughput screening (HTS). Automated systems test thousands – even millions – of compounds per year. The goal is to identify compounds that would spell bad news for infectious agents. Researchers observe how effective each compound is against a potential target – for example, in disrupting the bacterial cell wall or hindering its ability to synthesize DNA, RNA or protein. Many HTS systems aim for just one of these processes at a time in a plastic multi-well plate.
Some top-of-the-line HTS robots can push through 100,000 compounds per day. The idea is that by screening millions of compounds, researchers are bound to find some with antimicrobial activity.
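For illustration, here is a minimal sketch of how a “hit” might be flagged in a whole-cell antibacterial screen. The compound names, plate-reader signals and the 50 percent inhibition cutoff are invented for the example; real screening campaigns set their own controls and thresholds.

```python
# A toy version of hit-picking in a high-throughput screen: each compound's
# raw growth signal is scaled between the plate's own controls, and anything
# above an assumed inhibition cutoff is flagged for follow-up.
from statistics import mean

# Hypothetical raw growth signals (arbitrary plate-reader units).
negative_controls = [98.0, 101.5, 99.2, 100.8]   # bacteria, no drug: full growth
positive_controls = [4.1, 5.0, 3.8, 4.6]         # bacteria + known antibiotic

compound_signals = {
    "compound_0001": 97.5,   # grows like the untreated wells: inactive
    "compound_0002": 42.0,   # partial growth inhibition
    "compound_0003": 6.2,    # near-complete inhibition
}

HIT_CUTOFF = 50.0  # percent inhibition required to call a hit (assumed)

def percent_inhibition(signal: float) -> float:
    """Scale a signal between the controls: 0% = untreated growth,
    100% = growth fully blocked, matching the antibiotic wells."""
    neg, pos = mean(negative_controls), mean(positive_controls)
    return 100.0 * (neg - signal) / (neg - pos)

for name, signal in compound_signals.items():
    inhibition = percent_inhibition(signal)
    status = "HIT" if inhibition >= HIT_CUTOFF else "inactive"
    print(f"{name}: {inhibition:5.1f}% inhibition -> {status}")
```

Note that this is a whole-cell readout of the kind the author endorses later in the piece: the screen asks only whether the bacteria stopped growing, not which molecular target was responsible.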
To save costs, pharmaceutical companies put together compound libraries: huge databases of small molecules in just about every configuration they could think of. Despite their proven track record, many companies decided natural products had low economic value and instead turned to cheaper synthetic chemicals. These are screened against pathogen targets in the search for “hits”: cases where the database molecule affects the infectious agent.
Prioritizing quantity over quality
It turns out, scientists aren’t as good at designing antibiotics as we’d hoped. Compared to natural products, synthetic compounds simply haven’t been a high-quality source of drugs. Even after years of fine-tuning HTS, success rates for novel compounds are extremely low. Pharmaceutical companies might spend years looking for drug candidates and still come up empty.
In fact, natural products still account for half of newly discovered drugs since the 1980s, and approval rates for naturally derived products are climbing, even though very few are screened compared to synthetic compounds.
Time and money are precious resources in drug development; it takes 10 to 15 years and millions of dollars – even billions – to develop a single drug from “farm-to-table.” There are four major steps of drug development:
- Screen compound library and identify “hits”;
- Confirm hits with further testing, at which point they become “lead compounds”;
- Advance leads through clinical trials;
- And finally, successful release of the drug.
From beginning to end, maybe 1 in 10 million compounds screened – and that’s a generous estimate – will become a successful drug for infectious diseases. This number has not significantly improved over the years.
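The scale of that attrition is easier to feel with a back-of-the-envelope calculation. The throughput and success-rate figures come from the text above; the one-year campaign length is an assumption chosen only to make the numbers concrete.

```python
# Funnel arithmetic for the screening pipeline described above.
compounds_per_day = 100_000        # top-end HTS throughput cited above
days_of_screening = 365            # assumed: one year of continuous screening
success_rate = 1 / 10_000_000      # the "generous estimate" quoted above

compounds_screened = compounds_per_day * days_of_screening
expected_drugs = compounds_screened * success_rate

print(f"Compounds screened in a year: {compounds_screened:,}")  # 36,500,000
print(f"Expected successful drugs:    {expected_drugs:.2f}")    # about 3.65
```

Even running a top-end robot around the clock for a year, the expected yield is a handful of drugs – which is why the quality of the compounds entering the funnel matters so much.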
Bring back the old-school methods
One reason natural products are such a promising resource for new drugs is that they are more biologically relevant than synthetics; they’re ready-made to be active within cells. They contain fewer heavy metals and can be extremely stable. Most importantly, because of their high complexity and diversity, a single natural compound often simultaneously targets multiple bacterial processes (for instance, both the cell wall and protein synthesis), making it less susceptible to resistance.
In comparison, high-throughput screening usually involves pinpointing only single targets – for instance, a particular bacterial enzyme or viral protein. Then, follow-up experiments will determine whether the drug-target interaction actually works within a cell, and not just in a test tube. This is incredibly inefficient and is a crucial limitation of classic HTS.
The most effective antibiotics are discovered by testing for antibacterial, antifungal or antiviral activity first, and then teasing apart the molecular mechanism. This means turning the focus back to bacterial assays, where compounds are tested in live bacterial cell cultures from the start. Newer HTS systems do target whole-cell systems, but much of the pharmaceutical industry still persists in using synthetic small-molecule libraries and shies away from naturally derived products.
This doesn’t mean that HTS has no place in drug development. But a meal is only as good as its ingredients, and high throughput is useless without high-quality compounds.
There are certainly barriers to natural product research. When it comes to plant-based chemicals, for example, high throughput can be a challenge; purifying a specific chemical from pulverized plant material can be difficult to do the exact same way every time. Natural products are also difficult to patent, so for pharmaceutical companies, large compound libraries are more economically viable.
However, with improved technology, HTS of natural products is becoming a reality, particularly when it comes to compounds produced by microbes. Mass production is also getting easier; not just for bacteria, which are straightforward to culture on a large scale, but for plant-based chemicals too. In 2006, for example, researchers at UC Berkeley found a way to engineer yeast to mass-produce the precursor for artemisinin, the antimalarial and anti-TB compound that comes from the herb Artemisia annua.
Potential gold mines for natural compounds
There is a huge pool of untapped resources for natural anti-pathogenic molecules. A very small portion – less than 15 percent – of terrestrial plants has been explored for natural product research; strikingly, less than 1 percent of the microbial world has been tapped.
Almost two-thirds of natural products come from a group of bacteria called actinomycetes, which include the streptomycetes. They produce antibacterial, antifungal, antiparasitic, immunosuppressant and antiviral compounds. Potent antiviral compounds have also been found in fungi, plants and even marine sponges. Drugs could potentially even come from the microbiome in our own gut.
Currently, I study certain plant-based extracts, traditionally used as anti-inflammatories in non-Western medicine, that actually have antiviral properties. Feverfew, for example, can protect cells against viruses like herpes simplex and Epstein-Barr by inhibiting inflammatory pathways. Artemisinin, mentioned earlier as an antimalarial drug, also has broad-spectrum activity against many different viruses.
Drug-resistant infections are a major global health threat. It is critical that drug developers push for high-quality source material in their search for new drugs. It’s time to use technology to revive and upgrade tried-and-true methods.
Natalie Jones Slivinski does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Uranium – the raw material for nuclear power and nuclear weapons – is having a moment in the spotlight.
Companies such as Energy Fuels, Inc. have played well-publicized roles in lobbying the Trump administration to reduce federal protection for public lands with uranium deposits. The Defense Department’s Nuclear Posture Review calls for new weapons production to expand the U.S. nuclear arsenal, which could spur new domestic uranium mining. And the Interior Department is advocating more domestic uranium production, along with other materials identified as “critical minerals.”
What would expanded uranium mining in the U.S. mean at the local level? I have studied the legacies of past uranium mining and milling in Western states for over a decade. My book examines dilemmas faced by uranium communities caught between harmful legacies of previous mining booms and the potential promise of new economic development.
These people and places are invisible to most Americans, but they helped make the United States an economic and military superpower. In my view, we owe it to them to learn from past mistakes and make more informed and sustainable decisions about possibly renewing uranium production than our nation made in the past.
Mining regulations have failed to protect public health
Today most of the uranium that powers U.S. nuclear reactors is imported. But many communities still suffer impacts of uranium mining and milling that occurred for decades to fuel the U.S.-Soviet nuclear arms race. These include environmental contamination, toxic spills, abandoned mines, under-addressed cancer and disease clusters and illnesses that citizens link to uranium exposure despite federal denials.
As World War II phased into the Cold War, U.S. officials rapidly increased uranium production from the 1940s to the 1960s. Regulations were minimal to nonexistent and largely unenforced, even though the U.S. Public Health Service knew that exposure to uranium had caused potentially fatal health effects in Europe, and was monitoring uranium miners and millers for health problems.
Today the industry is subject to regulations that address worker health and safety, environmental protection, treatment of contaminated sites and other considerations. But these regulations lack uniformity, and enforcement responsibilities are spread across multiple agencies.
This creates significant regulatory gaps, which are worsened by a federalist approach to regulation. In the 1970s the newly created Nuclear Regulatory Commission initiated an Agreement States program, under which states take over regulating many aspects of uranium and nuclear production and waste storage. To qualify, state programs must be “adequate to protect public health and safety and compatible with the NRC’s regulatory program.”
Today 37 states have joined this program and two more are applying. Many Agreement States struggle to enforce regulations because of underfunded budgets, lack of staff and anti-regulatory cultures. These problems can lead to piecemeal enforcement and reliance on corporate self-regulation.
For example, budget cuts in Colorado have forced the state to rely frequently on energy companies to monitor their own compliance with regulations. In Utah, the White Mesa Mill – our nation’s only currently operating uranium mill – has a record of persistent problems related to permitting, water contamination and environmental health, as well as tribal sacred lands and artifacts.
Neglected nuclear legacies
Uranium still affects the environment and human health in the West, but its impacts remain woefully under-addressed. Some of the poorest, most isolated and ethnically marginalized communities in the nation are bearing the brunt of these legacies.
There are approximately 4,000 abandoned uranium mines in Western states. At least 500 are located on land controlled by the Navajo Nation. Diné (Navajo) people have suffered some of the worst consequences of U.S. uranium production, including cancer clusters and water contamination.
A 2015 study found that about 85 percent of Diné homes are still contaminated with uranium, and that tribe members living near uranium mines have more uranium in their bones than 95 percent of the U.S. population. Unsurprisingly, President Donald Trump’s decision to reduce the Bears Ears National Monument has reinvigorated discussion over ongoing impacts of uranium contamination across tribal and public land.
Despite legislation such as the Radiation Exposure Compensation Act of 1990, people who lived near uranium production or contamination sites often became forgotten casualties of the Cold War. For instance, Monticello, Utah, hosted a federally owned uranium mill from 1942 to 1960. Portions of the town were even built from tailings left over from uranium milling, which we now know were radioactive. This created two Superfund sites that were not fully remediated until the late 1990s.
Monticello residents have dealt with cancer clusters, increased rates of birth defects and other health abnormalities for decades. Although the community has sought federal recognition and compensation since 1993, its requests have been largely ignored.
Today tensions over water access and its use for uranium mining are creating conflict between regional tribes and corporate water users around the North Rim of the Grand Canyon. Native residents, such as the Havasupai, have had to defend their water rights and fear losing access to this vital resource.
Uranium production is a boom-and-bust industry
Like any economic activity based on commodities, uranium production is volatile and unstable. The industry has a history of boom-bust cycles. Communities that depend on it can be whipsawed by rapid growth followed by destabilizing population losses.
The first U.S. uranium boom occurred during the early Cold War and ended in the 1960s due to oversupply, triggering a bust. A second boom began later in the decade, when the federal government authorized private commercial investment in nuclear power. But the Three Mile Island (1979) and Chernobyl (1986) disasters ended this second boom.
Uranium prices soared once again from 2007 to 2010. But the 2011 tsunami and meltdown at Japan’s Fukushima Dai-ichi nuclear plant sent prices plummeting once again as nations looked for alternatives to nuclear power.
Companies like Energy Fuels maintain – especially in public meetings with uranium communities – that new production will lead to sustained economic growth. This message is powerful stuff. It boosts support, sometimes in the very communities that have suffered most from past practices.
But I have interviewed Westerners who worry that as production methods become more technologically advanced and mechanized, energy companies may increasingly rely on bringing in out-of-town workers with technical and engineering degrees rather than hiring locals – as has happened in the coal industry. And the core tensions of boom-bust economic volatility and instability persist.
Uranium production advocates contend that new “environmentally friendly” mills and current federal regulations will adequately protect public health and the environment. Yet they offer little evidence to counter White Mesa Mill’s poor record.
In my view, there is little evidence that new uranium production would be more reliably regulated or economically stable today than in the past. Instead, I expect that the industry will continue to privatize profits as the public absorbs and subsidizes its risks.
Stephanie Malin receives funding from the National Institute of Environmental Health Sciences, the American Sociological Association's Spivack Community Action Research Initiative Grant, the Rural Sociological Society's Early Career Award, and the CSU Water Center and School for Global Environmental Sustainability.
Concerns about whether colleges are delivering for their students have led both the federal government and many state governments to propose new accountability measures that seek to spur colleges to improve their performance.
This is one of the key goals of the PROSPER Act, a House bill to reauthorize the federal Higher Education Act, which is the most important law affecting American colleges and universities. For example, one provision in the act would end access to federal student loans for students who major in subjects with low loan repayment rates.
Accountability is also one of the key goals of efforts in many state legislatures to tie funding for colleges and universities to their performance.
As a researcher who studies higher education accountability – and also just wrote a book on the topic – I have examined why policies that have the best of intentions often fail to produce their desired results. Two examples in particular stand out.
Federal and state failures
The first is a federal policy designed to end colleges’ access to federal grants and loans if too many students default on their loans. Only 11 colleges have lost federal funding since 1999, even though nearly 600 colleges have fewer than 25 percent of their students paying down any principal on their loans five years after leaving college, according to my analysis of data available on the federal College Scorecard. In other words, although students at those colleges may be avoiding default, many will be struggling to repay their loans for years to come.
The second is state performance funding policies, which have encouraged colleges to make much-needed improvements to academic advising but have not resulted in meaningful increases in the number of graduates.
Based on my research, here are four of the main reasons why many accountability efforts fall short.
1. Competing initiatives
Colleges face many pressures that provide conflicting incentives, which in turn makes any individual accountability policy less effective. In addition to the federal government and state governments, colleges face strong pressures from other stakeholders. Accrediting agencies require colleges to meet certain standards. Faculty and student governments have their own visions for the future of their college. And private sector organizations, such as college rankings providers, have their own visions for what colleges should prioritize. (In the interest of full disclosure, I am the methodologist for Washington Monthly magazine’s college rankings, which ranks colleges on social mobility, research and service.)
As one example of these conflicting pressures, consider a public research university in a state with a performance funding policy that ties money to the number of students who graduate. One way to meet this goal is to admit more students, including some who have modest ACT or SAT scores but are otherwise well-prepared to succeed in college. This strategy would hurt the university in the U.S. News & World Report college rankings, which judge colleges in part based on ACT/SAT scores, selectivity and academic reputation.
Research shows that students considering selective colleges are influenced by rankings, so a university may choose to focus on improving its rankings instead of broadening access in an effort to get more state funds.
2. Policies can be gamed
Colleges can satisfy some performance metrics by gaming the system, instead of actually improving their performance. The theory behind many accountability policies is that colleges are not operating in an efficient manner and that they must be given incentives in order to improve their performance. But if colleges are already operating efficiently – or if they do not want to change their practices in response to an external mandate – the only option to meet the performance goal may be to try to game the system.
An example of this practice involves the federal government’s student loan default rate measure, which tracks the percentage of borrowers who default on their loans within three years of when they are supposed to start repaying them. Colleges that are concerned about their default rates can encourage students to enroll in temporary deferment or forbearance plans. These plans result in students owing more money in the long run, but they also push the risk of default outside the three-year window that the federal government tracks, which essentially lets colleges off the hook.
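A minimal sketch may help make the window mechanics concrete. The three-year window comes from the text above; the dates, borrowers and the simple year arithmetic are hypothetical illustrations, not the official cohort default rate formula.

```python
# Hypothetical illustration of the three-year default-tracking window.
TRACKING_YEARS = 3

def counts_in_default_rate(repayment_start_year: int, default_year: int) -> bool:
    """A default counts against the college only if it occurs within
    the tracked window after repayment begins."""
    return default_year - repayment_start_year <= TRACKING_YEARS

# Borrower A defaults two years into repayment: counted.
print(counts_in_default_rate(2018, 2020))  # True

# Borrower B is steered into forbearance and defaults four years in:
# outside the window, so the college's official rate is untouched.
print(counts_in_default_rate(2018, 2022))  # False
```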
3. Unclear connections
It’s hard to tie individual faculty members to student outcomes. The idea of evaluating teachers based on their students’ outcomes is nothing new; 38 states require student test scores to be used in K-12 teacher evaluations, and most colleges include student evaluations as a criterion of the faculty review process. Tying an individual teacher to a student’s achievement test scores has been controversial in K-12 education, but it is far easier than identifying how much an individual faculty member contributes to a student’s likelihood of graduating from college or repaying their loans.
For example, a student pursuing a bachelor’s degree will take roughly 40 courses during their course of study. That student may have 30 different professors over four or five years. And some of them may no longer be employed when the student graduates. Colleges can try to encourage all faculty to teach better, but it’s difficult to identify and motivate the worst teachers because of the elapsed time between when a student takes a class and when he or she graduates or enters the workforce.
4. Politics as usual
Even when a college should be held accountable, politics often gets in the way. Politicians may be skeptical of the value of higher education, but they will work to protect their local colleges, which are often among the largest employers in their home states. This means that politicians often act to stop a college from losing money under an accountability system.
Take for example Senate Majority Leader Mitch McConnell, R-Ky., who was sympathetic to the plight of a Kentucky community college with a student loan default rate that should have resulted in a loss of federal financial aid. He got a provision added to the recent federal budget agreement that allowed only that college to appeal the sanction.
Robert Kelchen is the methodologist for Washington Monthly magazine's college rankings.
“Is this person a citizen of the United States?”
When the Trump administration proposed adding that question to the 2020 census, census experts, over 100 national scientific and civil rights organizations, the Congressional Hispanic Caucus, The Leadership Conference on Civil and Human Rights, and Democratic senators and House members protested vehemently.
I am a social scientist who studies immigration. I have used census data on immigration and citizenship in my research for over two decades, and I have urged government statistical agencies before to collect more data about immigrants. But I don’t think it’s wise to collect citizenship status in the 2020 census. Doing so would not only raise the risk of collecting inaccurate data, but also reduce public confidence in the census itself.
On the one hand, data on citizenship is valuable. In any modern democracy, statistical data is essential for informing policy debates and guiding the implementation of governmental programs. Without it, decisions would almost certainly be too easily shaped by anecdotal evidence and personal biases.
Citizenship data has been used to track political participation and inclusion of immigrant groups. Citizenship is strongly associated with access to public assistance, health care and jobs. Social scientists and policy analysts rely heavily on survey items on citizenship to understand immigrants’ well-being and their impact on host societies.
What’s more, the U.S. Census Bureau has successfully collected confidential information on citizenship status in the past. The citizenship question was first introduced in the 1870 census and was part of every census from 1890 through 1950. It was included in the “long” form of the census – administered to 1 in 6 households – as late as 2000. It’s also asked in the American Community Survey, a survey that the Census Bureau conducts every year.
Immigrants tend to be willing survey respondents. In a 2010 study, Hispanic immigrants were more likely than U.S.-born Hispanics to agree that the census is good for the Hispanic community. They were also more likely to correctly understand that the census cannot be used to determine whether a person is in the country legally, and that the bureau must keep their responses confidential.
In another study I published in 2014 with two colleagues, James Bachmeier and Frank Bean, we found that nearly all immigrants answered questions about their immigration and documentation status. These response rates are on par with or better than typical survey questions on health or income. Moreover, immigrants’ responses to these questions appeared to be fairly accurate.
Harming the data
However, the political climate surrounding immigration has changed in the last year.
Not all immigrants have been cooperative respondents in the past. Those who are more likely to be undocumented have been undercounted in past censuses and were more likely to incorrectly report themselves as U.S. citizens.
The Trump administration’s anti-immigrant rhetoric and policy may have increased mistrust among all immigrants, not just those who are undocumented. During focus group interviews conducted by the Census Bureau roughly six months into Trump’s presidency, immigrants appeared anxious and reluctant to cooperate with Census Bureau interviewers. They mentioned fears of deportation, the elimination of DACA, a “Muslim ban” and ICE raids. One respondent walked out when the questionnaire turned to the topic of citizenship, leaving the interviewer alone in his apartment. Respondents even omitted or gave false names on household rosters to avoid “registering” with the Census Bureau. Interviewers remarked that it was much easier to collect data on immigration and citizenship just a few years ago than it is now.
It’s not yet clear whether the fears seen in the focus group interviews are widespread or how such fears would affect response rates if the citizenship question were added to the 2020 census. Additionally, researchers haven’t yet worked out a way to ask the citizenship question so it’s not perceived as threatening.
Unfortunately, there’s not enough time to find out. A finalized questionnaire must be submitted to Congress by the end of March.
What to do in 2020
I served on the Census Advisory Board from 2008 to 2011 and have personally witnessed the time and effort it takes for the Census Bureau to develop questions for the census. Officials must pay meticulous attention to the exact question wording, response categories, ordering and questionnaire layout.
I believe adding a citizenship question without adequate testing could severely reduce participation in the 2020 census among the country’s 44 million immigrants and the additional 32 million U.S.-born people who live with them.
The social and economic consequences of a low response rate for the 2020 census would be severe. Even small errors in coverage could shift the distribution of political power and federal funds, as well as reduce the effectiveness of public health systems and other government functions.
Perhaps even worse, high coverage error in the 2020 census could undermine the public’s trust in the census as the nation’s source of information on the size, growth and geographic distribution of the U.S. population.
This occurred a century ago, as historian Margo Anderson described in her book, “The American Census.” The 1920 census revealed dramatic shifts in population from rural to urban areas, as large waves of Eastern and Southern European immigrants settled predominantly in American cities. Congress, fearing the political ramifications of these changes, rejected the results of the 1920 census and voted not to redistribute the seats of the House according to the most recent census data. A similar rejection of the results of the 2020 census would likely result in a constitutional crisis today.
Citizenship data would be valuable. But the risks of poor data quality – or the erosion of public trust in the census and other governmental institutions – far outweigh the potential benefits. Given that other current data on citizenship are already available, why take unnecessary risks when the stakes are so high?
Jennifer Van Hook receives funding from the National Institutes of Health. She is Roy C. Buck Professor of Sociology and Demography at the Pennsylvania State University, and is the nonresident fellow of the Migration Policy Institute.
When 17 people were killed at Marjory Stoneman Douglas High School in Parkland, Florida, it was just the latest in a tragic list of mass shootings, many of them at schools.
Then something different happened: Teens began to speak out. The Stoneman Douglas students held a press conference appealing for gun control. Teens in Washington, D.C., organized a protest in front of the White House, with 17 lying on the ground to symbolize the lives lost. More protests organized by teens are planned for the coming months.
Teens weren’t marching in the streets calling for gun control after the Columbine High School massacre in 1999. So why are today’s teens and young adults – whom I’ve dubbed “iGen” in my recent book on this generation – speaking out and taking action?
With mass shootings piling up one after another, this is a unique historical moment. But research shows that iGen is also a unique generation – one that may be especially sensitive to gun violence.
Keep me safe
People usually don’t think of teenagers as risk-averse. But for iGen, risk aversion has been a central tenet of their upbringing and outlook.
During their childhoods, they experienced the rise of the helicopter parent, anti-bullying campaigns and, in some cases, being forced to ride in car seats until age 12.
Their behavior has followed suit. For my book, I conducted analyses of large, multi-decade surveys. I found that today’s teens are less likely to get into physical fights and less likely to get into car accidents than teens just 10 years ago. They’re less likely to say they like doing dangerous things and aren’t as interested in taking risks. Meanwhile, since 2000, rates of teen binge drinking have fallen by half.
With the culture so focused on keeping children safe, many teens seem incredulous that extreme forms of violence against kids can still happen – and yet so many adults are unwilling to address the issue.
“We call on our national and state legislatures to finally act responsibly and reduce the number of these tragic incidents,” said Eleanor Nuechterlein and Whitney Bowen, the teen organizers of the D.C. lie-in. “It’s essential that we all feel safe in our classrooms.”
Treated with kid gloves
In a recent analysis of survey data from 8 million teens since the 1970s, I also found that today’s teens tend to delay a number of “adult” milestones. They’re less likely than their predecessors to have a driver’s license, go out without their parents, date, have sex, and drink alcohol by age 18.
This could mean that, compared to previous generations, they’re more likely to think of themselves as children well into their teen years.
As 17-year-old Stoneman Douglas High School student David Hogg put it, “We’re children. You guys are the adults. You need to take some action.”
Furthermore, as this generation has matured, they’ve witnessed stricter age regulations for young people on everything from buying cigarettes (with the age minimum raised to 21 in several states) to driving (with graduated driving laws).
Politicians and parents have been eager to regulate what young people can and can’t do. And that’s one reason some of the survivors find it difficult to understand why gun purchases aren’t as regulated.
“If people can’t purchase marijuana or alcohol at the age of 18, why should they be given access to guns?” asked Stoneman Douglas High School junior Lyliah Skinner.
She has a point: The shooter, Nikolas Cruz, is 19. Under Florida’s laws, he could legally possess a firearm at age 18. But – because he’s under 21 – he couldn’t buy alcohol.
Libertarianism – with limits
At the same time, iGen teens – like their millennial predecessors – are highly individualistic. They believe the rights of the individual should trump traditional social rules. For example, I found that they’re more supportive of same-sex marriage and legalized marijuana than previous generations were at the same age.
Their political beliefs tend to lean toward libertarianism, a philosophy that favors individual rights over government regulations, including gun regulation. Sure enough, support for protecting gun rights increased among millennials and iGen between 2007 and 2016.
But even a libertarian ideologue would never argue that individual freedom extends to killing others. So perhaps today’s teens are realizing that one person’s loosely regulated gun rights can lead to another person’s death – or the death of 17 of their teachers and classmates.
The teens’ demands could be seen as walking this line: They’re not asking for wholesale prohibitions on all guns. Instead, they’re hoping for reforms supported by most Americans, such as restricting the sale of assault weapons and requiring more stringent background checks.
In the wake of the Stoneman Douglas High School shooting, the teens’ approach to activism – peaceful protest, a focus on safety and calls for incremental gun regulation – is fitting for this generation.
Perhaps iGen will lead the way to change.
Jean Twenge has received funding from the Russell Sage Foundation and the National Institutes of Health.
Since 20 children were gunned down at Sandy Hook Elementary School in December 2012, we’ve seen public calls for the release of crime scene photos – the idea being that the visceral horror evoked by images of young, brutalized bodies could spur some sort of action to combat the country’s gun violence epidemic.
The day after the Parkland, Florida, high school shooting, a Slate article echoed the demand for crime scene photos to be released, arguing that if Americans could actually see the bloodshed, we might finally say, “Enough is enough.”
As a scholar who specializes in photojournalism ethics, I’ve thought extensively about how journalism can responsibly cover gun violence, balancing the moral imperatives of seeking truth while minimizing harm. I’ve also studied how images can galvanize viewers.
Fundamental questions remain: What is the line between informing audiences and exploiting victims and their families? Should the media find a balance between shocking and shielding audiences? And when it comes to mass shootings – and gun violence more broadly – if outlets did include more bloody images, would it even make a difference?
The limitations of a photo
On the same day as the Parkland shooting, my research on news images of mass shootings was published. Given the intense yet fleeting nature of media coverage, I wanted to examine how news outlets cover these crimes, specifically through the lens of visual reporting.
The study analyzed nearly 5,000 newspaper photos from three school shootings: Virginia Tech, Sandy Hook and Umpqua Community College. Of those images, only 5 percent could be characterized as graphic in nature.
Most depicted the shock and grief of survivors, family and friends. These elements certainly make up an important part of the story. Nonetheless, they create a narrative where, as the Slate article put it, “mass shootings are bloodless.”
Does that matter?
Research has shown that when audiences feel emotionally connected with news events, they’re more likely to change their views or take action. Photographs of violence and bloodshed can certainly serve as a conduit for this emotional connection. Their realism resonates, and they’re able to create a visceral effect that can arouse a range of emotions: sorrow, disgust, shock, anger.
But the power of images is limited. After particularly shocking images appear, what we tend to see are short bursts of activism. For example, in 2015, following the publication of the harrowing image of a drowned Syrian boy lying facedown in the sand, donations to the Red Cross briefly spiked. But within a week, they returned to their typical levels.
The ethics of violent imagery
If a graphic image can inspire some action – even if it’s minimal and fleeting – do media outlets have an obligation to run more photos of mass shooting victims?
Perhaps. But other concerns need to be weighed.
For one, there are the victims’ families. Widely disseminated images of their massacred loved ones could no doubt add to their already unthinkable grief.
Moreover, we exist in a media landscape that overwhelms us with images. Individual photographs become harder to remember, to the point that even graphic images of bloodshed could blur into the visual noise.
Another concern is the presentation of these images. As media consumers, so much of what we see comes from manipulated, sensationalized and trivialized social media feeds. As a colleague and I wrote last year, social media “begs us to become voyeurs” as opposed to informed news consumers. In a digital environment, these images could also be easily appropriated for any number of purposes – from pornography to hoaxes – and spread across social media, to the point that their authenticity would be lost.
There’s another unintended consequence: Grisly images could inspire another mass shooting. Research indicates that news coverage of mass shootings – and in particular the attention given to body counts and the perpetrators themselves – can have a contagious effect on would-be mass killers.
Journalism has a responsibility to inform audiences, and sometimes a graphic image does that in a way that words can’t.
However, this doesn’t mean that any and all gruesome images should be published. There are professional guidelines for deciding whether to publish these types of images – mainly, to consider the journalistic purpose of publishing them and the “overriding and justifiable need to see” them.
The extent to which graphic images should be present in our news media is an ongoing debate. And it’s one that must continue.
A new image emerges
Following mass shootings, there’s a predictable pattern of news media coverage. There are the breaking news reports filled with speculation. Then details of the perpetrator emerge. Reporters and pundits question whether or not it was an act of terrorism. Elected officials respond with “thoughts and prayers,” and debates about mental health and gun control rage. Finally, there’s coverage of the vigils and funerals.
But this time, there’s something new: images of resistance.
Students at Marjory Stoneman Douglas High School are stepping up and demanding action from the country’s elected leaders.
In an impassioned speech, senior Emma Gonzalez chastised lawmakers, stating, “We are up here standing together because if all our government and president can do is send thoughts and prayers, then it’s time for victims to be the change that we need to see.”
This, in the end, may prove to be more effective than any images of bloodshed or grief. Fanning out across news outlets and social media networks, these images of resistance seem to be spurring action, with school walkouts and nationwide protests against gun violence in the works.
Images of protest, courage and resilience – from high school students, no less – might have the power to sink in.
Perhaps it will be these images – not those of bloodied victims – that will stir people from complacency and move them to action.
Nicole Smith Dahmen is a supporter of the advocacy organizations Moms Demand Action and Sandy Hook Promise.
For many young adults, the college years are filled with excitement, as students gain independence and establish new adult identities and behaviors. However, not all of those behaviors are healthy. Typical changes in college student behavior include a decrease in exercise and activity levels and an increase in sitting or sedentary time, along with shifts in eating and sleeping patterns, increased stress, weight fluctuations, and tobacco, alcohol and drug use.
Without intervention, these typical college life behaviors have the potential to become cardiovascular disease (CVD) risk factors during college and further develop into CVD during adult years.
For my doctoral dissertation, I recently completed a study investigating heart disease risk factors in college students at Colorado State University (CSU). I found a total of 434 CVD risk factors among 180 students, and many of the students did not perceive themselves to be at risk.
I recruited 180 students between 18-25 years of age to participate. I evaluated these specific risk factors: nicotine use; family history of heart disease; elevated systolic blood pressure (the top number on your blood pressure reading); elevated diastolic blood pressure (the bottom number); elevated cholesterol levels; low high-density lipoproteins (HDLs); elevated low-density lipoproteins (LDLs); elevated triglycerides; elevated fasting glucose; inactivity; and excess weight.
Among the 180 study participants, I identified that 84 percent, or 151 students, had at least one CVD risk factor; 62 percent, or 112 students, had at least two; and 38 percent, or 68 students, had three or more.
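Those percentages follow directly from the counts. Here is a minimal sketch in Python, purely illustrative, that recomputes the prevalence figures from the group sizes reported above (the cumulative “at least k risk factors” reading is the only one consistent with a sample of 180 students):

```python
# Recompute the reported prevalence percentages from the study's raw counts.
# Counts are taken from the article; "at least k" is the cumulative reading.
total_students = 180
students_with_at_least = {1: 151, 2: 112, 3: 68}  # k risk factors -> count

for k, count in students_with_at_least.items():
    pct = 100 * count / total_students
    print(f"at least {k} risk factor(s): {count}/{total_students} = {pct:.0f}%")
# Prints 84%, 62% and 38% -- matching the figures reported above.
```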
My findings were consistent with larger national studies that show college-aged young adults are at higher risk for CVD than they may realize.
Further, my study results indicated that college men may be at greater risk for future heart disease than college women. This is a bit surprising, as Colorado has been identified by the Centers for Disease Control and Prevention (CDC) as one of the healthiest states in the nation. In one national study, Colorado was ranked seventh in overall health. The same ranking reported that Colorado residents had the second highest level of physical activity.
However, it’s important to remember that my data was from college students, and their lifestyles tend to not be well-balanced.
Men more likely to use tobacco
I found the difference in CVD risk factors between genders in my study to be alarming, even though it is consistent with previous findings.
Male students had statistically significantly lower levels of the “good” cholesterol (HDL). They also had elevated blood glucose levels and elevated systolic and diastolic blood pressures when compared to female students. Males were also much more likely than female students to use cigarettes, e-cigarettes, cigars and smokeless tobacco, and to consume more beer.
The only form of a tobacco product that did not show a statistically significant difference between genders was hookah, a water pipe used to smoke specially flavored tobacco or cannabis. Nationally, hookah bars have increased in popularity in recent years, with some small studies showing that as many as 22 to 40 percent of college students used hookah in the past year.
The flavored tobacco and pipe delivery system leads some to believe that smoking tobacco from hookahs is safe. The CDC and health experts are not convinced. Compared to cigarette smoking, hookah smoking involves deeper inhalations, which are held for a period of time before exhalation.
Ethnicity, class and state differences
I also compared risk factors between ethnic groups. White students were more likely to use hookah and smokeless tobacco than non-white students.
And I saw differences in risk across class rankings: Students’ systolic blood pressure was found to increase as they progressed through college, with upperclassmen showing much higher readings than freshmen. Likely contributing factors include decreases in activity or exercise, more sitting time, weight fluctuations, chronic academic, financial and social stress, and nicotine and alcohol use.
Finally, I compared the risk factors I identified in CSU data to data from the National College Health Assessment. CSU students were less likely to have elevated blood pressure or to be obese, and more likely to rate their general health as “excellent” or “very good” compared to college students throughout the nation.
However, hookah use stood out again, as the CSU sample students showed a greater use of hookah than college students throughout the nation.
Perception versus reality
One of the questions I asked the students was whether they perceived themselves to be at risk for heart disease. Almost none of the CSU students perceived themselves as having an elevated cholesterol level, blood glucose level or blood pressure. When measured, however, a statistically significant number of students showed elevations in these variables. In short, the reality of having one or multiple heart disease risk factors was much higher than the perception of having them.
My study suggests that college students and their health care providers should be paying more attention to heart disease risk factors. As the number of CVD risk factors increases, so does the potential for clinical consequences of CVD, such as heart attack or stroke. Therefore, preventive steps, such as screenings, are very important. The American Heart Association has estimated, for example, that if major cardiovascular disease were eliminated, life expectancy would rise by about seven years.
Studies have shown that young adults, particularly young men, often overlook the risks of high blood pressure.
Heart disease is often referred to as the “silent killer,” as many risk factors have no signs and symptoms. Therefore, college students are often asymptomatic, leaving elevated CVD risk factors undetected for years. Meanwhile, chemical damage from a high total cholesterol level or mechanical damage from hypertension can be causing structural changes to the endothelial cells lining the arterial system.
It is apparent from these findings that undergraduate college students may be at greater risk of developing CVD risk factors – and subsequent CVD – than previously thought, and that they should be screened beginning at age 20, as health and medical experts recommend.
The findings of this study further support the need for a cardiovascular disease risk reduction program specifically designed for college students. Program components should include preventive screening, health promotion programs and health education targeted at reducing or eliminating CVD risk factors. Our department will launch such a program this fall in an effort to support college students’ health and well-being.
Wendy DeYoung does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
On Feb. 21, Billy Graham, the evangelical Christian minister who was widely regarded as “America’s pastor,” died at the age of 99.
Graham is best known for his global “crusades” – rallies that attracted crowds in the millions – and for the spiritual counsel he provided to American presidents for over a half-century. But what is less widely known is his contribution to religious language in American public life.
Americans before the mid-20th century were often ambivalent about religious language and images in public life. Graham helped change that reality.
Religion in American public discourse
Rhetoric linking the United States with a divine power, which Graham would later embrace, emerged on a large scale with the outbreak of the Civil War in 1861. M.R. Watkinson, a Pennsylvania clergyman, encouraged the placement of “In God We Trust” on coins at the war’s outset in order to help the North’s cause. Such language, Watkinson wrote, would “place us openly under the divine protection.”
In 1864, with the Civil War still raging, a group supported by the North’s major Protestant denominations began advocating to change the preamble of the Constitution. The proposed language would have declared that Americans recognized “Almighty God as the source of all authority and power in civil government.”
Had the amendment’s supporters succeeded, Christian belief would have been deeply embedded in the United States government.
But, such invocations of God in national politics were not to last. Despite lobbying by major Protestant denominations such as the Methodists, this so-called Sovereignty of God amendment was never ratified.
Though “In God We Trust” was added to coins, it was not added to the increasingly common paper money. In fact, when coins were redesigned late in the 19th century, it disappeared from coins as well.
As I demonstrate in my book, these developments were related to the spread of secularism in the post-Civil War U.S. For many people at the time, placing religious language in the Constitution or on symbols of government was not consistent with American ideals.
Graham’s influence on religious politics
In the 1950s, however, religious language found its way into government and politics, due in no small part to Billy Graham.
In 1953, at the strong encouragement of Graham, President Dwight Eisenhower held the first National Prayer Breakfast, an event that brings together political, military and corporate leaders in Washington, D.C., usually on the first Thursday of February.
In the following years, Eisenhower signed a bill placing the phrase “In God We Trust” on all American currency and the phrase was adopted as the first official motto of the United States.
Both of these developments reflected the desire to emphasize Americans’ religious commitment in the early years of the Cold War. Historians such as Jonathan Herzog have chronicled how leaders such as Eisenhower and Graham stressed the strong faith of the nation in setting the U.S. apart from the godlessness of Soviet communism. But, there were domestic concerns as well. Princeton University historian Kevin Kruse has shown that religious language was not merely rhetoric against communism.
Indeed, this belief in American religiosity had emerged over several decades. Conservative businessmen had allied with ministers and evangelical leaders such as Billy Graham to combat the social welfare policies and government expansion that began with Franklin Roosevelt’s New Deal. These wide-ranging programs, designed to tackle the Great Depression, irked many conservatives, who objected to government intervention in business and to Roosevelt’s support for labor unions.
As Kruse notes, this alliance of conservative business leaders and ministers linked “faith, freedom, and free enterprise.”
To be sure, Billy Graham was not singularly responsible for all of these developments. But as his biographers have noted, he loomed large in the religious politics of the 1950s.
The prevalence of religious language in U.S. politics that Graham helped inspire continues to this day. Indeed, the Trump administration has been particularly swift to employ it.
In his address to the National Prayer Breakfast on Feb. 8, President Donald Trump emphasized the centrality of faith in American life. After describing the country as a “nation of believers,” Trump declared that “our rights are not given to us by man” but “come from our Creator.”
These remarks came a week after Trump linked religion with American identity in his first State of the Union address. On Jan. 30, he similarly invoked “In God We Trust” while proclaiming an “American way” in which “faith and family, not government and bureaucracy, are the center of the American life.”
Trump’s language captured the linking of faith and public life that Graham encouraged as he rose to fame nearly 70 years ago.
This is an updated version of an article originally published on Feb. 2, 2018.
David Mislin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
At the beginning of each school year, before the students arrived, teachers from every school in the Atlanta Public Schools district were placed on school buses and taken to the old Georgia Dome.
We were not organized by alphabetical order, or even by elementary, middle or high school. Instead, all schools were organized by their test scores. The better your test results for the previous year, the closer you sat to Beverly Hall, the former superintendent of the district, who died in 2015. My experience with this event began in 2005, when I started my job as a high school English teacher in the district.
For three hours, Superintendent Hall and the principals took pictures together in front of large cardboard checks that showed how much bonus money each teacher and staff member would receive. Every school that got a check had supposedly shown that a percentage of its students demonstrated competency in one or more areas. These learning goals – or targets, as teachers and school leaders refer to them – were issued by district personnel and were not negotiable.
Little did I know, the festivities at the Georgia Dome were essentially an awards ceremony for what in reality was a massive and systemic cheating scandal.
Soon, teachers at the high school where I taught English began to question how it was that every single year, the elementary schools that feed directly into our high school supposedly met and exceeded their target goals in every discipline, yet by the time the students in those schools got to us, they could barely read and write.
I never once thought the answer to that question would be cheating.
Scandals keep coming
As a professor of education, I believe my experience in Atlanta is an important one to recall, especially in light of the recent graduation scandal in Washington, D.C., where hundreds of students were permitted to graduate despite missing large chunks of school, as well as a grade-fixing scandal in Prince George’s County, Maryland.
Why do these types of education scandals keep happening? There is no excuse for educational leaders to fake academic success. But in an effort to understand why certain school leaders resort to falsifying data, we need to examine what creates and sustains environments where educators feel induced to cheat, so that we can be preventive rather than reactive.
Fear of being branded as an underperformer
A number of forces create environments where cheating seems a viable option to some. First, the anxiety around school report cards and being labeled as a deficient or underperforming school is real. No school or district wants the label. Parents don’t want to send their children to schools with poor test scores. Corporations don’t want to relocate to places where their employees can’t send their children to school.
Second, since the inception of federal initiatives such as “No Child Left Behind” and “Race to the Top,” teacher and administrative evaluations and financial compensation have been tied to test scores, even though the research states that incentive-based compensation systems have no impact on student achievement. Even more dangerous is that financial compensation and pressure can shift motivation. In my opinion, it is no coincidence that the District of Columbia Public Schools system, where the recent graduation scandal unfolded, happens to be one in which bonuses and job security are tied to annual teacher assessments.
Third, school administrators and teachers at failing schools face job insecurity and are more likely to be observed and evaluated by local, state and federal personnel. During the era of “No Child Left Behind,” between 2002 and 2015, I had anywhere from one to five different observers in my classroom at any given time, and those observations could last for the entire school year.
Notably, during my time as a schoolteacher in Atlanta, my school hired four different principals in six years. The turnover continues to this day. Teachers at the school were always under the threat of “restructuring” and having to reapply for our jobs. As soon as we were able to regroup and plan how to reach the students in our community, our leadership was stripped away. In addition, almost every year there was a new set of state standards or district initiatives by which we were supposed to abide.
Every student impacted by grade inflation or fake attendance reports will feel the impact long after graduation.
According to a 2015 Georgia State University report, many of the students whose test scores were falsified continued to perform poorly in reading and language arts assessments. These same students, in order to graduate from high school, were later required to pass five subject tests. Some did not pass despite repeated attempts. I know this, because I personally tutored dozens of students at the high school where I worked in an effort to help them pass the test, and ultimately they did not.
Better ways to gauge academic success
During the Obama administration, the U.S. Department of Education took a small step in the right direction when it suggested a plan that states can use to reduce the amount of unnecessary testing. I would argue that more emphasis should also be placed on the use of authentic assessments – that is, testing students on what they were actually taught – at the classroom level.
School district leaders and policymakers should also seek to revamp how teachers are evaluated so that teacher evaluations are not tied to how students performed on one test on one day, but rather how much of an academic gain the students made over time.
Prospective teachers and school leaders should continue to look at the deeper meanings of teaching and learning rather than relying disproportionately on numbers. This kind of reflection will enable school leaders to shift the focus of children’s education beyond metrics and data.
Stephanie Jones does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
In late December 2017, the Georgia Public Service Commission faced a major decision: whether to cancel construction of two nuclear power reactors at Plant Vogtle, near Waynesboro, which had been plagued by delays and escalating costs.
Earlier in the year, utilities in South Carolina abandoned work on two reactors that also were behind schedule and over budget. Vogtle is now the only large-scale nuclear construction underway in the United States.
Georgia regulators voted unanimously to allow construction at Vogtle to continue. But they also increased the financial exposure of Georgia Power, the largest investor in the plant, by reducing costs that the company would be allowed to charge to its customers by US$1.7 billion.
Nuclear advocates called the vote a win for the economy and the environment. In reality, however, it says more about the challenges facing the nuclear industry in the 21st century. I have worked for years on proposals to mitigate climate change, first in the U.S. Senate and now at Duke University. If nuclear power is to be part of a U.S. climate change strategy over the next century, policymakers must address its increasingly precarious economics.
Struggling to compete
Many experts predict that Vogtle will be the last traditional light-water reactor commissioned in the United States. The challenge is largely economic. According to the Department of Energy, the cost of generating electricity from newly constructed nuclear plants is almost double the cost for power from a new natural gas combined-cycle plant – the highly efficient type that utilities are most commonly building now.
Natural gas combined-cycle plants aren’t just outcompeting nuclear power based on price. They also give power system operators flexibility to adjust quickly to the ebbs and flows of intermittent renewable sources, such as wind and solar power. Nuclear plants are designed to run more than 90 percent of the time, but can’t ramp up or down on short notice.
It is hard to make a business case for building new nuclear plants, even in regulated states like Georgia and South Carolina where utilities are allowed to recover construction costs from their customers. In deregulated Northeast and Midwest power markets, where generators compete to deliver electricity at the lowest cost, no new nuclear unit has been permitted for construction since 1977.
How essential is nuclear power?
Although the Trump administration is slowing or reversing its predecessor’s climate change policies, recent polls show that a majority of Americans believe climate change is caused mainly by human activities and want the government to take action to slow it. State regulators are starting to factor future damages from carbon emissions into energy decisions.
Today about 20 percent of U.S. electric power, and 60 percent of our zero-carbon electricity, comes from nuclear generation. Nearly half of U.S. nuclear plants are at or near the end of their 40-year licensed operating lives. These units have received 20-year license extensions, but starting around 2030 they will reach their 60-year limits. At this point, they must receive a second license extension or retire.
The owners of at least three units plan to pursue second renewals. But others will not want to contend with the economic and engineering challenges of running reactors into their eighth decade. Some may be denied new licenses due to age-related safety concerns.
Many analyses suggest that nuclear generation is essential for reducing U.S. carbon emissions. In late 2016, the Obama administration published a Mid-Century Strategy for Deep Decarbonization, designed to reduce U.S. greenhouse gas emissions 80 percent or more below 2005 levels by 2050. Every scenario called for expanding nuclear power. A 2016 study by the Rhodium Group, an international consulting company, projected that if all “at risk” U.S. nuclear plants retire by 2030, greenhouse gas emissions from the U.S. power sector will double from 2020 to 2030.
My colleagues at Duke University’s Nicholas Institute for Environmental Policy Solutions have examined the potential impact of nuclear plant retirements in the Southeast, where nuclear plants provide about one-quarter of the region’s electricity. They concluded that utilities should explore how the potential loss of these plants interacts with other regional challenges, and that southeastern states should start planning now for the potential loss of their largest carbon-free generation source.
The case for pricing carbon emissions
What’s the best way to resolve this tension between nuclear power’s failing market prospects and its importance to U.S. climate strategy? The Vogtle decision offers some lessons.
By moving the project forward, Georgia regulators explicitly recognized the importance of fuel diversity and the long-term benefits of reliability and carbon-free generation. But they also conditioned approval on continuation of federal tax credits for generating power from new nuclear plants, which were extended in a budget bill enacted in early February. In effect, they recognized that nuclear offers societal benefits worth underwriting.
This step is critical because new nuclear power plants are unlikely to be built without some mechanism for monetizing social benefits from carbon-free generation. In Georgia, regulators decided to socialize the cost by allowing Georgia Power to pass some (although not all) construction expenses on to its customers, and to assume that the nation’s taxpayers would share some of the cost through the extension of federal tax credits.
It could be cleaner and more direct to develop a market-based mechanism that puts a price on the carbon emissions that nuclear power avoids. Such a policy would typically require utilities to either buy permits to emit carbon or pay a tax on carbon emissions. This approach would reward all low-carbon energy sources – and if it were sufficiently robust, it could change the economics of nuclear power.
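To see why a carbon price could shift those economics, consider a minimal back-of-the-envelope sketch in Python. All inputs below are illustrative assumptions chosen for the example, not figures from this article or the Department of Energy:

```python
# Back-of-the-envelope: how a carbon price changes the gas-vs.-nuclear
# cost comparison. All numbers below are illustrative assumptions.
GAS_COST = 50.0        # assumed cost of gas combined-cycle power, $/MWh
NUCLEAR_COST = 95.0    # assumed cost of new nuclear power, $/MWh
GAS_EMISSIONS = 0.37   # assumed gas-plant emissions, tons CO2 per MWh

def gas_cost_with_carbon_price(price_per_ton: float) -> float:
    """Gas generation cost once emissions are priced; nuclear emits no CO2."""
    return GAS_COST + GAS_EMISSIONS * price_per_ton

for carbon_price in (0, 50, 100, 150):  # $/ton CO2
    adjusted = gas_cost_with_carbon_price(carbon_price)
    cheaper = "nuclear" if NUCLEAR_COST < adjusted else "gas"
    print(f"${carbon_price}/ton: gas at ${adjusted:.0f}/MWh -> {cheaper} wins")

# Under these assumptions the break-even carbon price is
# (95 - 50) / 0.37, or roughly $120 per ton of CO2.
```

The point of the sketch is simply that a zero-carbon source gains ground as emissions get more expensive; the actual break-even price depends entirely on the real cost and emissions figures.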
Illinois and New York are taking a narrower approach that requires utilities to produce specific fractions of the electricity they supply from zero-carbon sources. This strategy ensures a market need for some zero-emitting power. New Jersey lawmakers are debating a bill that would provide economic credit to utilities for generating carbon-free electricity from nuclear plants.
Another strategy could be to aggressively promote mature new nuclear reactor designs that could take up some demand currently met by retiring plants. New, more flexible small modular reactors could be competitive in the modern marketplace, but are still under development. Helping these technologies mature at a pace that might fill the coming void from looming nuclear retirements would require aggressive efforts to promote their demonstration and deployment. Recent studies suggest that with such support, these new designs can become competitive in the market in the coming years.
Controversy over completing Vogtle demonstrates how difficult the current environment is for nuclear power. It will require proactive and aggressive strategies to maintain nuclear power’s role in the electric grid and avoid opening a gaping hole in U.S. climate change strategy.
Tim Profeta served from 2000 to 2005 as Counsel for the Environment to U.S. Senator Joseph I. Lieberman, where he was deeply involved in efforts to enact national legislation addressing climate change. He is Chairman Emeritus of 8 Rivers Capital in Durham, N.C., where he was Chair from 2010 to 2017, and has served on the Board of the Climate Action Reserve since 2012.
On Valentine’s Day, 19-year-old Nikolas Cruz opened fire at Marjory Stoneman Douglas High School in Parkland, Florida. He killed 17 students and teachers and injured at least a dozen others. The Parkland shooting is currently the ninth deadliest single-day mass shooting on U.S. soil.
Like other recent mass shootings, the events in Parkland were quickly followed by a public outcry for increased gun control. On Feb. 19, Teens for Gun Reform hosted a “lie-in” in front of the White House to demand tougher gun laws. Others gathered in protest outside of the National Rifle Association headquarters on Feb. 16. Speaking at that event, Rep. Gerald Connolly, D-Va., argued for an assault weapons ban, universal background checks and closing gun show purchasing loopholes.
Florida legislators are currently drafting a bill that would raise the minimum age for purchasing an assault rifle to 21 and impose a three-day waiting period on purchases. President Trump has called for regulations on so-called bump stocks – devices, used in the 2017 Las Vegas shooting, that allow semiautomatic weapons to fire at nearly automatic rates. But will these laws prevent another mass shooting? Is there a better policy option?
Unfortunately, the research we need to answer these questions doesn’t exist – and part of the problem is that the federal government largely doesn’t support it.
1. Why do we need research about guns?
Gun violence is a public health issue. It’s a leading cause of premature death in the United States, killing more people each year than diseases like HIV, hypertension and viral hepatitis.
While violent crime has generally been on the decline since the mid-1990s, the latest reports from the FBI suggest crime rates may be starting to increase. Gun crime has been a persistent problem. According to the Centers for Disease Control and Prevention, 33,594 individuals were killed by firearms in 2014 alone – only about 200 fewer than the number of people killed in motor vehicle accidents. In 2015, roughly 85,000 people were injured by firearms, including nearly 10,000 children.
In order to prevent gun injuries and deaths, we need accurate information about how they occur and why. While police reports and FBI data can provide some detail, they don’t include the thousands of cases that go unreported each year. Between 2006 and 2010, the Bureau of Justice Statistics estimated that more than a third of victims of crimes involving a firearm did not report the crime to police. The National Crime Victimization Survey, which collects victimization data from about 90,000 households each year, helps to fill in this gap. However, even this survey has its drawbacks. It doesn’t collect data from youth younger than 12, it doesn’t include murder, and it doesn’t help us fully understand the offender’s motivations and beliefs.
Social scientists like me need more research in order to get the level of detail we need about gun crime. There’s just one major roadblock: The federal government won’t fund it.
2. How much federal money is there?
In 1996, Congress passed the Dickey amendment. The legislation stated that “none of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control.” While that wording did not ban CDC gun research outright, the legislation was accompanied by a US$2.6 million budget cut. That amount happened to match the amount the CDC had spent on firearms research the previous year. The message was clear. From 1996 to 2013, CDC funding for gun research dropped by 96 percent.
The CDC wasn’t the only federal agency affected. In 2011, Congress added a similar clause to legislation that regulated funding for the National Institutes of Health. However, due to a directive from the Obama administration, the NIH continued to provide funding for gun research. That push faded as the Obama administration left office.
Earlier this year, the NIH discontinued its funding program that specifically focused on firearm violence. While firearms researchers can still apply for funding through more general NIH funding opportunities, critics say that makes funding for gun research less likely.
3. What prompted these funding restrictions?
The Dickey amendment was passed after a CDC-funded study, led by physician and epidemiologist Arthur Kellerman, found that having a gun in the home increased homicide risk. After the results were published, the National Rifle Association pressured lawmakers, arguing that the CDC was inappropriately using its funds to advocate for gun control.
Opposition from the NRA is serious business for lawmakers. The NRA is one of the most powerful special interest lobbying organizations in the U.S. In 2014 alone, the NRA spent more than $3.3 million on lobbying activities – things like meeting with politicians, drafting model legislation and advertising.
The NRA also spends additional millions to advocate or oppose political candidates. In 2016, the NRA spent nearly $20 million on efforts opposing Hillary Clinton and nearly $10 million on efforts supporting Donald Trump.
Not surprisingly, the NRA has successfully blocked gun control legislation in the past, including renewal of the federal assault weapons ban when it expired in 2004.
4. Can state or private sector dollars fill the gap?
Another potential option for research is to seek out funding from private agencies or philanthropists. But few of these opportunities are available.
Private funding is also somewhat risky for researchers. If a funder has a political leaning on gun-related issues, the researcher may be under pressure to produce the “right” results. Even just the implication that a researcher could have a conflict of interest can undermine a study’s results and perceived legitimacy.
State funding may be another option. In 2016, California announced its intent to fund the University of California Firearm Violence Research Center. This is the first time a state has stepped forward to fund a research center focused on guns. California remains the only state to take this step.
5. Has gun research stopped?
The lack of funding has discouraged firearms research. Many researchers are employed within academia. In this publish-or-perish environment, researchers are under pressure to publish their work in academic journals and to fund it through sources beyond their home institution. Without outside funding, their research often isn’t possible. Leading firearms researcher Garen Wintemute says “no more than a dozen active, experienced investigators in the United States have focused their careers primarily on firearm violence.”
Lack of funding leaves some researchers, like myself, limited to small-scale studies with a low budget. The problem with studies like these is that they are often based on samples that are not nationally representative. That means we can’t generalize from the findings or address all the questions we might have.
Without increased funding for gun research, it will be extremely difficult for researchers to provide accurate answers to the gun policy questions currently under debate.
6. Does Parkland change the conversation?
A Washington Post-ABC News poll conducted after the Parkland shooting found that 77 percent of Americans felt that Congress was not doing enough to prevent mass shootings. Health and Human Services Secretary Alex Azar stated that he and those he represents will “be proactive on the research initiative” regarding guns. And some elected officials are saying it’s time to get rid of the Dickey amendment. Public pressure and support from those in office may be enough to make more gun research possible.
This is an updated version of an article originally published on Oct. 18, 2017.
Lacey Wallace does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
But those years were fraught with other kinds of trouble: Security breaches and electronic espionage affected nearly every adult in the U.S., along with the power grid in Ukraine and the 2016 U.S. presidential campaign, to name a few. As a scholar of cybersecurity policy, I think it’s time that my own industry took some lessons from one of the safest high-tech transportation methods of the 21st century.
Like today in cybersecurity, the early days of U.S. air travel weren’t regulated particularly closely. And there were a huge number of accidents. Only after public tragedies struck did changes occur. In 1931, a plane crash in Kansas killed legendary Notre Dame football coach Knute Rockne. And in 1935, U.S. Sen. Bronson Cutting of New Mexico died in the Missouri crash of TWA flight 6. These events helped contribute to the 1938 creation of the first U.S. Air Safety Board. But it took until 1967 for the new Department of Transportation to be created with an independent National Transportation Safety Board.
Since then, the NTSB has rigorously investigated all airplane crashes and other transportation incidents in the U.S. Its public reports about its findings have informed changes in government regulations, corporate policies and manufacturing standards, making air travel safer in the U.S. and around the world.
As cybersecurity incidents proliferate around the country and the globe, businesses, government agencies and the public shouldn’t wait for an inevitable disaster before investigating, understanding and preventing these failures. Nearly a century after the original Air Commerce Act in 1926, calls, including my own, are mounting for the information industry to take a page from aviation and create a cybersecurity safety board.
The flight plan to safer skies
The National Transportation Safety Board was the first independent agency charged with investigating the safety of various transportation systems, from highways and pipelines to railroads and airplanes. Since 1967, the NTSB has investigated more than 130,000 accidents.
These investigations are vital since they help establish “the who, what, where, when, how and [perhaps] why behind an incident.” After the facts are determined, policymakers can back up, and often have backed up, NTSB recommendations with new regulations. Failing that, it is common for air carriers, for example, to voluntarily implement the changes the board suggests. A similar approach could help improve the internet, a new technology that, like airplanes, is tying the world closer together even as it threatens our shared security.
The case for a cybersecurity safety board
Two elements of the NTSB may be particularly useful for enhancing cybersecurity. First, it separates fact-finding proceedings from any questions of legal liability. Second, these investigations are broad, involving various stakeholders like manufacturers and airline companies. Cyberspace is similarly made up of a wide range of companies and technologies.
A cybersecurity safety board need not in fact be national. It could begin from the bottom up, with companies partnering together to protect their customers by sharing best practices.
Critics of establishing a cybersecurity safety board would likely contend that the speed at which technologies change makes it difficult for any recommendations, even if they were quickly implemented, to sufficiently protect organizations from cyber attacks. NTSB investigations can take a year or more; to ensure findings were still relevant, cybersecurity inquiries would need to be faster, such as by streamlining cyberforensics and relying on widely used tools such as the National Institute for Standards and Technology Cybersecurity Framework.
Other challenges include standardizing terminology across the industry and identifying the right experts to look into data breaches, which might be easier said than done given the talent shortage among cybersecurity professionals. Broad-based cybersecurity educational programs, like a new partnership between the law, business and computer science schools here at Indiana University, should be encouraged to help address this shortfall.
A path forward
Additional measures would likely be required to make a cybersecurity safety board successful, such as launching investigations only for serious breaches like those involving critical infrastructure.
More nations and regions – including the European Union – are imposing stringent requirements on companies that suffer data breaches, including mandatory reporting of cyberattacks within 72 hours and more rigorous preventive measures. Businesses, governments and scholars around the world are working on how to improve data security. If they came together to support a global network of cybersecurity safety boards, their efforts could promote cyberpeace for people and institutions alike.
All that is needed is the will to act, the desire to experiment with new models of cybersecurity governance and the recognition that we should learn from history. As President Franklin D. Roosevelt famously said, “It is common sense to take a method and try it: If it fails, admit it frankly and try another. But above all, try something.”
Scott Shackelford does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Octopuses have long arms and plenty of smarts, but they don’t point. Nor do chimps, gorillas or other apes, at least not in the wild.
Humans, on the other hand, are prodigious pointers. Infants use the gesture before they can talk, often around 1 year of age. By 2, they’ll waddle around, their forefingers sweeping over the world like searchlights.
Pointing seems to be in our nature: When people want to draw attention to something, we instinctively extend an index finger. This gesture has been observed across the globe, suggesting that it’s a universal human impulse, perhaps like yawning or laughing.
But research my collaborators and I recently published shows that pointing is not simply a matter of human nature. How we point is also a matter of culture. These findings suggest that cognitive scientists still have a lot to learn from other cultures about why humans behave in the ways that they do.
A scrunch and a glance
In 2009, my collaborator, Rafael Núñez, and I joined a fieldwork project in Papua New Guinea’s remote interior. The goal was to study the language and culture of the Yupno, an indigenous group of some 8,000 people.
While conducting our interviews, we noticed a distinct way the Yupno would point: They would scrunch their noses while looking toward wherever they wanted to direct your attention. To outsiders, it could easily look like an expression of disgust. But there’s nothing negative about it.
This “nose-pointing” gesture, it turned out, was essentially undocumented. After we returned to the U.S., we published some preliminary observations using examples from our videos. But the study left a bunch of questions unanswered. One in particular kept popping up: Was facial pointing just an infrequent quirk, or did the Yupno use it as much – or even more than – hand-pointing? We didn’t have a good answer.
Other researchers had previously contested the significance of facial pointing. One dismissed it as merely an “occasional alternative” to pointing with the hand. Others held that, in indigenous cultures such as the Cuna of Panama or the Pirahã of Brazil, pointing with the face is actually preferred over pointing with the hand.
None of these claims, however, were supported by systematically collected evidence. So when we returned to the Yupno valley – now joined by another collaborator, James Slotta – we set out to document Yupno pointing more rigorously and, in the process, to weigh in on a bigger question: Do humans universally prefer to point with their hands?
We devised a simple communication game that’s played in pairs. One person sits down with five square cloths on the ground in front of them, forming a plus sign. Off to one side is a tray with a number of small, colorful objects on it – beanbags, cylinders and cubes. The person is shown a photo with eight of the objects arranged on the square cloths in a particular way. Their task is to tell their partner how to arrange the objects to match the photo. The instructions don’t mention pointing. It’s assumed that the players will spontaneously point to instruct their partner.
We played this game with 16 Yupno adults and then, later, 16 undergraduates in California. The Yupno and Americans pointed at about the same rate. But how they pointed was a different story. As you might guess, the Americans almost always used their hands – 95 percent of the time, in fact. But the Yupno participants used their hands much less – only 34 percent of the time. The rest of the time they pointed with the scrunched nose gesture or just a toss of the head.
In the Yupno, at least, pointing with the face is not just an “occasional alternative.” Our experiment shows it’s how they respond to the impulse to point.
Why do the Yupno point like this? We don’t know the full answer yet.
The Yupno do place a high value on discreet communication, so it could be that they use facial pointing because it’s less conspicuous than finger-pointing. Or it could have to do with the Yupno language, which boasts an unusually large set of demonstratives. (Words like “this” and “that” are often used along with pointing.) Or it could simply be that Yupno people’s hands are so often occupied with everyday tasks that they’ve just gotten used to pointing without them.
Even in American culture, pointing doesn’t always take the same form. When our hands are full and we need to point, we’ll jerk our head; when trying to be discreet, we’ll use our eyes to cast a gaze.
Other cultures have more conspicuous – and, to Western eyes, more colorful – conventions for pointing without the hands. In parts of South America, Asia and Africa, it’s common to point with the lips. To do this, you purse, protrude or pout your lips, while looking at whatever it is you want to direct attention to.
Whenever a study finds that another culture does something differently from us, it is natural to ask why “they” do what they do. But cross-cultural findings also often raise questions about why “we” do what we do. Our research is no exception. Facial pointing in one form or another is totally commonplace in indigenous communities all over the world. But it’s utterly absent from major metropolises. You don’t see Brooklynites scrunching their noses to point or Londoners lip-pointing. Why not? Future work may well reveal something that explains our culture’s penchant for the index finger.
Our study is hardly the first to encourage a rethinking of what is “natural.” Cognitive scientists once assumed that people everywhere favor the words “left” and “right” over “east” and “west” when talking about space. They also thought humans were universally good at counting and bad at describing smells.
Not any more. In each case, we now know these behaviors have their roots in both nature and culture. Our bodies and brains certainly set broad bounds on our behaviors. But we’re still figuring out where those bounds are.
And that’s where cross-cultural research like ours steps in. If we want to understand where our behaviors come from, we have to ask how those behaviors vary from one group to the next. So often – even in the case of our most familiar behaviors – there’s more to the answer than we would have guessed.
This research was supported by a grant from the National Geographic Society.
Free traders have vilified President Donald Trump as a pernicious protectionist because of policies such as hiking tariffs, abandoning the Trans-Pacific Partnership and saying he’s prepared to walk away from the North American Free Trade Agreement.
They fear his policies will hurt the U.S. economy by restricting access to foreign goods. But are these policies really so radically different from past administrations?
Absolutely not. The fact is the U.S. has never been a truly free trade country – one with virtually no barriers to trade with other nations – as some people seem to think. The idea that the U.S. ever was is a myth.
This is among the topics I’ve been exploring for an upcoming book, titled “The Rise of the Guardian State.” My research shows the reason both Republicans and Democrats have pursued protectionist policies – despite their rhetoric – is part of the fabric of the American democratic political process.
The ‘guardian state’
Long before the U.S. became the so-called defender of the liberal economic world order after World War II, it had become a guardian of another kind.
U.S. policymakers’ free trade rhetoric has always been tempered by what I call a “guardian” mentality intended to shelter domestic industries and workers from the full impact of globalization and open trade. In other words, the U.S. government tends to talk a lot about free trade but then maintains a protective layer of trade barriers across the economic landscape.
A paper I published in 2000 showed how this is a natural outgrowth of democracy and the granting of suffrage to more people. As the masses gained greater political power at the turn of the 19th century, politicians faced greater pressure to protect their constituents from the vicissitudes of trade.
For example, at the ballot box, the issues of economic growth and unemployment have often been critical to outcomes. In fact, in every presidential election since World War II, economic issues have figured prominently if not centrally.
And on a broader level, special interests such as unions and business groups brought their political weight to bear at the doors of lawmakers to protect their members.
An axis of guarded trade
Hence, what we have perceived in American history as a partisan schizophrenia in trade policy – between the so-called Republican free traders and the Democratic economic nationalists – has usually been nothing more than moderate shifts back and forth along an axis of guarded trade.
Even the pro-free trade Republican administrations of Presidents Ronald Reagan and George W. Bush promoted significant barriers to trade. For example, Reagan pushed Japan to unilaterally limit the number of automobiles it exported to the U.S., while Bush erected tariffs against foreign steel.
And in the 2016 presidential election, there was scant fundamental difference between the two main candidates’ trade policies. Both Hillary Clinton and Trump, notwithstanding some minor differences on trade, questioned American support of multilateral trade agreements and engaged in worker-centric and populist rhetoric on globalization.
Trade’s costs and benefits
But if America could get very close to a free trade policy, would that be a good thing?
No major country has ever been a purely free trader in modern history. My research for an article published in 1985 demonstrates that Great Britain came closest from 1860 to World War I, when the country eliminated virtually all tariffs.
Among major nations in the post-World War II period, the U.S. has been closest to the free trade pole. But as noted above, with a plethora of tariffs and quotas on foreign goods, the U.S. is still some distance away from late 19th-century Britain’s free trading ways.
We know trade carries great benefits, as is clear from the fact that the most prosperous nations today embrace trade as a vehicle to greater wealth. But trade concomitantly generates costs.
While the benefits of freer trade are spread over society as a whole in the form of rising real incomes and access to superior products, some localities experience costs that severely plague specific groups. The “destructive” part of “creative destruction” – coined by political economist Joseph Schumpeter to characterize capitalist competition – is synonymous with industries failing and their workers losing jobs.
While in theory such dislocation can be overcome over time by people migrating to more competitive industries and wealthier regions, in the short run it is devastating for families that are less mobile than others. And in fact people are far less mobile than liberal theorists like to contemplate (especially older blue-collar and unskilled workers).
Indeed, an overwhelming amount of research suggests that the theories upon which free trade is based often fail quite significantly in the face of reality.
And that’s where protective barriers come in. They guard these groups from the economic dislocation of unrestricted competition across national boundaries. This renders a capitalist society more tolerable.
‘Nothing in excess’
That being said, going too far in a protectionist direction is surely as devastating as, if not more devastating than, a world of purely free trade. As in so many other dimensions of human life, the famous Greek aphorism “nothing in excess” rings true.
The present article is not an attempt to paint Trump as mainstream on trade policy in any way. Indeed, he has pursued a very aggressive protectionist agenda, even when measured against the most protectionist Democratic administrations.
But I do wish to suggest that the debate over trade in American history is not as bipolar as most believe, and that the differences between Trump’s and past administrations’ trade policies are more a matter of degree than of kind.
And if trade goes along the lines of Trump’s other political priorities, we may in fact see that U.S. trade practices will not change as significantly as many believe. His protectionist bark is likely bigger than his bite.
Giulio Gallarotti does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Every vote counts. It’s the key principle underlying democracy. Through the history of democratic elections, people have created many safeguards to ensure votes are cast and counted fairly: paper ballots, curtains around voting booths, locked ballot boxes, supervised counting, provisions for recounting and more.
With the advent of computer technology has come the prospect of faster counting of votes, and even, some hope, more secure and accurate voting. But the internet has also enabled hackers to attack voting systems and has given disinformation campaigns new tools to influence public opinion. Here are highlights of The Conversation’s coverage of these issues.
1. Voting machines are old
After the vote-counting debacle of the 2000 election, the federal government handed out massive amounts of money to the states to buy newer voting equipment that, everyone hoped, would avoid a repeat of the “hanging chad” mess. But almost two decades later, as Lawrence Norden and Christopher Famighetti at the Brennan Center for Justice at New York University explain, that one-time cash infusion has left a troubling legacy of aging voting machines:
“Imagine you went to your basement and dusted off the laptop or mobile phone that you used in 2002. What would happen if you tried to turn it on?”
That’s the machinery U.S. democracy depends on.
2. Not everyone can use the devices
Most voting machines don’t make accommodations for people with physical disabilities that affect how they vote. Juan Gilbert at the University of Florida quantifies the problem:
“In the 2012 presidential election, … The turnout rate for voters with disabilities was 5.7 percent lower than for people without disabilities. If voters with disabilities had voted at the same rate as those without a disability, there would have been three million more voters weighing in on issues of local, state and national significance.”
To date, most efforts to solve the problems have involved using special voting equipment just for people with particular disabilities. That’s expensive and inefficient – and remember, separate is not equal. Gilbert has invented an open-source (read: inexpensive) voting machine system that can be used by people with many different disabilities, as well as people without disabilities.
With the system, which has been tested and approved in several states, voters can cast their ballots using a keyboard, a joystick, physical buttons, a touchscreen or even their voice.
3. Machines are not secure
In part because of their age, nearly every voting machine in use is vulnerable to various sorts of cyberattacks. For years, researchers have documented ways to tamper with vote counts, and yet few machines have had their cyberdefenses upgraded.
The fact that the election system is so widespread – with multiple machines in every municipality nationwide – also makes it weaker, writes Richard Forno at the University of Maryland, Baltimore County: There are simply more opportunities for an attacker to find a way in.
“Voter registration and administration systems operated by state and national governments are at risk too. Hacks here could affect voter rosters and citizen databases. Failing to secure these systems and records could result in fraudulent information in the voter database that may lead to improper (or illegal) voter registrations and potentially the casting of fraudulent votes.”
4. Even without an attack, major concerns
Even if an attack never happens – or if nobody can prove one happened – public trust in elections is vulnerable to sore losers taking advantage of the fact that cyberweaknesses exist. Just that prospect could destabilize the country, argues Herbert Lin of Stanford University:
“State and local election officials can and should provide for paper backup of voting this (and every) November. But in the end, debunking claims of election rigging, electronically or otherwise, amounts to trying to prove something didn’t happen – it can’t be done.”
5. The Russians are a factor
American University historian Eric Lohr explains the centuries of experience Russia has in meddling in other countries’ affairs, but notes that the U.S. isn’t innocent itself:
“In fact, the U.S. has a long record of putting its finger on the scales in elections in other countries.”
Neither country is unique: Countries have attempted to influence each other’s domestic politics throughout history.
6. The real problems aren’t technological at all
In any case, the major threats to U.S. election integrity have to do with domestic policies governing how voting districts are designed, and who can vote.
Penn State technologist Sascha Meinrath discusses how partisan panels have “systematically drawn voting districts in ways that dilute the power of their opponent’s party,” and “chosen to systematically disenfranchise poor, minority and overwhelmingly Democratic-leaning constituencies.”
There’s plenty of work to be done.
Editors’ note: This is an updated version of an article originally published Oct. 18, 2016.
On Feb. 6, technology companies, educators and others mark Safer Internet Day and urge people to improve their online safety. Many scholars and academic researchers around the U.S. are studying aspects of cybersecurity and have identified ways people can help themselves stay safe online. Here are a few highlights from their work.
1. Passwords are a weakness
With all the advice to make passwords long, complex and unique – and not reused from site to site – remembering passwords becomes a problem, but there’s help, writes Elon University computer scientist Megan Squire:
“The average internet user has 19 different passwords. … Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”
That’s a good start.
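To make that concrete, here is a minimal sketch of the generation half of a password manager’s job, using Python’s standard-library secrets module. The function name and password length are illustrative choices, not any particular product’s code.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random, hard-to-crack password.

    Uses the cryptographically secure `secrets` module rather than
    `random`, whose output is predictable and unsafe for credentials.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site, so a breach at one site
# doesn't expose your accounts anywhere else.
print(generate_password())
```

The remembering half – storing those passwords in an encrypted vault – is what the software Squire describes layers on top.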
2. Use a physical key
To add another layer of protection, keep your most important accounts locked with an actual physical key, writes Penn State-Altoona information sciences and technology professor Jungwoo Ryoo:
“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. The chip itself contains a method of authenticating itself.”
Just don’t leave your keys on the table at home.
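For the technically curious, the idea behind these keys is a challenge-response protocol: the site sends random data, and the chip signs it with a private key that never leaves the device. Below is a highly simplified sketch of that idea in Python using the cryptography package – an illustration of the concept, not the actual FIDO/U2F protocol these products implement.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At registration: the device creates a keypair; the website stores
# only the public half. The private key never leaves the hardware.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# At login: the site sends a fresh random challenge...
challenge = os.urandom(32)

# ...and the device signs it, proving possession of the private key
# without ever revealing it.
signature = device_key.sign(challenge)

# The site verifies the signature against the registered public key;
# verify() raises InvalidSignature if the response is forged.
registered_public_key.verify(signature, challenge)
print("authenticated")
```

Because the secret itself never travels over the network, there is nothing for an eavesdropper to capture and replay – a key reason such devices are stronger than passwords alone.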
3. Protect your data in the cloud
Many people store documents, photos and even sensitive private information in cloud services like Google Drive, Dropbox and iCloud. That’s not always the safest practice because of where the data’s encryption keys are stored, explains computer scientist Haibin Zhang at the University of Maryland, Baltimore County:
“Just like regular keys, if someone else has them, they might be stolen or misused without the data owner knowing. And some services might have flaws in their security practices that leave users’ data vulnerable.”
So check with your provider, and consider where to best store your most important data.
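One way to keep control is to encrypt files yourself before they ever reach the cloud, so the provider never holds your key. Here is a minimal sketch using the Python cryptography package’s Fernet recipe; the sample data is made up, and this illustrates the idea rather than serving as a vetted tool.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generate a key and keep it yourself -- never store it with the data.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt locally, then upload only the ciphertext to Drive/Dropbox/iCloud.
plaintext = b"scan of my tax return"
ciphertext = f.encrypt(plaintext)

# Without the key, the provider (or anyone who breaches it) sees only noise.
assert f.decrypt(ciphertext) == plaintext
```

The trade-off, of course, is that if you lose the key, not even the provider can recover your files.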
4. Don’t forget about the rest of the world
Sadly, in the digital age, nowhere is truly safe. Jeremy Straub from North Dakota State University explains how physical objects can be used to hijack your smartphone:
“Attackers may find it very attractive to embed malicious software in the physical world, just waiting for unsuspecting people to scan it with a smartphone or a more specialized device. Hidden in plain sight, the malicious software becomes a sort of ‘sleeper agent’ that can avoid detection until it reaches its target.”
It’s a reminder that using the internet more safely isn’t just a one-day effort.
Editor’s note: This is a roundup of material from The Conversation’s coverage of the 2018 World Economic Forum.
The annual gathering of global elites known colloquially as Davos is over – in case you missed it. So what does it matter?
While the rare attendance of the president of the United States may have made the biggest splash, several thousand other leaders in business, politics, academia and, well, celebritydom ensured there was plenty of other star wattage at the World Economic Forum’s three-day meeting in the Alps.
A few heads of state auditioned to take the U.S. president’s place as “leader of the free world,” while other attendees assessed the risks likely to batter the world economy in coming months and years. And the parties this year – we hear – were epic.
To help readers pierce the luxury bubble that surrounded the Alpine town of Davos from Jan. 23-26, we’ve been asking experts to examine some of the key themes and speeches.
1. A Davos primer and reading list
Since it was President Donald Trump’s first time attending Davos, we thought he might like a primer.
To that end, University of St. Thomas professor of ethics and business law Christopher Michaelson, who has previously attended the meeting, offered a little history on the World Economic Forum and suggested three novels and a children’s book to help the president acclimate himself to the unique culture at Davos.
“If only Trump could get over his distaste for books and read them,” Michaelson wrote.
2. Inequality, dishwashers and Elton John
The main theme at Davos was “Creating a Shared Future in a Fractured World,” which put growing concerns about inequality front and center. While receiving an award, musician and philanthropist Elton John decried economic inequality as “disgraceful.”
Michele Gilman, a professor of law at the University of Baltimore, gave the Davos elite credit for grappling with inequality but cautioned the issue won’t be meaningfully addressed until the forum panels “include the voices and concerns of everyone who doesn’t get to go there.”
“Politicians have little incentive to tackle these problems because they are beholden to the interests of their donors,” she wrote. “Until the drivers and dishwashers toiling in the chalets at Davos – along with their working brothers and sisters across the globe – get the microphones and push the levers of policy, economic inequality is likely to persist and perhaps get worse.”
3. Macron makes his own splash
French President Emmanuel Macron, in his own special address at the forum, declared “France is back!” – in English – as he touted his government’s reforms meant to improve French productivity and competitiveness.
But when Macron switched to French, he made a very different argument, wrote University of Michigan history professor Joshua Cole, in which he decried growth at any cost and called for a new “global contract” to replace globalization and fend off nationalists. It’s a “compelling vision” that may be hard to achieve, Cole argued.
“For Macron this is the danger of the present moment: a relapse into a sterile nationalism that is incapable of addressing the real challenges posed by the present,” he wrote. “Macron’s hope is that a united Europe might play this mediating role. A democratic Europe might thread the needle between the unregulated capitalism often endorsed by the United States and the statist and anti-democratic model provided by China.”
4. A coal clash
During the speech, Macron also pledged to end his country’s use of coal within four years – in sharp contrast to his American counterpart’s express interest in pushing policies that favor the fossil fuel. Jay Zagorsky, an Ohio State University economist, explained why the rhetoric of both leaders won’t be able to change the laws of economics.
“Politicians can claim all they want that they are for or against coal in Davos’ forums, making grand promises about ending the use of the dirty fuel or declaring their plans to make it cheaper to use as a way to protect jobs,” he wrote. “No matter what they say in speeches, however, economic forces will inevitably dictate whether reality can match their words.”
5. Merkel looks back and forward
German Chancellor Angela Merkel, another candidate for leader of the free world, spoke about a digital divide separating generations of Germans. Elizabeth Heineman, of the University of Iowa, delved into the history that makes Germans more cautious than Americans about turning their data over to businesses or the state.
“Merkel’s – and Europe’s – quandary is this: how to move forward in the digital age when Europe’s contribution is to seek balance between state power, individual rights and the dynamism of capitalism,” she wrote.
In her speech, Merkel worried about Germany being steamrolled as it conducted “philosophical debates about the sovereignty of data.”
But Heineman argues that “achieving any balance means slowing things down. It means philosophizing.”
6. A currency crash
The Trump administration was already making headlines well before the president himself arrived in Davos on Jan. 25. Treasury Secretary Steve Mnuchin, for one, caused the dollar to nosedive after he broke with longstanding tradition and said a weak greenback would be “good” for the U.S.
Benjamin Cohen, professor of international political economy at the University of California, Santa Barbara, described what was so “alarming” about Mnuchin’s words.
“It came as a shock to many – including me – when Mnuchin declared that a depreciation of the greenback is ‘obviously’ welcome,” Cohen wrote. “Prolonged depreciation could severely erode the dominant position of the greenback as the world’s leading currency.”
7. Trump strikes a chord
Trump, in his 15-minute speech to forum delegates, sought to make his “America First” policies palatable to an audience much more inclined to international alliances and plotting a course toward a shared future.
While the president’s words were well-received in some quarters for their perceived pragmatism, they struck a discordant note in the ears of at least one person in attendance. Stephen D. Smith, director of the Shoah Foundation at the University of Southern California, argued that Trump’s promotion of a “my country first” brand of isolationism turns the main theme of the forum on its head, more likely fracturing the world than sharing it.
“Trump’s insistence that ‘putting America first’ is his duty as president – just as other heads of state must do so in their own countries – ignores the many political leaders at Davos, such as France’s Emmanuel Macron or Germany’s Angela Merkel, who were genuinely trying to find shared interests at the forum,” Smith wrote.
8. Bullish billionaires
Trump’s words had a very different impact on at least some of the business leaders and billionaires in attendance, who seem to have decided to look past the differences in style they find less appealing and focus on the impact of “pro-business” policies such as tax cuts and deregulation.
However, Georgia State political scientist Charles Hankla cautioned it’s a bit too soon to celebrate.
“It is important to remember that the long-term business costs of Trump’s destabilizing influence are likely to be much greater than any short-term policy benefits,” he argued. “This is because businesses must operate within a social and political context, one which influences their success at every step.”
In 2017 an evangelical perspective influenced many political decisions, as President Donald Trump embraced the key constituency that voted overwhelmingly in his favor. As recently as Dec. 6, President Trump announced that the U.S. would recognize Jerusalem as the capital of Israel, a move embraced by many evangelicals for its significance to biblical prophecy.
Earlier in the year Trump made several other announcements keeping in mind his conservative Christian supporters. He nominated Judge Neil M. Gorsuch, a conservative judge, to the Supreme Court. He also brought evangelical Christian leader Jerry Falwell Jr. to head the White House education reform task force, and Betsy DeVos, a conservative advocate of school choice, to serve as secretary of education.
I’ve been editing The Conversation’s ethics and religion desk since February 2017. As mainstream media outlets covered how Trump was embracing evangelical politics, at The Conversation we strived to provide historical context to these developments, as the following six articles exemplify.
1. History of the end-times narrative
Trump’s move on Jerusalem was widely understood as being linked to biblical prophecy. Many evangelical Christians believe in an end-times narrative that promises the return of Jesus to Earth to defeat all of God’s enemies and establish God’s kingdom. The nation of Israel and the city of Jerusalem are crucial to the fulfillment of this prophecy. This is part of a theology considered to be a literal reading of the Bible.
However, Julie Ingersoll, a religious studies professor at the University of North Florida, explains that this theology is actually a relatively new interpretation, dating to the 19th century and the work of Bible teacher John Nelson Darby.
Darby argued that the Jewish people needed to have control of Jerusalem and build a third Jewish temple on the site where the first and second temples were destroyed. This would be a precursor to the Battle of Armageddon, when Satan would be defeated and Christ would establish his earthly kingdom.
With the creation of the state of Israel in 1948, this theology suddenly seemed feasible, and, as Ingersoll further explained, the end-times framework became popularized in the 1970s and ’80s through novels and movies.
“It’s impossible to overemphasize,” she writes, “the effects of this framework on those within the circles of evangelicalism where it is popular. A growing number of young people who have left evangelicalism point to end-times theology as a key component of the subculture they left. They call themselves ‘exvangelicals’ and label teachings like this as abusive.”
2. The Moral Majority
Evangelicals have for decades played a prominent role in American politics. As Richard Flory, the senior director of research and evaluation at USC Dornsife, wrote, President Trump’s appointment of Jerry Falwell Jr. to spearhead education reform is best explained by his family’s legacy.
Falwell Jr. is a relatively minor political and religious figure. It was his late father, Jerry Falwell Sr., who was, and continues to be, enormously influential in American politics.
Falwell Sr. founded the Moral Majority in 1979 as a conservative Christian political lobbying group that promoted “traditional” family values and prayer in schools and opposed LGBT rights, the Equal Rights Amendment and abortion – all key issues in the Trump administration as well.
“Republican candidates for office, dating back to Reagan and George H.W. Bush,” Flory says, “recognized the power of the religious right as a voting bloc.”
3. Billy Graham and Eisenhower
Before Falwell, it was evangelist Billy Graham who left a deep impact on conservative politics.
The National Prayer Breakfast – now an annual political tradition in Washington, D.C., attended by the American president – was a result of Billy Graham’s efforts with President Dwight Eisenhower.
As USC Annenberg religion scholar Diane Winston writes,
"Soon after his election in 1952, Eisenhower told Graham that the country needed a spiritual renewal. For Eisenhower, faith, patriotism and free enterprise were the fundamentals of a strong nation. But of the three, faith came first.”
It was, indeed, under Eisenhower that Congress voted to add “under God” to the Pledge of Allegiance, and “In God We Trust,” to the nation’s currency.
And readers may recall that it was at the 65th National Prayer Breakfast that President Trump announced his intention to repeal the Johnson Amendment and allow religious leaders to endorse candidates from the pulpit, a pledge he had made on the 2016 campaign trail. The repeal, however, was eventually dropped from the Republican tax reform bill.
4. Christian movements
Besides these prominent individual conservative voices, there are other Christian groups trying to shape American politics and the religious landscape.
Two of our contributors pointed in particular to a fast-growing Christian movement that aims to bring God’s perfect society to Earth by placing “kingdom-minded people” in “powerful positions at the top of all sectors of society.”
Writing about this movement, Brad Christerson, a professor of sociology at Biola University, together with USC’s Richard Flory, explains how its adherents regard Trump as part of that plan. Other kingdom-minded people include Secretary of Energy Rick Perry, Secretary of Education Betsy DeVos and Secretary of Housing and Urban Development Ben Carson.
Christerson and Flory believe this to be the fastest-growing Christian group in America and possibly in the world. Between 1970 and 2010, Protestant churches shrank by an average of 0.05 percent per year, while this group grew by an average of 3.24 percent per year. That number, they say, was “striking,” considering that the U.S. population grew an average of 1 percent per year during the same period.
5. History of pluralism
Over the past year our scholars also pointed out how American politicians – starting with the nation’s founding fathers – have strived to be inclusive.
University of Texas historian Denise A. Spellberg told the story of a 22-year-old Thomas Jefferson purchasing a copy of the Quran when he was a law student in Williamsburg, Virginia, 11 years before drafting the Declaration of Independence.
As she explains, Muslims arrived in North America as early as the 17th century and eventually comprised 15 to 30 percent of the enslaved West African population of British America. The book purchase, she says, was not only “symbolic” of this connection, but also showed America’s early view of religious pluralism.
In Jefferson’s private notes was a paraphrase of the English philosopher John Locke’s 1689 “Letter on Toleration”:
“[he] says neither Pagan nor Mahometan [Muslim] nor Jew ought to be excluded from the civil rights of the commonwealth because of his religion.”
6. An inclusive nation?
Coming to the present, the question is how far President Trump has shifted the rhetoric of inclusiveness.
Trump, too, walked the “well-worn path,” in “proclaiming tolerance and highlighting commonality with Muslims,” wrote David Mislin, an assistant professor at Temple University. Analyzing President Trump’s address to leaders of some 50 Muslim nations during his visit to Saudi Arabia in May 2017, Mislin explained that Trump “used the language of a shared humanity and common God.”
However, Mislin also pointed out that Trump’s speech contained no acknowledgment of the Muslim population in the United States or of its contribution to American society – and that “Islam remains something foreign” for Trump.
Indeed, in this administration – backed by over 80 percent of the white evangelical vote – “the legacy of Falwell Sr. lives on,” writes Richard Flory, “at least for the near term.”
Over the course of 2017, people in the U.S. and around the world became increasingly concerned about how their digital data are transmitted, stored and analyzed. As news broke that every Yahoo email account had been compromised, as well as the financial information of nearly every adult in the U.S., the true scale of how much data private companies have about people became clearer than ever.
These data, of course, bring companies enormous profits, but come with significant social and individual risks. Many scholars are researching aspects of this issue, both describing the problem in greater detail and identifying ways people can reclaim power over the data their lives and online activity generate. Here we spotlight seven examples from our 2017 archives.
1. The government doesn’t think much of user privacy
One major concern people have about digital privacy is how much access the police might have to their online information, like what websites people visit and what their emails and text messages say. Mobile phones can be particularly revealing, not only containing large amounts of private information, but also tracking users’ locations. As H.V. Jagadish at the University of Michigan writes, the government doesn’t think smartphones’ locations are private information. The legal logic defies common sense:
“By carrying a cellphone – which communicates on its own with the phone company – you have effectively told the phone company where you are. Therefore, your location isn’t private, and the police can get that information from the cellphone company without a warrant, and without even telling you they’re tracking you.”
2. Neither do software designers
But mobile phone companies and the government aren’t the only ones with access to data on people’s smartphones. Mobile apps of all kinds can monitor location, user activity and data stored on their users’ phones. As an international group of telecommunications security scholars found, “More than 70 percent of smartphone apps are reporting personal data to third-party tracking companies like Google Analytics, the Facebook Graph API or Crashlytics.”
Those companies can even merge information from different apps – one that tracks a user’s location and another that tracks, say, time spent playing a game or money spent through a digital wallet – to develop extremely detailed profiles of individual users.
3. People care, but struggle to find information
Despite how concerned people are, they can’t easily find out what’s being shared about them, when or with whom. Florian Schaub at the University of Michigan explains the conflicting purposes of apps’ and websites’ privacy policies.
That can leave consumers without the information they need to make informed choices.
4. Boosting comprehension
Another problem with privacy policies is that they’re incomprehensible. Anyone who does try to read and understand them will be quickly frustrated by the legalese and awkward language. Karuna Pande Joshi and Tim Finin from the University of Maryland, Baltimore County suggest that artificial intelligence could help:
“What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts.”
That would certainly make life simpler for users, but it would preserve a world in which privacy is not a given.
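As a toy illustration of the kind of highlighting such an assistant might perform – this sketch is illustrative, not the researchers’ system, and the watch list is an invented example – a few lines of Python can already flag policy sentences that touch on topics a user cares about:

```python
import re

# Topics a user might ask the assistant to watch for (illustrative list).
WATCH_TERMS = ["email address", "third part", "share", "sell", "location"]

def flag_sentences(policy_text: str) -> list:
    """Return privacy-policy sentences that mention a watched topic."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [s for s in sentences
            if any(term in s.lower() for term in WATCH_TERMS)]

policy = ("We value your privacy. We may share your email address "
          "with third parties. Cookies improve your experience.")
for sentence in flag_sentences(policy):
    print("FLAG:", sentence)
```

A real assistant would need natural-language understanding far beyond keyword matching – which is exactly where Joshi and Finin’s artificial intelligence research comes in.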
5. Programmers could help, too
Jean Yang at Carnegie Mellon University is working to change that assumption. At the moment, she explains, computer programmers have to keep track of users’ choices about privacy protections throughout all the various programs a site uses to operate. That makes errors both likely and hard to track down.
Yang’s approach, called “policy-agnostic programming,” builds sharing restrictions right into the software design process. That both forces developers to address privacy, and makes it easier for them to do so.
6. So could a new way of thinking about it
But it may not be enough for some software developers to choose programming tools that would protect their users’ data. Scott Shackelford from Indiana University discussed the movement to declare cybersecurity – including data privacy – a human right recognized under international law.
He predicts real progress will result from consumer demand:
“As people use online services more in their daily lives, their expectations of digital privacy and freedom of expression will lead them to demand better protections. Governments will respond by building on the foundations of existing international law, formally extending into cyberspace the human rights to privacy, freedom of expression and improved economic well-being.”
But governments can be slow to act, leaving people to protect themselves in the meantime.
7. The real basis of all privacy is strong encryption
The fundamental way to protect privacy is to make sure data is stored so securely that only the people authorized to access it are able to read it. Susan Landau at Tufts University explains the importance of individuals having access to strong encryption. And she observes police and the intelligence community are coming around to understanding this view:
“Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection …, which can protect against hacking and other data theft incidents.”
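To give a flavor of what end-to-end encryption means technically, here is a minimal sketch using the PyNaCl library: messages are encrypted for the recipient’s public key and can be read only with the matching private key, so no service in the middle can decrypt them. It is a simplified illustration of the principle, not a complete messaging protocol.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each person generates a keypair; private keys never leave their devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Only Bob, holding his private key, can decrypt what Alice sent --
# the server relaying the ciphertext learns nothing.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

Anyone between Alice and Bob – an email provider, a chat service, an eavesdropper – sees only ciphertext, which is precisely the property Landau argues ordinary people and community groups need.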
One day, perhaps, governments and businesses will have the same concerns about individuals’ privacy as people themselves do. Until then, strong encryption without special access for law enforcement or other authorities will remain the only reliable guardian of privacy.
Editor’s note: The following is a roundup of stories The Conversation has published on the GOP’s sweeping 2017 tax bill.
The Senate’s passage of the Republican tax plan on a party-line vote on Dec. 2 means the most significant overhaul of the U.S. tax code in a generation may be just days away from becoming the law of the land. All that remains is reconciling the Senate’s version with the one passed by the House in mid-November, a couple of additional votes and the president’s signature.
The Senate bill’s nearly 500 pages, some of them with changes handwritten in the margins in the hours before the 2 a.m. vote, contain provisions that will greatly affect every facet of the economy, from health care to higher education and housing.
1. Sickness and the economy
Under the Affordable Care Act’s individual mandate, Americans must buy health insurance or face a penalty. But the Senate’s tax bill would eliminate this rule. According to the Congressional Budget Office, that will lead to 13 million more uninsured Americans by 2027.
That’s bad for the economy, says Diane Dewar, who studies health policy at the University at Albany, SUNY. When people don’t have health insurance, they’re more likely to get sick and miss work. They also may wait to go to the doctor until it’s absolutely necessary, racking up bills for uncompensated care that end up being absorbed by hospitals, insurance companies and others.
“If Americans become less healthy and have less access to health care,” she argues, “then everyone loses.”
2. An attack on higher education
The tax plans passed by the House and Senate contain several provisions that would have a big impact on universities and students alike, such as a tax on university endowments and changes that could significantly increase taxes for some students.
Benjamin Cohen, a professor of international political economy at the University of California, Santa Barbara, explains how these “appalling” provisions “will have adverse economic effects that will be both substantial and long-lasting.”
“Many schools will see their budgets cut,” he continued. “Faced with higher fees and tuition, many students will be forced to drop out – their dreams shattered, their earning potential stunted, their contribution to the American economy significantly curtailed.”
3. It’s not a popularity contest
So will voters punish Republican lawmakers for passing such an unpopular bill during next year’s midterms?
David Barker, director of the Center for Congressional and Presidential Studies at American University, sees the opposite – positive outcomes for incumbents.
“Nothing is more central to Republican orthodoxy than tax cuts for the wealthy,” Barker writes. “If Republican lawmakers hadn’t gotten this done now, while they had the chance, they could have expected donors to ignore their calls next year.”
Of course, passage of the law was also a win for the president – and may help him win reelection in 2020.
“And if Trump wins reelection, everything else that we associate with his candidacy and his presidency may be validated and copied by future politicians, on both sides, as 'the way to win,’” Barker concludes, “leaving a political legacy that may far outlast the consequences of this tax bill.”
4. Tax ‘reform’?
Many refer to the Republican tax plan as a “reform” that will make the tax code simpler.
Taxpayers do want reform – just not the kind in the bills that just passed the House and Senate, argues Stephanie Leiser, a lecturer in public policy at the University of Michigan.
“Research has shown that their most important gripe about taxes is the demoralizing feeling that the system is hopelessly complex and that other people are getting away with not paying their fair share. To use the president’s words, people think the ‘system is rigged.’”
The Republican plan, she writes, “will only exacerbate that feeling.”
5. ‘Dire’ impact on affordable housing
You might not expect a tax plan to have a big effect on the supply of affordable housing, yet Georgetown research fellow Michelle D. Layser explains how it will do just that.
“The supply of affordable housing is so low that there is no state, city or county in the country where a full-time minimum wage employee can afford to rent a two-bedroom unit,” she wrote. “These housing woes are sure to become more dire.”
Editor’s note: The following is a roundup of stories from The Conversation’s archive.
On Sept. 20, 2017, Hurricane Maria tore across Puerto Rico. The Category 4 storm was so massive – 300 miles wide – that it enveloped the island entirely, battering it with 155 mph winds and dropping almost two feet of rain.
The next day, Puerto Ricans awoke to a radically altered reality. Two months after the storm, the island still faces shortages of food, water, electricity, transportation, cell service and medical services – an American humanitarian crisis that even today shows few signs of improvement.
Here, experts answer five key questions about Puerto Rico in the aftermath of Hurricane Maria.
1. What has life been like since the storm?
For Puerto Ricans who stayed on the island, life has been extraordinarily hard. Just finding enough food and water to survive can be a daily struggle, especially for people in rural areas and places cut off from help by washed-out bridges or mudslides.
Evelyn Milagros Rodriguez, a librarian at the University of Puerto Rico’s Humacao campus, offers an insider’s view of what life is like on the island’s eastern shore (in Spanish here).
She hasn’t been able to go to work since the storm, she says, because the library is “mold-infested and the roof is leaking. The mold has gotten into our collection … most of the furniture and computers will have to be replaced.”
After three weeks of a near-total communication blackout, radio, television, telephones and internet are starting to recover. Still, Rodriguez says, electricity comes and goes, and “it took me more than two weeks just to write this article, between finding somewhere to charge my laptop and locating an internet connection strong enough to research the data and send a file by email.”
Hurricane Maria demolished an estimated 100,000 homes and buildings, and 90 percent of the island’s infrastructure is damaged or destroyed.
It’s also unsafe to venture outside at night. An island-wide curfew was lifted in October, but without streetlights, stoplights or police, driving and walking are dangerous after dark.
“Nothing is easy,” Rodriguez reports.
2. Why are things still so bad?
After a decade of fiscal decline and a May 2017 bankruptcy, Puerto Rico was exceptionally vulnerable by the time Maria hit. Before the hurricane, people already struggled with food insecurity, poor health care and crumbling public infrastructure, the result of both damaging U.S. policy and deepening financial crisis.
Now these problems are complicating Puerto Rico’s recovery, asserts Lauren Lluveras, a policy analyst at the University of Texas, Austin.
Lluveras, whose family is Puerto Rican, notes that in addition to preexisting financial hardship, a lackluster federal relief effort has made storm recovery much harder. The Trump administration delayed dispatching military personnel and material relief until after the hurricane made landfall, and allowed a waiver of the Jones Act – a law requiring that goods shipped between American ports travel on U.S.-built, U.S.-crewed vessels – to lapse.
That “reduce[s] the number of ships that can bring aid to the island,” she says. Both of these federal actions “have slowed Puerto Rico’s recovery considerably.”
On Nov. 13, Puerto Rico’s governor asked the federal government for US$94.4 billion in aid. Previously, Congress had approved just $5 billion in disaster funding for Puerto Rico.
3. Why is the power still out?
Two months after Hurricane Maria, roughly 75 percent of Puerto Ricans still don’t have electricity. At times, hundreds of thousands of households – particularly in San Juan and other urban areas – have seen power restored, only to be plunged into darkness again by a system failure.
That’s because almost half of Puerto Rico’s power generation comes from “old, very expensive oil-fired plants,” writes Peter Fox-Penner, director of Boston University’s Institute for Sustainable Energy.
Before it went bankrupt in 2017, PREPA, the island’s sole energy provider, had been hoping to upgrade these aged facilities and incorporate natural gas and renewable energy sources like solar. Then Maria knocked out the entire grid, and all of PREPA’s resources have gone toward just getting Puerto Rico’s lights turned back on.
The island’s extended outage is “a humanitarian crisis that has yet to be resolved,” Fox-Penner writes. He believes that any hope of Puerto Rico emerging from this storm with a greener, more durable grid – one better able to withstand future hurricanes – has been dashed.
On Nov. 17, PREPA’s director resigned.
4. How does living without power for so long affect people?
Shao Lin, a professor of public health at SUNY-Albany, has researched how prolonged blackouts affect health. She believes that Puerto Ricans can expect to see numerous lasting effects from this power outage, including mental health issues.
After Hurricane Sandy, the power was out for about 12 to 14 days in some parts of New York City. For months afterward, Lin found, residents reported more emergency department visits due to anxiety and mood disorders. They were also more prone to excess drinking and problematic drug use.
The power outage in Puerto Rico has already lasted eight weeks, much longer than the blackout in New York City.
As a result, “We should expect to see a corresponding increase in disease – not only mental health issues, but also diseases that depend on electricity for treatment, such as renal failure, asthma and chronic obstructive pulmonary disease,” warns Lin.
5. Will this crisis change how the US treats Puerto Rico?
Pedro Caban, a professor at SUNY-Albany, thinks that the appalling aftermath of Hurricane Maria could improve Puerto Rico’s political status, moving the needle on longstanding harmful American policies.
Puerto Rico is an unincorporated territorial possession of the United States, meaning that the Puerto Rican government exercises only those powers that the Congress allows. “In other words,” says Caban, “it is still a colony.”
The humanitarian crisis there has prompted the Puerto Rican diaspora in the U.S. to fight for their island. They are actively lobbying against some of the most restrictive colonial policies, among them the Jones Act and the oversight board that has controlled Puerto Rico’s budget since it declared bankruptcy earlier this year.
This could be a “watershed moment that redefines U.S. treatment of Puerto Rico,” writes Caban.
Beyond pressuring local officials and the federal government, Puerto Ricans across the U.S. have organized a nationwide campaign to raise funding and collect donations for Puerto Rico. An outraged and emboldened diaspora, it turns out, may finally get the federal government to resolve Puerto Rico’s damaging colonial status.
Editor’s note: The following is a roundup of archival stories.
Federal investigators following up on the mass shooting at a Texas church on Nov. 5 have seized the alleged shooter’s smartphone – reportedly an iPhone – but are reporting they are unable to unlock it, to decode its encryption and read any data or messages stored on it.
The situation adds fuel to an ongoing dispute over whether, when and how police should be allowed to defeat encryption systems on suspects’ technological devices. Here are highlights of The Conversation’s coverage of that debate.
#1. Police have never had unfettered access to everything
The FBI and the U.S. Department of Justice have in recent years – especially since the 2015 mass shooting in San Bernardino, California – been increasing calls for what they term “exceptional access,” a way around encryption that police could use to gather information on crimes both future and past. Technology and privacy scholar Susan Landau, at Tufts University, argues that limits and challenges to investigative power are strengths of democracy, not weaknesses:
“[L]aw enforcement has always had to deal with blocks to obtaining evidence; the exclusionary rule, for example, means that evidence collected in violation of a citizen’s constitutional protections is often inadmissible in court.”
Further, she notes that almost any person or organization, including community groups, could be a potential target for hackers – and therefore should use strong encryption in their communications and data storage:
“This broad threat to fundamental parts of American society poses a serious danger to national security as well as individual privacy. Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection (much like the kind Apple has been developing), which can protect against hacking and other data theft incidents.”
#2. FBI has other ways to get this information
The idea of weakening encryption for everyone just so police can have an easier time is increasingly recognized as unworkable, writes Ben Buchanan, a fellow at Harvard’s Belfer Center for Science and International Affairs. Instead,
“The future of law enforcement and intelligence gathering efforts involving digital information is an emerging field that I and others who are exploring it sometimes call ‘lawful hacking.’ Rather than employing a skeleton key that grants immediate access to encrypted information, government agents will have to find other technical ways – often involving malicious code – and other legal frameworks.”
Indeed, he observes, when the FBI failed to force Apple to unlock the San Bernardino shooter’s iPhone,
“the FBI found another way. The bureau hired an outside firm that was able to exploit a vulnerability in the iPhone’s software and gain access. It wasn’t the first time the bureau had done such a thing.”
#3. It’s not just about iPhones
When the San Bernardino suspect’s iPhone was targeted by investigators, Android researchers William Enck and Adwait Nadkarni at North Carolina State University tried to crack a smartphone themselves. They found that one key to encryption’s effectiveness is proper setup:
“Overall, devices running the most recent versions of iOS and Android are comparably protected against offline attacks, when configured correctly by both the phone manufacturer and the end user. Older versions may be more vulnerable; one system could be cracked in less than 10 seconds. Additionally, configuration and software flaws by phone manufacturers may also compromise security of both Android and iOS devices.”
#4. What they’re not looking for
What are investigators hoping to find, anyway? It’s nearly a given that they aren’t looking for emails the suspect may have sent or received. As Georgia State University constitutional scholar Clark Cunningham explains, the government already believes it is allowed to read all of a person’s email, without the email owner ever knowing:
“[The] law allows the government to use a warrant to get electronic communications from the company providing the service – rather than the true owner of the email account, the person who uses it.
"And the government then usually asks that the warrant be "sealed,” which means it won’t appear in public court records and will be hidden from you. Even worse, the law lets the government get what is called a “gag order,” a court ruling preventing the company from telling you it got a warrant for your email.“
#5. The political stakes are high
With this new case, federal officials risk weakening public support for giving investigators special access to circumvent or evade encryption. After the controversy over the San Bernardino shooter’s phone, public demand for privacy and encryption climbed, wrote Carnegie Mellon professor Rahul Telang:
"Repeated stories on data breaches and privacy invasion, particularly from former NSA contractor Edward Snowden, appears to have heightened users’ attention to security and privacy. Those two attributes have become important enough that companies are finding it profitable to advertise and promote them.
"Apple, in particular, has highlighted the security of its products recently and reportedly is doubling down and plans to make it even harder for anyone to crack an iPhone.”
It seems unlikely this debate will ever truly go away: Police will continue to want easy access to all information that might help them prevent or solve crimes, and regular people will continue to want to protect their private information and communications from prying eyes, whether that’s criminals, hackers or, indeed, the government itself.
Editor’s note: The following is a roundup of stories from The Conversation’s archive.
Once again, Americans are asking themselves the same familiar, heartsick questions:
How can gun violence be prevented? What policy or program could help save innocent lives? What is an approach that would be tolerable to people on both sides of the political spectrum?
#1. No ready answers
“Unfortunately, the research we need to answer these questions doesn’t exist – and part of the problem is that the federal government largely doesn’t support it,” explains Lacey Wallace, a criminal justice researcher at Penn State University.
Why not? In 1996, Congress passed the Dickey Amendment, which mandated that “none of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control.” From 1996 to 2013, CDC funding for gun research dropped by 96 percent.
It’s an urgent problem, Wallace writes. In 2015, for example, roughly 85,000 people were injured by firearms, including nearly 10,000 children.
#2. A public health emergency
One in four American children reports having easy access to a gun in the home, public health researchers point out.
“Parents sometimes do not fully understand child and youth development, impulsiveness or curiosity,” wrote one of them, Boston University’s Ziming Xuan.
“A recent study shows that what parents report about their children’s access to guns often contradicts children’s reports,” he continued. “The kids reveal that they know the location of guns in the house and have handled the gun, while parents reported they did not. For injury prevention, it is far more effective and long-lasting to change the environment by changing modifiable policies and norms than to try to change the way children behave.”
#3. A new debate
A mass shooting often brings out partisan politics. Those who want to regulate guns face off with those who want to protect the Second Amendment, and the political debate never seems to progress.
“A new dialogue is desperately needed among policymakers and the public,” writes Timothy M. Smith, a professor of international business at the University of Minnesota. “It could begin by shifting our focus away from the regulation of guns toward understanding (and mitigating) the social costs of firearm fatalities.”
From a purely economic perspective, Smith writes, the social costs of gun deaths likely exceeded US$300 billion in 2013. This is a staggering number, more than what the federal government spent on Medicaid in the same year. And that’s not including the more than 80,000 nonfatal firearm injuries each year.
Smith’s argument is not that you can put a price on human life, but that a government policy that discourages people from owning the most lethal types of guns – handguns – could protect society as a whole.
#4. Testing existing laws
Michael Siegel, a professor of community health services, and Molly Pahn of Boston University recently created a new database that offers insights into the effectiveness of existing laws across all 50 states for the past 27 years.
“This database is intended to help researchers evaluate the effectiveness of different state-level approaches to reducing gun violence,” Siegel and Pahn write. “By examining the relationship between changes in these laws over time and changes in firearm mortality, researchers may be able to identify which policies are effective and which are not.”
Editor’s note: On Friday, Oct. 20, “Third Rail with OZY” will ask: Is marriage dead?
This roundup of stories from The Conversation archive explores trends and pressures affecting the institution of marriage around the world.
#1: Fewer ‘I dos’
There’s no doubt: Fewer people are making a commitment to marriage.
Barely “more than half of adults in the U.S. say they’re living with a spouse,” writes Jay Zagorsky, an economist at The Ohio State University. “It is the lowest share on record, and down from 70 percent in 1967.”
What’s behind the trend?
“Some blame widening U.S. income and wealth inequality,” Zagorsky writes. “Others point the finger at the fall in religious adherence or cite the increase in education and income of women, making women choosier about whom to marry. Still others focus on rising student debt and rising housing costs, forcing people to put off marriage. Finally some believe marriage is simply an old, outdated tradition that is no longer necessary.”
However, Zagorsky writes, none of these factors alone can explain the trend.
#2: Delayed adolescence
Could it be that marriage rates are down because America’s youth is suffering from a Peter Pan syndrome?
Today’s teens are in no hurry to grow up, according to Jean Twenge, a professor of psychology at San Diego State University.
“The teen pregnancy rate has reached an all-time low,” Twenge writes. “Fewer teens are drinking alcohol, having sex or working part-time jobs. And as I found in a newly released analysis of seven large surveys, teens are also now less likely to drive, date or go out without their parents than their counterparts 10 or 20 years ago.”
“‘Adulting’ – which refers to young adults performing adult responsibilities as if this were remarkable – has now entered the lexicon,” Twenge writes. “The entire developmental path from infancy to full adulthood has slowed.”
This slowed path could also mean a delay in walking down the aisle.
#3: No matchmaking skills
This downward trend in marriage is worth our attention because “stable, satisfying marriages promote physical and mental health for adults and their children,” according to psychology professors Justin Lavner, Benjamin Karney and Thomas Bradbury.
What’s more, they explain, the abandonment of marriage is more pronounced among low-income Americans – prompting the government to try to turn things around.
The professors explain that relationship education programs are the cornerstone of government efforts to strengthen low-income Americans’ relationships and encourage them to get, or stay, married.
By interviewing low-income couples in Los Angeles, the professors concluded that communication may not be the main driver of relationship satisfaction for these couples.
They asked the couples themselves about the biggest sources of disagreement in their marriages. Their responses? Money management, household chores, leisure time, in-laws and children.
#4: Pity the bare branches
The downward trend in marriage is not limited to the United States.
“The United Nations gathered data for roughly 100 countries, showing how marriage rates changed from 1970 to 2005,” Zagorsky notes. “Marriage rates fell in four-fifths of them.”
“Australia’s marriage rate, for example, fell from 9.3 marriages per 1,000 people in 1970 to 5.6 in 2005. Egypt’s declined from 9.3 to 7.2. In Poland, it dropped from 8.6 to 6.5.
"The drop occurred in all types of countries, poor and rich.”
Xuan Li, an assistant professor of psychology at NYU Shanghai, introduced The Conversation readers to China’s involuntary bachelors.
Those “who fail to add fruit to their family tree are often referred to as ‘bare branches,’ or guanggun,” Li writes. “And the Chinese state has recently started to worry about the dire demographic trend posed by the growing number of bare branches.”
According to the 2010 national census data, “82.44 percent of Chinese men between 20 and 29 years of age have never been married, which is 15 percent more than women of the same age.”
Soon after the U.S. gained independence, Uncle Sam began to tax inherited wealth. These levies applied only intermittently, however, until 1916, when Congress and the Wilson administration established the modern estate tax in time for it to finance U.S. involvement in World War I.
Once a significant moneymaker that generated 10 percent of federal tax revenue, the estate tax now reaps only about 1 percent. Just one out of 500 estates gets taxed today – only those left by individuals with at least US$5.5 million to their name, or by couples with more than $11 million.
Still, if Congress were to end the estate tax, as the Trump administration and Republican lawmakers propose, the government might miss those funds. What’s more, nonprofits could see their budgets pinched by a decline in giving.
What happens after repeal
Without an estate tax, there are two likely scenarios. The people inheriting a larger share of great fortunes might give more of their windfalls to charity. Alternatively, they could keep more of the money to invest, enjoy or share with their families and friends.
The estate tax encourages giving by providing a dollar-for-dollar deduction from the taxable estate for any amount of money bequeathed to charities after death. Estates are officially taxed at a 40 percent rate now, but loopholes and workarounds push the average rate down to 17 percent, according to the Tax Policy Center.
When the price of anything rises – whether it be bacon or tennis balls – economists expect demand for that product or service to fall. Without an estate tax, there’s nothing to be gained, accounting-wise, from rich people writing posthumous charitable gifts into their wills.
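A stylized calculation – the dollar amounts are hypothetical, and it ignores exemptions and other deductions – shows how the tax changes the “price” of a bequest:

```python
def net_cost_of_bequest(bequest: float, marginal_rate: float) -> float:
    """Net cost to heirs of a charitable bequest under the estate tax.

    The bequest is deducted dollar-for-dollar from the taxable estate,
    so it also wipes out the tax that would have been owed on that money.
    """
    tax_saved = bequest * marginal_rate
    return bequest - tax_saved

# A $1 million bequest at the statutory 40 percent rate costs heirs
# only $600,000 -- the other $400,000 would have gone to the IRS anyway.
print(net_cost_of_bequest(1_000_000, 0.40))  # 600000.0

# At the 17 percent average effective rate, the incentive is weaker...
print(net_cost_of_bequest(1_000_000, 0.17))  # 830000.0

# ...and with no estate tax at all, giving carries its full price.
print(net_cost_of_bequest(1_000_000, 0.0))   # 1000000.0
```

In economic terms, repeal raises the after-tax price of a charitable bequest, which is why demand for bequests – like demand for bacon or tennis balls – would be expected to fall.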
The question is, do fewer multimillionaires write charities into their wills when this incentive goes away?
The money at stake is significant. Bequest giving has more than tripled in inflation-adjusted dollars over the last 40 years, rising to $30.36 billion in 2016 from $9.7 billion in 1976, according to the Giving USA report, which the Indiana University Lilly Family School of Philanthropy researches and writes in partnership with the Giving USA Foundation.
To be clear, the volume of bequests is often unpredictable over the short term and does not purely track tax policy changes. Frequent adjustments to the estate tax rate and exemption level, as well as market swings – which alter the value of assets like stocks, bonds and fine art – affect what happens in a given year.
So do some deaths. David Rockefeller, the successful banker and heir to a great fortune, who died this year at 101, had a net worth in excess of $3 billion despite giving $2 billion away during his lifetime.
When his estate auctions off an estimated $700 million in European ceramics, Chinese porcelain, paintings, furniture and other items from his assorted collections, the proceeds will go to charity, bumping up the total for bequests.
But I have found in my own research that, controlling for various factors, a 10 percent increase in the estate tax rate is associated with a nearly 7 percent increase in charitable bequest giving.
Conversely, raising the threshold for how large estates must be before they are subject to the tax, knows as the “exemption level,” is associated with decreases in bequest giving, especially when the exemption is at or above $3 million.
My findings indicate that when more wealthy people were exempted from the estate tax altogether, fewer of them wrote charities into their wills, and those who did leave money for a cause tended to make smaller donations.
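To make that first estimate concrete, here is a minimal worked example in Python – my own illustrative calculation, not the study’s – using figures that appear elsewhere in this article:

```python
# A back-of-the-envelope illustration of the estimated relationship, using
# figures quoted elsewhere in this article (a 40 percent statutory top rate,
# $30.36 billion in 2016 bequests). Note both changes are relative: a
# 10 percent increase on a 40 percent rate means 44 percent, not 50 percent.
rate = 0.40                      # statutory top estate tax rate
bequests = 30.36e9               # 2016 bequest giving, per Giving USA ($)

new_rate = rate * 1.10           # a 10 percent relative increase: 40% -> 44%
new_bequests = bequests * 1.07   # the associated ~7 percent rise in giving

print(f"rate: {rate:.0%} -> {new_rate:.0%}")
print(f"bequests: ${bequests / 1e9:.2f}B -> ${new_bequests / 1e9:.2f}B")
```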
For its part, the Congressional Budget Office estimated in 2004 that repealing the estate tax would reduce charitable bequests by a smaller amount – as little as 6 percent.
Since this research is fairly old and lots of things have changed since then, these studies may either underestimate or overestimate the effects of a repeal of the estate tax today.
Personally, I believe that these studies probably underestimate the effects of repeal because extrapolating what would happen with a hypothetical situation is always trickier than modeling an outcome based on a real-world event.
Until 2010, there was no relatively recent evidence of what might happen with a complete repeal. That year, the estate tax was essentially paused.
While its brevity and multiple idiosyncrasies limit what it shows, the episode does provide a case study of what might happen if the estate tax were repealed.
During the decade before 2010, bequests ranged from $18 billion to $24 billion a year – except in 2008, when giving surged to over $31 billion.
In 2009, bequest giving plummeted to $19 billion. This was for two reasons: The exemption level rose from $2 million to $3.5 million and the Great Recession dramatically drove down the value of stocks, bonds, real estate and other assets.
There were two options for the estates of people who died in 2010, when the Great Recession was over but its effects were lingering: Take a $5 million exemption and a 35 percent top marginal tax rate or a $0 exemption and a 0 percent top marginal tax rate.
Unsurprisingly, most chose the tax-free option.
Partly as a result of these two tracks, the Internal Revenue Service still collected $7 billion in estate and gift tax revenue in the 2010 fiscal year – even though theoretically the estate tax had been waived. And bequest giving, according to Giving USA, grew by 22 percent to $23.4 billion between 2009 and 2010 – admittedly from a Great Recession-induced, below-normal amount in 2009.
In 2011, once the estate tax was reinstated and the exemption returned to $5 million with a top marginal tax rate of 35 percent, bequest giving grew 7.6 percent to $25 billion. Since then, the exemption has been adjusted only for inflation. The estate tax rate held steady at 35 percent in 2012, later rising to 40 percent.
Bequest giving remained in the mid-$20 billion range for 2012 and 2013 then edged up to the low $30 billion range – about where it stood prior to the Great Recession in inflation-adjusted dollars.
What does all that mean? While there’s no clear pattern, suspending the estate tax didn’t eliminate bequest giving in 2010, even if it appears to have reduced it.
It’s hard to draw firm conclusions from this episode because few estate lawyers or wealthy people anticipated the one-year repeal. That said, it was rumored at the time that many rich families had prepared multiple wills, to be deployed as needed according to the latest estate tax policies, or took other steps to take advantage of the unusual and shifting circumstances.
And it’s important to remember that people give for many different reasons and that bequests are different from other kinds of donations – a bequest is truly a last chance to support a charity or cause. People usually don’t give just because of a tax deduction, but all the studies I have seen indicate that taxing inherited wealth makes a difference.
What’s more, research also suggests that estate taxes can encourage donors to give more during their lifetimes. Several studies have estimated that eliminating the estate tax would usher in a decline of non-bequest giving in the 6 percent to 12 percent range or more.
In other words, repealing the estate tax would probably reduce giving to charity both during donors’ lifetimes and after their deaths.
Patrick Rooney is affiliated with the public policy advisory committees for Independent Sector and The Philanthropy Roundtable. The author and the IU Lilly Family School of Philanthropy have received grants, contracts, and donations from many foundations, corporations, charities, and individuals. However, none of them funded this research. The views expressed in this essay are strictly my own and do not reflect policy stances of Indiana University or the Lilly Family School of Philanthropy.
Editor’s Note: On Friday, Oct. 6, “Third Rail with OZY” will discuss violence in the United States.
These stories from The Conversation archive explore how violence permeates different aspects of American society.
#1. Kids today
Do American parents teach their kids violent behavior through the use of corporal punishment?
A professor of psychiatry at SUNY Upstate Medical University and Tufts Medical School, Ronald Pies takes up the question “Is it OK to spank a misbehaving child once in a while?”
Pies begins by acknowledging that researchers and parents often disagree on this topic, but ultimately concludes “spanking a child may seem helpful in the short term, but is ineffective and probably harmful in the long term. The child who is often spanked learns that physical force is an acceptable method of problem solving.”
And yet, Pies doesn’t feel that parents who spank their children need a stern lecture – and certainly not an even stronger punishment.
“It isn’t that the parent is ‘evil’ by nature or is a ‘child abuser,’” Pies writes. “Often, the parent has been stressed to the breaking point, and is not aware of alternative methods of discipline – for example, the use of ‘time-outs,’ removal of privileges and positive reinforcement of the child’s appropriate behaviors.”
#2. Paddling still frequent
Unfortunately, parents’ belief in corporal punishment often follows their children to school.
As Joseph Gagnon of the University of Florida writes, “19 states still allow corporal punishment [in schools], despite research that clearly indicates such public humiliation is ineffective for changing student behavior and can, in fact, have long-term negative effects.”
According to Gagnon, approximately 838 students are paddled in American schools every day. Children in less affluent communities are more likely to be hit.
Why is this practice still so pervasive? Gagnon and his colleagues talked to school principals to find out. They learned, “principals cite pressure from parents as a primary reason for using corporal punishment. Despite the science, the idea that corporal punishment is effective, ‘Because that’s how I was raised,’ pervades the discussion.”
#3. A culture of aggression
Of course, schools aren’t the only institutions in the U.S. where physical violence takes place. The criminal justice system is another.
Paul Hirschfield of Rutgers University studies violence perpetrated by police in various countries.
“American police kill a few people each day, making them far more deadly than police in Europe,” Hirschfield writes.
Although the cause of police killings is complex, Hirschfield believes one factor is American gun culture – which causes the police to fear for their own safety in too many situations.
“American police are primed to expect guns …” Hirschfield writes. “It may make American policing more dangerous and combat-oriented. It also fosters police cultures that emphasize bravery and aggression.”
#4. Behind prison walls
Too few of us take the time to think about how that culture of aggression follows prisoners behind bars, writes Heather Ann Thompson, a professor of History and Afroamerican and African Studies at the University of Michigan.
“That so many are blissfully unaware of just how many people are, or have been, subject to containment or control is, perhaps, unsurprising,” Thompson writes. “Prisons are built to be out of sight and are, thus, out of mind.”
And yet, Thompson writes, “the closed nature of prisons remains a serious problem in this country” – and one that demands closer scrutiny.
“In September 2016, prisoners at facilities across the country erupted in protests for better conditions,” Thompson writes. “In March and April of 2017, prisons in Delaware and Tennessee similarly exploded. In each of these rebellions, the public was told little about what had prompted the chaos and even less about what happened to the protesting prisoners once order was restored.”
But, she writes, “it is obvious that much trauma takes place behind bars while we aren’t watching.”
Editor’s note: This is a roundup of gun control articles published by scholars from the U.S. and two other countries where deadly mass shootings are far less common.
An underresearched epidemic
Guns are a leading cause of death of Americans of all ages, including children. Yet “while gun violence is a public health problem, it is not studied the same way other public health problems are,” explains Sandro Galea, dean of Boston University’s School of Public Health.
That’s no accident. Congress has prohibited firearm-related research by the Centers for Disease Control and Prevention and the National Institutes of Health since 1996. Galea says:
“Unfortunately, a shortage of data creates space for speculation, conjecture and ill-informed argument that threatens reasoned public discussion and progressive action on the issue.”
The Australian model
The contrast with Australia is especially stark. Just as Congress was barring any research that might strengthen the case for tighter gun regulations, that country established very strict firearm laws in response to the Port Arthur massacre, which killed 35 people in 1996.
To clamp down on guns, the federal government worked with Australia’s states to ban semiautomatic rifles and pump action shotguns, establish a uniform gun registry and buy the now-banned guns from people who had purchased them before owning them became illegal. The country also stopped recognizing self-defense as an acceptable reason for gun ownership and outlawed mail-order gun sales.
“When it comes to firearms, Australia is a far safer place today than it was in the 1990s and in previous decades,” writes Simon Chapman of the University of Sydney.
There have been no mass murders since the Port Arthur massacre and the subsequent clampdown on guns, Chapman observes. In contrast, there were 13 of those tragic incidents over the previous 18 years – in which a total of 104 victims died. Other gun deaths have also declined.
Concerns about complacency
After so many years with no mass killings, some Australian scholars fear that their country may be moving in the wrong direction.
Twenty years after doing more than any other nation to strengthen firearm regulation, “many people think we no longer have to worry about gun violence,” say Rebecca Peters of the University of Sydney and Chris Cunneen at the University of New South Wales. They write:
“Such complacency jeopardizes public safety. The pro-gun lobby has succeeded in watering down the laws in several states. Weakening the rules on pistols so that unlicensed shooters can walk into a club and shoot without any waiting period for background checks has resulted in at least one homicide in New South Wales.”
In the UK
Like Australia, the U.K. tightened its gun regulations following its own 1996 tragedy – when a man killed 16 children and their teacher at Dunblane Primary School, near Stirling, Scotland.
Subsequently, the U.K. banned some handguns and bought back many banned weapons. There, however, progress has been less impressive, notes Helen Williamson, a researcher at the University of Brighton. On the one hand, the number of firearms offenses has declined from a high of 24,094 in 2004 to 7,866 in 2015. On the other, criminals are growing more “resourceful in identifying alternative sources of firearms,” she says, adding:
“Although the availability of high-quality firearms may have fallen, the demand for weapons remains. This demand has driven criminals to be resourceful in identifying alternative sources of firearms. There are growing concerns about how they could acquire instructions online on how to build a homemade gun, or even 3D-print a functioning pistol.”
Editor’s note: The following is a roundup of previously published articles.
Passwords are everywhere – and they present an impossible puzzle. Social media profiles, financial records, personal correspondence and vital work documents are all protected by passwords. To keep all that information safe, the rules sound simple: Passwords need to be long, different for every site, easy to remember, hard to guess and never written down. But we’re only human! What is to be done about our need for secure passwords?
Get good advice
Sadly, much of the password advice people have been given over the past decade-plus is wrong, and in part that’s because the real threat is not an individual hacker targeting you specifically, write five scholars who are part of the Carnegie Mellon University passwords research group:
“People who are trying to break into online accounts don’t just sit down at a computer and make a few guesses…. [C]omputer programs let them make millions or billions of guesses in just a few hours…. [So] users need to go beyond choosing passwords that are hard for a human to guess: Passwords need to be difficult for a computer to figure out.”
To help, those researchers have developed a system that checks passwords as users create them, and offers immediate advice about how to make each password stronger.
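To give a flavor of what such as-you-type feedback looks like, here is a minimal sketch of a password checker – a simplified stand-in, not the Carnegie Mellon meter itself, with thresholds chosen purely for illustration:

```python
import math
import string

def feedback(password: str) -> list:
    """Return rough, as-you-type suggestions for strengthening a password."""
    tips = []
    if len(password) < 12:
        tips.append("Use at least 12 characters.")
    # Estimate the character pool the password draws from.
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += len(string.punctuation)
    # Crude upper bound on guessing difficulty: log2(pool ** length) bits.
    bits = len(password) * math.log2(pool) if pool else 0
    if bits < 60:
        tips.append(f"Roughly {bits:.0f} bits at best; add length or variety.")
    return tips or ["Looks reasonably strong."]

print(feedback("sunshine1"))   # short, lowercase + digits -> two suggestions
```

A production meter would also check candidates against lists of common and leaked passwords; raw character variety alone overstates strength.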
Use a password manager
All that computing power can work to our advantage too, writes Elon University computer scientist Megan Squire:
“The average internet user has 19 different passwords. It’s easy to see why people write them down on sticky notes or just click the ‘I forgot my password’ link. Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”
That sounds like a good start.
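The generation half of that job is simple enough to sketch with Python’s standard secrets module. The site names below are hypothetical, and a real manager adds encrypted storage under a single master secret:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # secrets draws from a cryptographically secure random source,
    # unlike the random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent, high-entropy password per site.
for site in ("bank.example", "mail.example", "work.example"):
    print(site, generate_password())
```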
Getting emoji – 🐱💦🎆🎌 – into the act
Then again, it might be even better not to use any regular characters. A group of emoji could improve security, writes Florian Schaub, an assistant professor of information and of electrical engineering and computer science at the University of Michigan:
“We found that emoji passcodes consisting of six randomly selected emojis were hardest to steal over a user’s shoulder. Other types of passcodes, such as four or six emojis in a pattern, or four or six numeric digits, were easier to observe and recall correctly.”
Still, emoji are – like letters and numbers – drawn from a finite library of options. So they’re vulnerable to being guessed by powerful computers.
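A quick back-of-the-envelope comparison shows both the promise and the limit. The library size below is an assumption for illustration:

```python
import math

digit_space = 10 ** 6              # six numeric digits
emoji_library = 2_800              # assumed size of a modern emoji picker
emoji_space = emoji_library ** 6   # six independently chosen emoji

print(f"6 digits: {digit_space:.1e} possibilities (~{math.log2(digit_space):.0f} bits)")
print(f"6 emoji:  {emoji_space:.1e} possibilities (~{math.log2(emoji_space):.0f} bits)")
# Far larger than a PIN, but still finite - and so still enumerable by machines.
```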
Drawing toward a solution
To add even more potential variation to the mix, consider making a quick doodle-like drawing to serve as a password. Janne Lindqvist from Rutgers University calls that sort of motion a “gesture,” and is working on a system to do just that:
“We have explored the potential for people to use doodles instead of passwords on several websites. It appeared to be no more difficult to remember multiple gestures than it is to recall different passwords for each site. In fact, it was faster: Logging in with a gesture took two to six seconds less time than doing so with a text password. It’s faster to generate a gesture than a password, too: People spent 42 percent less time generating gesture credentials than people we studied who had to make up new passwords. We also found that people could successfully enter gestures without spending as much attention on them as they had to with text passwords.”
Easier to make, faster to enter, and not any more difficult to remember? That’s progress.
A world without passwords
Any type of password is inherently vulnerable, though, because it is an heir to centuries of tradition in writing, writes literature scholar Brian Lennon of Pennsylvania State University:
“[E]ven the strongest password … can be used anywhere and at any time once it has been separated from its assigned user. It is for this reason that both security professionals and knowledgeable users have been calling for the abandonment of password security altogether.”
What would be left then? Only attributes about who we are as living beings.
The unknowable password
Identifying people based not on what they know, but rather their actual biology, is perhaps the ultimate goal. This goes well beyond fingerprints and retina scans, Elon’s Squire explains:
“[A] computer game similar to ‘Guitar Hero’ [can] train the subconscious brain to learn a series of keystrokes. When a musician memorizes how to play a piece of music, she doesn’t need to think about each note or sequence. It becomes an ingrained, trained reaction usable as a password but nearly impossible even for the musician to spell out note by note, or for the user to disclose letter by letter.”
That might just do away with passwords altogether. And yet if you’re really just longing for the days of deadbolts, padlocks and keys, you’re not alone.
Don’t just leave things to a password
User authentication using an electronic key is here, as Penn State-Altoona information sciences and technology professor Jungwoo Ryoo writes:
“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. (It even typically has a hole to fit on a keychain.) The chip itself contains a method of authenticating itself … And it has USB or wireless connections so it can either plug into any computer easily or communicate wirelessly with a mobile device.”
Just don’t leave your keys on the table at home.
On Friday, Sept. 8, “Third Rail with OZY” opened by asking: “Is truth overrated? Is lying the American way?”
Of course, lies have long been a big part of American politics, but fibs, tall tales and whoppers also affect our home and work lives.
We searched The Conversation archive for stories that explore how, why and when people lie – and what happens as a result.
Do you lie – and why?
Liars aren’t born – but they do start early.
Gail Heyman is a professor of psychology at the University of California, San Diego who studies the skills that children – some as young as three-and-a-half years old – need to develop before they can become successful liars. Heyman acknowledges the corrosive power of lying on relationships, organizations and institutions. But she also admits that lying is “a source of great social power, as it allows people to shape interactions in ways that serve their interests: They can evade responsibility for their misdeeds, take credit for accomplishments that are not really theirs, and rally friends and allies to the cause.”
Have you ever harnessed this “great social power” by telling a lie?
If you answered “no,” perhaps that’s true, but perhaps that’s just something you mistakenly believe – a falsehood.
Ronald W. Pies, a clinical professor of psychiatry at Tufts University School of Medicine, walked us through the difference between those two terms.
Someone “who deliberately misrepresents what he or she knows to be true is lying – typically, to secure some personal advantage,” Pies writes. “In contrast, someone who voices a mistaken claim without any intent to deceive is not lying. That person may simply be unaware of the facts, or may refuse to believe the best available evidence. Rather than lying, he’s stating a falsehood.”
Parsing lies from falsehoods requires us to understand another person’s motivation. That’s tricky business anytime – but it gets more complicated when the speaker you’re scrutinizing is the president of the United States.
Into the political realm
Donald Trump, of course, embraced what Kellyanne Conway later dubbed “alternative facts” in the first official act of his presidency – his inauguration speech.
During the speech, Trump claimed that unemployment went up under President Obama. It didn’t, as researchers at the University of Florida point out, but 67 percent of Trump’s supporters believed it at the time. Such misinformation contributes to Americans’ sense that there is a “reality gap” between conservatives and liberals in the United States.
But UF’s Lauren Griffin writes that these far-fetched claims aren’t “lies,” but something she sees as much more dangerous – bullshit.
Griffin quotes the philosopher Harry Frankfurt as explaining that a bullshitter “does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.”
Of course, “politically motivated skepticism of science is certainly not new,” as Elizabeth Suhay, an assistant professor of government at American University, observed. Trump didn’t invent the divide between scientists and politicians – or between science and policy.
“Science is consistently a political target precisely because of its political power,” Suhay writes. “The problem for science and evidence-based policy comes when politicians and other political actors decide to discredit the science on which a conclusion is based or bend the science to support their policy position. Call it ‘policy-based evidence’ as opposed to ‘evidence-based policy.’”
Could an embrace of “policy-based evidence” harm U.S. credibility in the world?
One example to consider is how the world reacted when American leadership turned its back on settled climate science and withdrew from the Paris Agreement. But perhaps embracing such hard truths is overrated.
Do you disagree? Make your voice heard in Third Rail’s online poll or tweet with the hashtag #ThirdRailPBS.
Nearly half a million people are expected to seek federal aid in the aftermath of the Category 4 hurricane, which has already dumped more than 30 inches of rain on the Houston area.
While this horrible hurricane is extreme, the number of disasters has doubled globally since the 1980s, with the damage and losses estimated at an average US$100 billion a year since the new millennium, and the number of people affected also growing.
Hurricane Katrina in 2005 was the costliest natural disaster in the U.S., with estimates between $100 billion and $125 billion. The death toll of Katrina is still being debated, but we know that at least 2,000 were killed and thousands were left homeless.
Worldwide, the toll is staggering. The triple disaster of an earthquake, tsunami and nuclear meltdown that started March 11, 2011 in Fukushima, Japan killed thousands, as did the 2010 Haiti earthquake.
The challenges to disaster relief organizations, including nongovernmental organizations, are immense. The majority operate under a single, common humanitarian principle: protecting the vulnerable, reducing suffering and supporting quality of life. At the same time, they must compete for funding from donors to ensure their own sustainability.
This competition is intense. The number of registered U.S. nonprofit organizations increased from 12,000 in 1940 to more than 1.5 million in 2012. Approximately $300 billion is donated to charities in the United States each year.
At the same time, many stakeholders believe that humanitarian aid has not been as successful in delivering on its goals due to a lack of coordination among NGOs, which results in duplication of services.
My team and I have been looking at a novel way to improve how we respond to natural disasters. One solution might be game theory.
Getting the right supplies to those in need is daunting
The need for improvement is strong.
Within three weeks of the 2010 earthquake in Haiti, 1,000 NGOs were operating in the country. News media attention to insufficient water supplies resulted in immense donations to the Dominican Red Cross to assist its island neighbor. As a result, Port-au-Prince was so saturated with cargo and gifts-in-kind that shipments from the Dominican Republic had to be halted for multiple days. After the Fukushima disaster, donors shipped too many blankets and items of clothing – and even broken bicycles.
In fact, about 60 percent of the items that arrive at a disaster site are nonpriority items. Rescue workers then waste precious time dealing with these nonpriority supplies, whereas victims suffer because they do not receive the critical needs supplies in a timely manner.
The delivery and processing of wrong supplies also adds to the congestion at transportation and distribution nodes, overwhelms storage capabilities and results in further delays of necessary items. The flood of donated inappropriate materiel in response to a disaster is often referred to as the second disaster.
The economics of disaster relief is challenging on both sides. On the supply side, organizations need to secure donations and ensure their own financial sustainability. On the demand side, victims’ needs must be fulfilled in a timely manner while avoiding wasteful duplication and logistical congestion.
Game theory in disasters
Game theory is a powerful tool for the modeling and analysis of complex behaviors of competing decision-makers. It received a tremendous boost from the contributions of the Nobel laureate John Nash.
Game theory has been used in numerous disciplines, from economics, operations research and management science to political science.
In the context of disaster relief, however, little work has been done to harness the power of game theory. It is nevertheless clear that disaster relief organizations compete for funding, and that donors respond to the visibility organizations gain, through media coverage, as they deliver relief supplies to victims.
We modeled the costs incurred in delivering relief supplies, including congestion, the gain from delivering goods (since these NGOs are nonprofits and also wish to do good) plus the financial donations they stand to acquire from media exposure at the disaster sites and compete for.
These elements made up each NGO’s “utility” function, which each organization sought to maximize individually. The NGOs also faced constraints on the volume of relief supplies that they had pre-positioned and could distribute to victims of the disaster.
We examined two scenarios:
When the NGOs were free from satisfying common minimum and maximum amounts of the relief item demands at points of need (a Nash Equilibrium model);
When the NGOs had to make sure they delivered the minimum needed supplies at each demand point for the victims but did not exceed the maximum amounts set by a higher-level organization.
Such constraints guarantee that the victims would be served appropriately while, at the same time, minimizing materiel convergence and congestion associated with unnecessary supplies (a Generalized Nash Equilibrium model because of the common/shared constraints). Such bounds would correspond to policies imposed by a higher-level humanitarian or governmental organization.
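To give a flavor of the modeling, here is a minimal sketch – with made-up parameters and a single demand point, not the actual model from our study – of two NGOs choosing shipment volumes, solved by best-response iteration:

```python
# Each NGO i ships q_i and gets utility:
#   U_i = (a - b*(q_i + q_j)) * q_i   media/donation gain, eroded by congestion
#       + g * q_i                     altruistic benefit of delivering aid
#       - c * q_i**2                  convex logistics cost
# with 0 <= q_i <= supply. Setting dU_i/dq_i = 0 gives the best response below.

def best_response(q_other, a=10.0, b=1.0, g=2.0, c=0.5, supply=8.0):
    q = (a + g - b * q_other) / (2 * (b + c))
    return max(0.0, min(supply, q))

q1 = q2 = 0.0
for _ in range(100):                          # iterate to a fixed point
    q1, q2 = best_response(q2), best_response(q1)

print(f"Nash equilibrium shipments: q1 = {q1:.3f}, q2 = {q2:.3f}")
# A Generalized Nash variant adds shared constraints - e.g., q1 + q2 must lie
# between the victims' minimum need and a congestion cap set by a coordinator.
```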
Policies and implications
We used a case study of Hurricane Katrina, because of its historic catastrophic nature.
We built the models using publicly available data, with the NGOs corresponding to the Red Cross, the Salvation Army and “other” NGOs collectively. Since Louisiana suffered the brunt of the damages, we selected, as demand points, 10 parishes in Louisiana.
Applying computer-based algorithms, we computed the relief item flows and the utilities of the NGOs in the noncooperative games without imposed policies in the form of bounds (Nash Equilibrium) and with (Generalized Nash Equilibrium).
An actionable framework for NGO decision-makers
A comparison of the outcomes under the Nash and Generalized Nash Equilibria quantifiably showed that coordination is critical to achieving better outcomes in humanitarian relief operations.
The Generalized Nash solution not only eliminates the possibility of under- or over-supply, it also guarantees – through competition – the efficient allocation of resources once the minimum requirements are met.
Without such imposed bounds, relief organizations may choose an “easy” route for delivering supplies because it is less costly, rather than the route that leads to where the need is greatest.
The game theory framework therefore has significant benefits both for disaster victims and for the NGOs. We also demonstrated that, under certain circumstances, the Generalized Nash solution is capable of attracting more donations than the unrestricted, competitive solution.
Our study has numerous implications to guide coordinating authorities. It provides a strong argument for the importance of these coordinating bodies in successful humanitarian relief efforts.
Specifically, our research demonstrates that, if authorities can impose the constraints on upper and lower demand levels for relief supplies, they can provide an effective mechanism to improve the disaster response. Response teams need a certain amount of supplies to save lives but not so much that it results in congestion and waste.
Governmental agencies or NGOs need to come together to set these values.
The Generalized Nash Equilibrium Game Theory model provides managers of NGOs with a strategic framework to analyze their interactions with other NGOs, while also providing insights into their own operations. Moreover, as our study reveals, the framework answers fundamental questions that every NGO must address: (1) How and where should we provide aid? and (2) How can we finance those operations? A computer-based model that can answer these questions provides an actionable framework for NGO decision-makers.
Our study further suggests that, despite the competition among NGOs for fundraising, there are strong reasons for them to collaborate, thereby strengthening their disaster response and achieving better results for those in need. In fact, our game theory analysis quantifiably shows that cooperation among NGOs may increase financial donations to all NGOs.
This is an updated version of an article that ran in The Conversation on March 9, 2017.
Anna Nagurney does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond her academic appointment.
The following is a roundup of previously published articles.
The U.S. electricity grid, the sprawling network that delivers power to our homes and businesses, is changing rapidly – a point few experts will debate. But how policies should guide the future of the grid – and specifically which fuel sources should be used – is a highly contentious question.
Department of Energy Secretary Rick Perry in April asked DOE staff to prepare a study, released on August 24, assessing electricity markets and grid reliability. News of the review caused great trepidation among solar and wind advocates because Perry had singled out the importance of nuclear and coal – a favorite of President Trump – in maintaining grid reliability.
In the end, the study said the sharp decline in natural gas prices over the past decade is the primary reason coal has become less economic, rather than the spread of wind and solar. The report also found that wind and solar, which provide power intermittently, have not caused any insurmountable problems in the grid’s functioning – yet.
Lessons from Texas
Reactions to the report’s release have been mixed, and it’s not clear what policies might follow from it. But academics have written extensively about the dramatic changes happening behind the scenes on the grid. Most notably, four energy experts from the University of Texas looked at what happened when wind energy surged on the Texas grid, known as ERCOT – much of it during Perry’s time as governor.
Wind power did not crash the Texas grid because the state reformed how it operates its wholesale energy markets, they said. Yes, grid operators need to rely more on natural gas plants to compensate for varying wind and solar, but the wholesale price for energy has gone down. They wrote:
“Research at UT Austin shows that while installing significant amounts of solar power would increase annual grid management costs by $10 million in ERCOT, it would reduce annual wholesale electricity costs by $900 million. The result of all this is that renewables compete with conventional sources of power, but they do not displace nearly as much coal as cheap natural gas. In fact, cheap gas displaces, on average, more than twice as much coal than renewables have in ERCOT.”
Nuclear power plant operators cheered the DOE’s report because it noted the crucial role of nuclear in the current grid and recommended faster reviews for new plant construction. But should the federal government provide subsidies, as New York has done in one case, to keep nuclear power plants in operation?
Nuclear engineering professor Arthur Motta argued that policies should recognize the fact that nuclear power is reliable and produces no emissions during operation.
“Subsidizing carbon-free sources is justifiable to provide for the future greater good of the country because they provide climate change and clean air benefits. Perversely, however, the U.S. Environmental Protection Agency and most states have declined to consider rewarding the same benefits from existing nuclear power plants.”
On the other hand, Peter Bradford from the Vermont Law School and a former Nuclear Regulatory Commission member said that nuclear has always struggled to be economic, and policies to favor nuclear will cost consumers. He wrote,
“Nuclear power producers want government-mandated long-term contracts or other mechanisms that require customers to buy power from their troubled units at prices far higher than they would pay otherwise. Providing such open-ended support will negate several major energy trends that currently benefit customers and the environment.”
The stated rationale behind the study was that the U.S. grid needs to ensure it has “baseload” power sources that can operate around the clock, as nuclear, coal and natural gas plants can. And the DOE study does note that it’s worth studying what happens with deeper penetration of solar and wind, because they could cause reliability issues in the future.
California is on the vanguard of this change. As the state prepares to shut down its last nuclear plant, UCLA researchers Eric Daniel Fournier and Alex Ricklefs explained that it will need a number of techniques, including energy storage, to meet its aggressive renewable energy targets. They wrote,
“Careful planning is needed to ensure that energy storage systems are installed to take over the baseline load duties currently held by natural gas and nuclear power, as renewables and energy efficiency may not be able to carry the burden.”
Meanwhile, the future of coal still does not look particularly bright – at least in the U.S., wrote Lucas Davis of the University of California, Berkeley:
“This dramatic change has meant tens of thousands of lost coal jobs, raising many difficult social and policy questions for coal communities. But it’s an unequivocal benefit for the local and global environment. The question now is whether the trend will continue in the U.S. and, more importantly, in fast-growing economies around the world.”
Editor’s note: The following is a roundup of archival stories.
On June 19, the U.S. Supreme Court announced that it would hear Gill v. Whitford, a case on partisan gerrymandering in Wisconsin.
This controversial practice – where states are carved up into oddly shaped electoral districts favoring one political party over another – has already ignited debates in a number of states, including North Carolina, Pennsylvania and Maryland.
The Supreme Court’s decision may provide some long-awaited guidance on whether gerrymandering is constitutional. To better understand what this news means, we turned to stories in our archive.
What is gerrymandering?
Gerrymandering is far from a new problem, explains Michel Balinski at École Polytechnique – Université Paris-Saclay, nor will this be the first time that the Supreme Court has considered it:
Practiced as a political art form for some two centuries, gerrymandering is now an exact science. Computer programs using vast data banks describing sociological, ethnic, economic, religious and political characteristics of the electorate determine districts – often of incredibly weird contours – that favor the party that drew the maps.
For an example of those weird contours, take a look at Ohio’s ninth district, nicknamed “the snake on the lake” for the way it stretches from Toledo to Cleveland.
“The representation of communities is made a mockery by maps that either splinter cities and counties or overwhelm them with voters ‘tacked’ into the district from distant rural areas,” writes Richard Gunther at The Ohio State University.
Americans often seem proud of their democracy, notes Pippa Norris at Harvard University, but experts rank U.S. elections among the worst in all Western democracies. According to one analysis, the U.S. scores only 62 on a 100-point assessment of election integrity.
There are many issues with our electoral process – including problems with campaign finance and voter registration – but gerrymandering stands out as the worst, writes Norris:
[A] large part of the blame can be laid at the door of the degree of decentralization and partisanship in American electoral administration. Key decisions about the rules of the game are left to local and state officials with a major stake in the outcome. For example, gerrymandering arises from leaving the processes of redistricting in the hands of state politicians, rather than more impartial judicial bodies.
Thanks to gerrymandering, Democrats likely won’t win back the House in 2018 or 2020, predict experts at Strathclyde University, University of Richmond, University of California, Irvine and California Polytechnic State University. They argue that it’s difficult for today’s politicians to claim that gerrymandered districts occurred by accident:
If a state government could have drawn unbiased districts, but chose to draw biased districts instead, then it has engaged in deliberate gerrymandering. It cannot claim that it did not realize what it was doing – modern districting software has allowed enough people to see the partisan consequences.
In search of solutions
Federal law dictates that congressional districts “distribute population evenly, be connected and be ‘compact,’” explains Kevin Knudson at the University of Florida.
Scholars have proposed a handful of ideas of how to redraw congressional districts more fairly. States might consider changing how votes are tabulated or appointing an independent commission to redraw the lines. Or, they could turn to new mathematical techniques and run simulations in search of the best map.
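One simple ingredient in such analyses is a numerical compactness score. The sketch below implements the Polsby-Popper measure – one common choice in this literature, though not necessarily the method Knudson describes – with made-up district figures:

```python
import math

def polsby_popper(area_sq_km, perimeter_km):
    # Ratio of the district's area to that of a circle with equal perimeter:
    # 1.0 for a perfect circle, near 0 for snaking, contorted shapes.
    return 4 * math.pi * area_sq_km / perimeter_km ** 2

# Made-up figures: a tidy 10 km x 10 km square vs. a long, winding district.
print(polsby_popper(100.0, 40.0))    # ~0.785
print(polsby_popper(100.0, 400.0))   # ~0.008
```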
Some voters might wonder why all the bother, says Knudson:
One approach is to do nothing and leave the system as it is, accepting the current situation as part of the natural ebb and flow of the political process. But when one political party receives a majority of votes nationally yet does not have control of the House of Representatives – as occurred in the 2012 election – one begins to wonder if the system needs some tweaks.
Not just politics
Gerrymandering is often discussed in the realm of politics. But Derek W. Black at the University of South Carolina explores a case in Alabama where school districts have been redrawn to create racially segregated schools. He notes that this seems to be an unfortunate pattern across the country:
In many areas, this racial isolation has occurred gradually over time, and is often written off as the result of demographic shifts and private preferences that are beyond a school district’s control.
To commemorate African-American Music Appreciation Month this June, California Senator Kamala Harris released a Spotify playlist with songs spanning genres and generations, from TLC’s “Waterfalls” to Marvin Gaye’s “What’s Going On.”
In a nod to the integral role African-American musicians play in the country’s rich musical legacy, we’ve decided to highlight our own “playlist” of articles, pieces that feature icons like Michael Jackson and Tupac Shakur, along with forgotten – but no less important – voices, from Elizabeth Taylor Greenfield to the Rev. T.T. Rose.
The first black pop star is born
Before Aretha Franklin, before Ella Fitzgerald, there was Elizabeth Taylor Greenfield. A self-taught opera singer born in 1820, Greenfield had to overcome the belief that blacks couldn’t actually sing.
Penn State music instructor Adam Gustafson tells the story of Greenfield’s rise, which made audiences reconcile their racism with their ears:
“Greenfield was met with laughter when she took to the stage. Several critics blamed the uncouth crowd in attendance; others wrote it off as lighthearted amusement. Despite the inauspicious beginning, critics agreed that her range and power were astonishing.”
By the early 20th century, Americans were clamoring for the albums of black artists. The music industry was eager to oblige, but cordoned them off into a distinct genre: “race music.”
One of the most prominent early race labels was Paramount Records. Between 1917 and 1932, Paramount recorded a breathtaking range of seminal African-American artists. Unfortunately, as Penn State’s Jerry Zolten explains, black artists like the Rev. T.T. Rose and the Pullman Porters Quartet were ruthlessly exploited – and eventually forgotten.
“Bottom line: if record companies could get away with it, there was no bottom line. No negotiated contract to sign. No publishing. No royalties. Anonymity was also implicit in the deal, so many black artists were forgotten, their only legacy the era’s brittle shellac disks that were able to withstand the wear of time.”
University of Maryland, Baltimore County’s Clifford Murphy describes how these same industry forces tried to pigeonhole an ex-con named Huddie “Lead Belly” Ledbetter as a black blues artist.
But Lead Belly loved country stars like Gene Autry, and while he sang blues and spirituals, he also created songs influenced by the string band traditions of the white working class. Promoters, however, were interested in only a certain type of song:
“Though he had an immense repertoire, he was urged to record and perform songs like ‘Pick A Bale of Cotton,’ while songs considered ‘white,’ like ‘Silver Haired Daddy of Mine,’ were either downplayed or cast aside… Lead Belly was constrained by a commercial and cultural industry that wanted to present a certain archetype of African-American music.”
Michael Jackson breaks the mold
Only later would black artists be able to move freely across musical genres. Perhaps no artist stitched together a more diverse range of styles and influences than Michael Jackson, the King of Pop.
But Jackson was simultaneously derided as “Wacko Jacko,” a hopelessly deluded freak. McMaster University’s Susan Fast sees it differently. To Fast, the way Jackson lived his life was an extension of the risks he took in his music. Both were united by a central tenet: to collapse boundaries considered irrevocable.
“Michael Jackson – gender ambiguous; adored and reviled; human, werewolf, panther; black, white, brown; child, adolescent, adult – shattered the assumptions of a society that craves neat categories and compartmentalization. Order and normality are illusions, he said through his life and art.”
The triumph and tragedy of Tupac
In the 1980s, hip-hop – then a budding musical genre – found itself gravitating toward black nationalist messages. It was during this time that Tupac Shakur, the son of a Black Panther, came of age.
While R&B, soul and jazz musicians were largely silent about the challenges poor black communities faced, Tupac, in his music, directly confronted the hostile forces that threatened him and his peers: mass incarceration, poverty, illegal drugs and police brutality. But in Tupac’s meteoric rise and swift fall, UConn’s Jeffrey O.G. Ogbar sees the tragedies of an entire generation of black youth:
“Tupac’s life isn’t just an embodiment of the struggles, contradictions, creativity and promise of a generation. It also serves as a cautionary tale. His life’s abrupt end was a consequence of the allure of success, much like the pull of the streets.”
This article is based on a collection of archival stories.
When it comes to energy, perhaps the only thing President Trump loves more than coal is oil and gas. Just a day shy of 100 days into his presidency, Trump is expected on April 28 to sign an executive order to open more offshore oil drilling in U.S. waters.
The move is meant to spur the economy and reverse President Obama’s decision last December to ban drilling from large swaths of sensitive marine environments. Regardless of whether Trump succeeds in overturning Obama’s protections, it’s clear oil won’t be flowing from new offshore wells anytime soon. Why is that? And will Americans even support this promised offshore boom? Our academic experts offer some answers.
Obama used an obscure provision of a 1953 law to “indefinitely” protect 120 million acres of marine environments in the Arctic and Atlantic oceans. Environmental law professor Patrick Parenteau from the University of Vermont explains the legal justification for the ban and why Congress is a critical player in Trump’s plans to open up drilling.
The law “does not provide any authority for presidents to revoke actions by their predecessors. It delegates authority to presidents to withdraw land unconditionally. Once they take this step, only Congress can undo it,” Parenteau writes.
Lessons from Shell’s misadventures
Meanwhile, are oil and gas companies clamoring to get into the Arctic and other offshore sites? Two years ago, Royal Dutch Shell pulled out of the waters off Alaska, citing disappointing results from its exploratory well. It also ran into serious troubles, requiring help from the U.S. Coast Guard, after its offshore drilling rig broke loose and ran ashore. Despite all the technical challenges, though, oil majors have been eyeing the Arctic for decades – and facing opposition to their plans, writes historian Brian Black from Penn State.
“A dramatic emphasis on Arctic drilling reopens debate on the pros and cons of development, arguments that have remained largely unchanged since interest commenced in the 1960s. These include the challenges of technology and climate; impacts on wildlife and native peoples living in the region; and strong resistance from environmental organizations,” Black says.
When it comes to safety, Shell’s about-face in the Arctic was instructive, says Robert Bea, an expert on assessing and managing risk from the University of California, Berkeley. While guidelines for offshore drilling have been updated following the Deepwater Horizon blowout disaster in the Gulf of Mexico, the Department of Interior guidelines do not follow the best available safety processes, Bea says.
“Reliance is being placed on the Department of Interior best practices of experienced-based, ‘piece by piece’ prescriptive guidelines and regulations. These have not been proved or demonstrated to be adequate for the unique drilling systems, operations and environment involved in Shell’s operations in the Chukchi Sea this summer,” he wrote in reviewing Shell’s troubles.
Even with these safety concerns and relatively low oil prices, the industry is moving ahead in the Arctic, in part because of fracking, the drilling technique that revolutionized the energy industry onshore, from Pennsylvania to North Dakota.
“Although it has gone largely unnoticed outside the industry, foreign firms are partnering with American companies to pursue these new possibilities. I expect this new wave of Arctic development will help increase U.S. oil production and influence in world oil markets for at least the next several decades,” Scott Montgomery from the University of Washington wrote recently.
Meanwhile, how does the U.S. public feel about offshore drilling overall? Certainly, few consumers will complain about cheaper gasoline – one of the justifications for boosting oil production. But support for offshore drilling dipped substantially after the Deepwater Horizon spill in 2010, according to an analysis of public polling by David Konisky from Indiana University.
“The fluctuating nature of recent public opinion suggests sensitivity to external factors such as accidents, oil prices, and Middle East politics. But more broadly, one can reasonably interpret the U.S. public as divided on how to achieve the right balance between energy development and environmental protection,” he wrote.
Editor’s note: The following is a roundup of archival stories.
March 21 marks the anniversary of the third protest march from Selma led by Dr. Martin Luther King Jr. that culminated on the steps of the Capitol in Montgomery, Alabama, demanding voting rights for African-Americans.
The first march started on March 7, 1965, but ended in violence. The second march started on March 9. The third march started on March 21, with 3,200 people under the protection of federal troops. By the time the marchers reached the state Capitol in Montgomery on March 25, their numbers had swelled to 25,000.
Scholars writing for The Conversation have emphasized the relevance of King’s nonviolent – and successful – resistance movement today.
Here are some highlights from The Conversation’s coverage.
America’s crisis today
In considering the importance of looking to the past for role models among black leaders, Bowdoin College historian Brian J. Purnell points to the many problems in American cities in the 21st century and how they have led to the emergence of different forms of protest.
“The cost of living in American cities rises each year while for decades wages for working people have flat-lined. Public schools in cities like New York City and Philadelphia are now more racially segregated than they were in the 1960s. The Supreme Court has limited policies that promoted affirmative action and voting rights.”
Purnell highlights the “Black Lives Matter” movement:
“Like the civil rights movement of the 1950s and 1960s, today’s leaders are fighting for African-Americans’ human and civil rights.”
There is also, since November 2016, “a widespread and resolute discontent with the election of President Trump,” as Christopher Beem, managing director of the McCourtney Institute for Democracy at Pennsylvania State University, puts it. Many protesters, he says, want to “resist” and “to stop what they see as his degradation of our democracy.”
What can the protesters learn from King’s vision?
‘All men are created equal’
King’s vision was to build a more inclusive and just community. As Beem writes,
“At the very core of the Declaration of Independence and thus at the center of American life was the belief that ‘all men are created equal.’”
Beem argues that King led one of the most successful, nonviolent resistance movements in American history. Quoting historian August Meier, Beem says, King succeeded because he was “a conservative militant.”
King was not a conservative “in any political sense,” explains Beem. He was a “democratic socialist” who opposed the Vietnam War and emphasized that racism in America meant the United States was not living up to its own ideals. In King’s words, American culture was “the very antithesis” of what it claimed to believe.
"The American ideal "all men are created equal” constituted what King called a “promissory note.” In each case, ordinary citizens demanded that that promise be honored. And through their actions, the nation was made more free and more just.
By framing the cause of civil rights in words and ideas that most Americans strongly identified with, King was able to appeal to their innate patriotism. What’s more, those who stood against his cause were, by implication, the ones who could be seen as un-American.“
King fought on behalf of poor people. In fact, argues Joshua F.J. Inwood, an ethicist at Pennsylvania State University’s Rock Ethics Institute, King’s later work focused on ending poverty, although that focus is “often ignored.” Says Inwood,
“When King was assassinated in Memphis he was in the midst of building toward a national march on Washington, D.C. that would have brought tens of thousands of economically disenfranchised people to advocate for policies that would ameliorate poverty. This effort – known as the ‘Poor People’s Campaign’ – aimed to dramatically shift national priorities to the health and welfare of working peoples.”
King’s idea of love
What then was at the core of King’s strategy?
Scholars argue that King strove to bring people together. Howard University’s Kenyatta Gilbert has studied the preaching of African-American ministers:
"King brought people of every tribe, class and creed closer toward forming "God’s beloved community” – an anchor of love and hope for humankind.“
However, Inwood explains, love for King was not a “mushy or easily dismissed emotion.” King advocated “agape” – “a love that demanded that one stand up for oneself and tell those who oppress that what they were doing was wrong.”
Inwood writes that agape was at the center of the movement King was building. Agape made it a
“moral imperative to engage with one’s oppressor in a way that showed the oppressor the ways their actions dehumanize and detract from society.”
Why this matters now
“What would Martin Luther King be doing if he were alive today? The Selma of 1965 no longer exists. But the Selma, or Ferguson, or Staten Island, or Cleveland of 2015 shows that history isn’t finished.”
In reviewing the film “Selma,” Baick points out that King’s oratory was only one of his many skills. He explains,
King’s voice was only one of his tools. There was his vision, his ferocity, his strategic and tactical organizing skills, and his willingness to sacrifice.
Then there was King’s humility. As Beem suggests in another article, humility is a much-needed virtue; he points to the “overwhelming” scientific evidence on human biases. But humility, he writes, also “means that you are aware of your own failures, and are respectful of those with whom you disagree.”
Beem explains that King acknowledged his limited perspective. In his letter from Birmingham jail, he wrote,
“If I have said anything in this letter that overstates the truth or indicates an unreasonable impatience, I beg you to forgive me.”
Editor’s note: The following is a roundup of archival stories related to the proposed American Health Care Act and the Affordable Care Act, commonly called Obamacare.
Turmoil around health care policy is reaching a fever pitch in Washington. But politicians have been working for decades to provide health insurance to the tens of millions of people in the U.S. who go without it. This number includes millions not covered by employer-sponsored insurance, those excluded from coverage because they had preexisting conditions and those who lacked the money to pay for health insurance on their own.
The Affordable Care Act, passed under President Barack Obama in 2010, made it illegal for insurers to deny coverage because of preexisting conditions. It also provided subsidies to millions who could not afford to buy insurance on their own. And to further help the poor, it expanded Medicaid to cover more lower-income people. To help offset new costs, the law required that all people of a certain income buy health insurance. This was called the “individual mandate.”
These provisions brought about dramatic changes to the health care market, explained economists Darius Lakdawalla of the University of Southern California and Anup Malani of the University of Chicago:
“These insurance expansions are projected to cost roughly US$1.4 trillion over the first 10 years after the ACA’s implementation and cover between 22 and 32 million additional Americans. This expansion in coverage represents one of the, if not the, signal achievement of the ACA.”
The law was decried by Republicans who quickly dubbed it “Obamacare.” The law was challenged twice in the Supreme Court, with opponents arguing the individual mandate was unconstitutional. It withstood both challenges. Insurance experts, such as J.B. Silvers from Case Western Reserve University, explained why the mandate was essential from a business perspective.
“This so-called individual mandate also guaranteed business for the insurance companies, because it led healthy people into the risk pool.”
The implementation of the law had problems. Premium prices rose, and many insurers dropped out of the law’s marketplaces. Hillary Clinton vowed to fix the broken parts, but Donald Trump campaigned to replace what he called “the disaster that is Obamacare.”
Now, with Republicans in control of the House, Senate and presidency, they are working to create a plan that would be cheaper and provide better health insurance than the ACA did. On March 7, 2017, House Speaker Paul Ryan introduced their version of health care reform, the American Health Care Act, calling it an “act of mercy.” Detractors have criticized the bill for providing tax cuts to the rich and for leaving millions once more without insurance.
Megan Foster Friedman detailed how the proposed law would affect health care for the poor:
“In addition, beginning in 2020, the bill would shift Medicaid to a per-capita cap. This would be a fundamental restructuring of the Medicaid program, affecting over 70 million people.”
The Congressional Budget Office announced March 13 that, by its analysis, 24 million people would lose health insurance over the next 10 years.
Health services expert Bill Custer from Georgia State University explained how there could be other effects as well: Even middle-class people could lose insurance. Without the individual mandate, insurers would have a harder time being profitable, and if that happens, individual insurance markets in some states could collapse.
“But when healthy individuals choose not to purchase health insurance, insurers are left with costs greater than their premium income. That forces insurers to increase their premiums, which in turn leads healthier individuals to drop coverage, increasing average claims costs.”
Editor’s note: The following is a roundup of archival stories.
On March 14, or 3/14, mathematicians and other obscure-holiday aficionados celebrate Pi Day, honoring π, the Greek symbol representing an irrational number that begins with 3.14. Pi, as schoolteachers everywhere repeat, represents the ratio of a circle’s circumference to its diameter.
What is Pi Day, and what, really, do we know about π anyway? Here are three-and-a-bit more articles to round out your Pi Day festivities.
A silly holiday
First off, a reflection on this “holiday” construct. Pi itself is very important, writes mathematics professor Daniel Ullman of George Washington University, but celebrating it is absurd:
The Gregorian calendar, the decimal system, the Greek alphabet, and pies are relatively modern, human-made inventions, chosen arbitrarily among many equivalent choices. Of course a mood-boosting piece of lemon meringue could be just what many math lovers need in the middle of March at the end of a long winter. But there’s an element of absurdity to celebrating π by noting its connections with these ephemera, which have themselves no connection to π at all, just as absurd as it would be to celebrate Earth Day by eating foods that start with the letter “E.”
And yet, here we are, looking at the calendar and getting goofily giddy about the sequence of numbers it shows us.
There’s never enough
In fact, as Jon Borwein of the University of Newcastle and David H. Bailey of the University of California, Davis, document, π is having a sustained cultural moment, popping up in literature, film and song:
Sometimes the attention given to pi is annoying. On 14 August 2012, the U.S. Census Office announced the population of the country had passed exactly 314,159,265. Such precision was, of course, completely unwarranted. But sometimes the attention is breathtakingly pleasurable.
Come to think of it, pi can indeed be a source of great pleasure. Apple’s always comforting, and cherry packs a tart pop. Chocolate cream, though, might just be where it’s at.
Of course π appears in all kinds of places that relate to circles. But it crops up in other places, too – often where circles are hiding in plain sight. Lorenzo Sadun, a professor of mathematics at the University of Texas at Austin, explores surprising appearances:
Pi also crops up in probability. The function f(x) = e^(−x²), where e = 2.71828… is Euler’s number, describes the most common probability distribution seen in the real world, governing everything from SAT scores to locations of darts thrown at a target. The area under this curve is exactly the square root of π.
It’s enough to make your head spin.
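Where does that square root of π come from? The claim in Sadun’s excerpt is the classical Gaussian integral; as a quick reference, in our notation rather than the article’s, the identity and the normalized form that statisticians use are:

```latex
% Gaussian integral: the area under f(x) = e^{-x^2} over the whole real line.
\[
  \int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}
\]
% Substituting x -> x/\sqrt{2} and dividing by the total area yields the
% standard normal density, which integrates to exactly 1; this is where
% \sqrt{\pi} hides inside the familiar bell curve.
\[
  \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}\,dx = 1
\]
```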
If you want to engage with π more directly, follow the lead of Georgia State University mathematician Xiaojing Ye, whose guide starts thousands of years ago:
The earliest written approximations of pi are 3.125 in Babylon (1900-1600 B.C.) and 3.1605 in ancient Egypt (1650 B.C.). Both approximations start with 3.1 – pretty close to the actual value, but still relatively far off.
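Those ancient values were not decimal guesses but simple fractions. As a quick arithmetic check (ours, not part of Ye’s article), the standard readings of the two sources work out as follows:

```latex
% Babylonian tablet value, conventionally read as the fraction 25/8:
\[
  \frac{25}{8} = 3.125
\]
% Egyptian value from the Rhind papyrus, which computes a circle's area
% as (8d/9)^2, implicitly approximating pi by (16/9)^2:
\[
  \left(\frac{16}{9}\right)^{2} = \frac{256}{81} \approx 3.1605
\]
```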
By the end of his article, you’ll find a method to calculate π for yourself. You can even try it at home!
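In that do-it-yourself spirit, here is a minimal sketch, in Python, of one way to estimate π at home: a Monte Carlo simulation. This is an illustration of the general idea, not necessarily the method in Ye’s guide, and the function name estimate_pi is our own invention.

```python
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """Estimate pi by Monte Carlo sampling (illustrative, not Ye's method).

    Points drawn uniformly from the unit square land inside the quarter
    circle of radius 1 with probability pi/4, so 4 * (hits / samples)
    converges to pi as the sample count grows.
    """
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

if __name__ == "__main__":
    print(estimate_pi())  # typically prints a value near 3.14
```

With a million samples the estimate is usually good to two or three decimal places; the error shrinks only like one over the square root of the sample count, so squeezing out many digits this way is slow.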
An irrational bonus
And because π is irrational, we’ll irrationally give you yet one more, from education professor Gareth Ffowc Roberts at Bangor University in Wales, who highlights the very humble beginnings of the symbol π:
After attending a charity school, William Jones of the parish of Llanfihangel Tre’r Beirdd landed a job as a merchant’s accountant and then as a maths teacher on a warship, before publishing his first book, A New Compendium of the Whole Art of Navigation, on the mathematics of navigation in 1702. On his return to Britain he began to teach maths in London, possibly starting by holding classes in coffee shops for a small fee.
Shortly afterwards he published “Synopsis palmariorum matheseos,” a summary of the state-of-the-art developments in mathematics at the time, which reflected his own particular interests. In it is the first recorded use of the symbol π as the number that gives the ratio of a circle’s circumference to its diameter.
What made him realize that this ratio needed a symbol to represent a numeric value? And why did he choose π? It’s all Greek to us.
Faith, religious institutions and spirituality are all part and parcel of American life. But they are often misunderstood. That is why we are excited to launch today, with support from the Lilly Endowment Inc., our Ethics & Religion desk.
Research on topics such as the diversity of evangelical movements, the history of Islam in America and the ethics of genetic engineering is being done in hundreds of academic institutions across the U.S. But these scholars’ voices have not been regularly heard in the general media. Now they will be.
As you peruse our new section, you will find articles analyzing what the Bible says about welcoming refugees, examining what is left out of the teaching of Islam and telling the inside story of the National Prayer Breakfast.
As always, we’re keen to hear what you think. We very much welcome all our readers getting in touch with suggestions for topics to explore and stories to tell as we build The Conversation’s coverage of ethics and religion. And if you are an academic with an idea for an article that may fit in this or any section, pitch it our way.
Editor’s note: The following is a roundup of archival stories, and is an updated version of an article previously published Jan. 24, 2017.
Ajit Pai, President Trump’s chairman of the Federal Communications Commission, is a longtime foe of net neutrality. He has proposed completely repealing the Obama administration’s 2015 Open Internet Order, a decision the commission will likely vote to confirm on Dec. 14.
1. Public interest versus private profit
The basic conflict is a result of the history of the internet, and the telecommunications industry more generally, writes internet law scholar Allen Hammond at Santa Clara University:
Like the telephone, broadcast and cable predecessors from which they evolved, the wire and mobile broadband networks that carry internet traffic travel over public property. The spectrum and land over which these broadband networks travel are known as rights of way. Congress allowed each network technology to be privately owned. However, the explicit arrangement has been that private owner access to the publicly owned spectrum and rights of way necessary to exploit the technology is exchanged for public access and speech rights.
The government is trying to balance competing interests in how the benefits of those network services are distributed. Should people have unfiltered access to any and all data services, or should some internet providers be allowed to charge a premium to let companies reach audiences more widely and more quickly?
2. Media is the basis of democracy
Pai’s move against net neutrality, media scholar Christopher Ali at the University of Virginia writes, is just part of a larger effort at the FCC to accelerate the deregulation trend of the past 30 years. The stakes are high:
Media is more than just our window on the world. It’s how we talk to each other, how we engage with our society and our government. Without a media environment that serves the public’s need to be informed, connected and involved, our democracy and our society will suffer….
If only a few wealthy companies control how Americans communicate with each other, it will be harder for people to talk among ourselves about the kind of society we want to build.
3. Pushing back against corporate control
Competition is already fairly limited, it turns out. Across America, most people have very little – if any – choice in who their internet provider is. Communication studies professor Amanda Lotz at the University of Michigan explains the concerns raised by a monopoly marketplace and the potential effects of turning back the current policy of net neutrality:
The rules were created out of concern internet service providers would reserve high-speed internet lanes for content providers who could pay for it, while relegating to slower speeds those that didn’t – or couldn’t, such as libraries, local governments and universities. Net neutrality is also important for innovation, because it protects small and start-up companies’ access to the massive online marketplace of internet users.
In this view, the internet is a public utility that should be preserved and protected for all to access freely.
4. Getting around the rules
Even with net neutrality rules in place, companies were pushing the boundaries of what is legal. In recent years, many mobile internet providers have imposed limits on how much data their customers can use in a given month while simultaneously creating exemptions from those limits. Called “zero rating” policies, these exemptions omit certain types of data, or certain companies’ data, from the monthly cap. For example, T-Mobile customers can listen endlessly to Spotify internet radio regardless of how much high-speed data they use for other purposes. Information systems scholars Liangfei Qiu, Soohyun Cho and Subhajyoti Bandyopadhyay at the University of Florida examined the effects of those policies on the marketplace:
At first glance, zero rating plans would seem to be good for consumers because they allow users to consume traffic for free. But our research suggests the variety of content may be reduced, which in the long run harms consumers.
Their findings suggest that keeping the internet open would be best for the public.
5. Regulation isn’t always a good solution
However, regulating with that sort of goal could be risky because of the fast-changing nature of the internet, writes technology policy scholar Scott Wallsten at Georgetown:
Today’s business models may not be viable in the future. Net neutrality rules run counter to that reality by freezing in place a particular industry structure, making it difficult for firms to respond to underlying changes in technology and consumer demand over time.
6. A vestige of the 20th century
Whether net neutrality rises or falls, however, the debate will continue. The rules and frameworks the government uses to try to regulate the internet are long out of date, and were written to address a very different time, when landline telephone service was not yet ubiquitous. Boston University communication and law professor T. Barton Carter explained what the real solution is:
The laws governing the internet were written in the early 20th century, decades before the companies that dominate the internet like Google and YouTube even existed. The only solution is a complete rewrite of the 80-year-old Communications Act – unfortunately a fool’s errand in today’s Washington.
7. Can net neutrality even happen?
And maintaining net neutrality itself could be a major challenge, if not a fool’s errand, thanks to important technical details that could make the ideal impossible, writes University of Michigan computer scientist Harsha Madhyastha:
If one user is streaming video and another is backing up data to the cloud, should both of them have their data slowed down? Or would users’ collective experience be best if those watching videos were given priority? That would mean slightly slowing down the data backup, freeing up bandwidth to minimize video delays and keep the picture quality high.
8. Check for yourself
Northeastern University computer scientist David Choffnes describes how his team built an app that can measure exactly how internet service providers handle different types of traffic:
The methods we used and the tools we developed investigate how internet service providers manage your traffic and demonstrate how open the internet really is – or isn’t – as a result of evolving internet service plans, as well as political and regulatory changes. Regular people can explore their own services with our mobile app for Android, which is out now; an iOS version is coming soon.
Letting people see whether, and how, their data service manipulates their internet traffic may be the best way to show people the importance of an open internet.
9. Very large stakes
If net neutrality is repealed, it could spell disaster for America’s position as an international leader in online innovation, writes global business scholar Bhaskar Chakravorti at Tufts:
Based on our findings, I believe that rolling back net neutrality rules will jeopardize the digital startup ecosystem that has created value for customers, wealth for investors and globally recognized leadership for American technology companies and entrepreneurs. The digital economy in the U.S. is already on the verge of stalling; failing to protect an open internet would further erode the United States’ digital competitiveness, making a troubling situation even worse.
10. Setting clearer guidelines
If Pai’s proposal goes through, it will signal that future changes in partisan control in Washington, D.C., could also lead to major shifts in internet regulation. A key part of this potential problem is a lack of clarity in the laws, meaning regulators and courts have to sort through major policy questions that would be better dealt with in Congress, writes Timothy Brennan, a former chief economist at the FCC who is now a public policy scholar at the University of Maryland, Baltimore County. He explains three steps Congress could take to simplify the debate – without even having to agree on the policy itself:
If Congress could enact legislation that removed the distinction between “telecommunication” and “information” services, reinforced the importance of the public interest in communications and restored antitrust enforcement power for regulators, the FCC would be better able to develop net neutrality regulations – whatever they may turn out to be – with solid substantive and legal foundations.
That could go a long way to furthering both public debate and public policy.