The Conversation: Where Journalists Associated with Universities Publish
The Conversation is a not-for-profit media outlet that publishes news and commentary written by university-affiliated academics and researchers.
About a decade ago, a modest update to Apple’s iPhoto software showed me a new way to study architectural history. The February 2009 update added facial recognition, allowing users to tag friends and loved ones in their photos. After a few faces were tagged, the software would begin to offer suggestions.
But it wasn’t always accurate. Though Apple’s algorithm continues to improve, it had a tendency to find faces in objects – not just statues or sculptures of people, but even cats or Christmas trees. For me, the possibilities became clearest when iPhoto confused a human friend of mine – I’ll call him Mike – with a building called the Great Mosque of Cordoba.
The ceiling of the mosque’s forecourt supposedly resembled Mike’s brown hair. The layering of two Visigothic archways supposedly resembled the area between Mike’s hairline and the edge of his brow. Finally, the related alignment of the Moorish cusped arches with their striped stonework resembled Mike’s eyes and nose just enough that the software thought a 10th-century mosque was the face of a 21st-century human.
Rather than viewing this as a failure, I realized I had found a new insight: Just as people’s faces have features that can be recognized by algorithms, so do buildings. That began my effort to perform facial recognition on buildings – or, more formally, “architectural biometrics.” Buildings, like people, may just have biometric identities too.
Facing the building
In the late 19th century, railway stations were built across Canada and the Ottoman Empire, as both governments sought to expand control of their territory and their regional influence. In each country, a centralized team of architects was charged with designing dozens of similar-looking buildings to be constructed throughout a vast frontier landscape. Most of the designers had never been to the places their buildings would go, so they had no idea whether there were steep slopes, large rock outcroppings or other terrain variations that might have led to design changes.
In both Canada and the Ottoman Empire, construction supervisors on the actual sites had to do their best to reconcile the official blueprints with what was possible on the ground. With communications slow and difficult, they often had to make their own changes to the buildings’ designs to accommodate local topography, among other variable conditions.
What’s more, the people who actually did the building came from an ever-changing multinational labor force. In Canada, workers were Ukrainian, Chinese, Scandinavian and Native American; in the Ottoman Empire, workers were Arab, Greek and Kurdish. They had to follow directions given in languages they didn’t speak and understand blueprints and drawings labeled in languages they didn’t read.
As a result, the engineers and workers’ own cultural notions of what a building should look like and how it should be constructed left their figurative fingerprints on what was built, and how it looked. In each place, there are subtle differences. Some stations’ wooden window frames are beveled, some roofs have finials, and some rounded arches are replaced with ever-so-slightly pointed arches.
Other design changes may have happened more recently, with renovations and restorations. Meanwhile, time has worn down materials, weather has damaged structures and, in some cases, animals have added their own elements – like birds’ nests.
The people behind the facades
In the Canadian and Ottoman case studies, many people had opportunities to influence the final building. The variations are quite like differences between people’s faces – most people have two eyes, a nose, a mouth and two ears, but exactly how those features are shaped and where they’re placed can vary.
Thinking about buildings as objects with biometric identities, I began to use analysis similar to facial recognition to find the subtle differences in each building. My team and I used laser scanners to take detailed 3-D measurements of railway stations in Turkey and Canada. We processed the raw data to create computerized models of those measurements.
That, in turn, revealed the hands of the builders, highlighting the geographic and multicultural influences that shaped the resulting buildings.
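The comparison at the heart of this method can be sketched as a toy computation: given a point cloud from a laser scan and one sampled from the blueprint model, the mean distance from each scanned point to its nearest blueprint point flags where the built structure departs from the drawings. The function name and data below are hypothetical illustrations, not the team's actual pipeline, which works with millions of points and proper model registration.

```python
import math

def nearest_neighbor_deviation(scan, blueprint):
    """Mean distance from each scanned point to its closest blueprint point.

    A crude stand-in for the scan-to-model comparison step: larger values
    flag places where the built structure departs from the drawings.
    """
    total = 0.0
    for p in scan:
        total += min(math.dist(p, q) for q in blueprint)
    return total / len(scan)

# Hypothetical toy data: four blueprint corner points vs. a slightly
# shifted "as built" version of the same corner.
blueprint = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
as_built = [(0.05, 0, 0), (1.02, 0, 0), (0, 0.98, 0), (0, 0, 1.1)]

print(round(nearest_neighbor_deviation(as_built, blueprint), 4))  # → 0.0475
```

In practice the deviations of interest are exactly these small, local ones: a beveled frame here, a slightly pointed arch there, each a trace of a builder's hand.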
This evidence called into question previous assumptions that buildings, like a sculpture or a painting, are primarily influenced by just one person. Our work has shown that buildings really only begin with drawings, but then invite the input of a vast number of creators, most of whom never achieve the heroic status of architect or designer.
To date, there are no good methods to even try to identify these people and highlight their artistic choices. The absence of their voices has only tended to prop up the idea that architecture is made only by brilliant individuals.
As 3-D scanners become increasingly common, perhaps even elements of smartphones, our method will be available to almost anyone. People will use this technology on large objects like buildings, but small ones too. At present, our group is working with Paleoindian points, more commonly known as “arrowheads,” to explore a very different history, geography and set of circumstances than we did with the railway stations.
Peter Christensen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A Starbucks manager in Philadelphia called the police on two black men on April 12, leading to their arrest. The two men, who had been waiting for a friend at the store, were released without being charged.
Starbucks has since apologized and announced it will close more than 8,000 of its stores in the United States to provide “racial bias” training for its 175,000 employees. Starbucks’ COO Roz Brewer said the sessions would focus on “unconscious bias training,” a form of diversity education that focuses on the hidden causes of everyday racial discrimination.
Unconscious bias training has become a popular approach to diversity education. The trainings often begin with demonstrations of how the mind operates in ways that are outside of conscious awareness or control. These demonstrations show that people make, and sometimes act on, snap judgments based on the other person’s race, without any conscious intention.
Research shows that this source of racial discrimination can be reduced in a number of ways. For example, setting objective criteria for decision-making could have made a difference in the Starbucks incident. As Starbucks CEO Kevin Johnson described, the manager used personal judgment in calling the police. Formal rules that prevent the influence of racial bias in calling the police could have prevented the incident altogether.
Some unconscious bias trainings incorporate discussions of solutions such as these. But there is no standard format for trainings. Some involve little more than a series of narrated PowerPoint slides. Others involve expert instructors who hold small, intensive workshops that can last for days.
The novelty of unconscious bias training means there is little direct evidence about whether it works. To determine its potential, researchers have turned to clues from other types of training.
One study looked at older types of diversity trainings that focused on the negative legal consequences of discrimination. It found that such trainings can backfire when managers resent the possibility that they could be singled out for punishment.
By contrast, employees may be more open to unconscious bias training because it focuses on how bias is universal, rather than singling out a few “bad apples.”
However, other research shows that highlighting the prevalence of bias makes people more likely to express their bias.
Unconscious bias training will not solve the whole problem. Discrimination has other causes that aren’t fully dealt with in this kind of training, such as explicit prejudice or policies that have disparate impacts on people of different races. Effective solutions will require multiple approaches to addressing discrimination, not just one.
This story has been updated to reflect the correct date on which the incident at a Philadelphia Starbucks occurred.
Calvin K. Lai is the Director of Research at Project Implicit, a non-profit for research and education on implicit bias
The new film “A Quiet Place” is an edge-of-your-seat tale about a family struggling to avoid being heard by monsters with hypersensitive ears. Conditioned by fear, they know the slightest noise will provoke a violent response – and almost certain death.
Audiences have come out in droves to dip their toes into its quiet terror, and they’re loving it: It’s raked in over US$100 million at the box office and has a 95 percent rating on Rotten Tomatoes.
Like fairy tales and fables that dramatize cultural phobias or anxieties, the movie may be resonating with audiences because something about it rings true. For hundreds of years, Western culture has been at war with noise.
Yet the history of this quest for quietness, which I’ve explored by digging through archives, reveals something of a paradox: The more time and money people spend trying to keep unwanted sound out, the more sensitive to it they become.
Be quiet – I’m thinking!
As long as people have lived in close quarters, they’ve been complaining about the noises other people make and yearning for quiet.
In the 1660s, the French philosopher Blaise Pascal speculated, “the sole cause of man’s unhappiness is that he does not know how to stay quietly in his room.” Pascal surely knew it was harder than it sounds.
But in modern times, the problem seems to have gotten exponentially worse. During the Industrial Revolution, people swarmed to cities roaring with factory furnaces and shrieking with train whistles. German philosopher Arthur Schopenhauer called the cacophony “torture for intellectual people,” arguing that thinkers needed quietness in order to do good work. Only stupid people, he thought, could tolerate noise.
Charles Dickens described feeling “harassed, worried, wearied, driven nearly mad, by street musicians” in London. In 1856, The Times echoed his annoyance with the “noisy, dizzy, scatterbrain atmosphere” and called on Parliament to legislate “a little quiet.”
It seems the more people started to complain about noise, the more sensitive to it they became. Take the Scottish polemicist Thomas Carlyle. In 1831, he moved to London.
“I have been more annoyed with noises,” he wrote, “which get free access through my open windows.”
He became so triggered by noisy peddlers that he spent a fortune soundproofing the study of his house on Cheyne Row in Chelsea. It didn’t work. His hypersensitive ears perceived the slightest sound as torture, and he was forced to retreat to the countryside.
The war on noise
By the 20th century, governments all over the world were engaged in an endless war on noisy people and things. After successfully silencing the tug boats whose tooting tormented her on the porch of her Riverside Avenue mansion, Mrs. Julia Barnett Rice, the wife of venture capitalist Isaac Rice, founded the Society for the Suppression of Unnecessary Noise in New York in order to combat what she called “one of the greatest banes of city life.”
Counting as members over 40 governors, and with Mark Twain as their spokesman, the group used its political clout to get “quiet zones” established around hospitals and schools. Violating a quiet zone was punishable by fine, imprisonment or both.
But focusing on noise only made her more sensitive to it. Like Carlyle, Rice turned to architects and built a quiet place deep under the ground, where her husband, Isaac, could work out his chess gambits in peace.
Inspired by Rice, anti-noise organizations sprang up around the globe. After World War I, with ears across Europe still ringing from explosions, the transnational culture war against noise really took off.
Cities all over the world targeted noisy technologies, like the Klaxon automobile horn, which Paris, London and Chicago banned by ordinance in the 1920s. In the 1930s, New York Mayor Fiorello La Guardia launched a “noiseless nights” campaign aided by sensitive noise-measuring devices stationed throughout the city. New York passed dozens of laws over the next several decades to muzzle the worst offenders, and cities throughout the world followed suit. By the 1970s, governments were treating noise as environmental pollution to be regulated like any industrial byproduct.
Planes were forced to fly higher and slower around populated areas, while factories were required to mitigate the noise they produced. In New York, the Department of Environmental Protection – aided by a van filled with sound-measuring devices and the words “noise makes you nervous & nasty” on the side – went after noisemakers as part of “Operation Soundtrap.”
After Mayor Michael Bloomberg instituted new noise codes in 2007 to ensure “well-deserved peace and quiet,” the city installed hypersensitive listening devices to monitor the soundscape and citizens were encouraged to call 311 to report violations.
Yet legislating against noisemakers rarely satisfied our growing desire for quietness, so products and technologies emerged to meet the demand of increasingly sensitive consumers. In the early 20th century, sound-muffling curtains, softer floor materials, room dividers and ventilators kept the noise from the outside from coming in, while preventing sounds from bothering neighbors or the police.
But as Carlyle, Rice and the family in “A Quiet Place” found out, creating a sound-free lifeworld is nearly impossible. Certainly, as Hugo Gernsback learned with his 1925 invention the Isolator – a lead helmet with viewing holes connected to a breathing apparatus – it was impractical.
No matter how thoughtful the design, unwanted sound continued to be a part of everyday life.
Unable to suppress noise, disquieted consumers started trying to mask it with wanted sound, buying gadgets like the Sleepmate white noise machine or by playing recorded sounds of nature, from breaking waves to rustling forests, on their stereos.
Today, the quietness industry is a booming international market. There are hundreds of digital apps and technologies created by psychoacoustic engineers for consumers, including noise cancellation products with adaptive algorithms that detect outside sounds and produce anti-phase sonic waves, rendering them inaudible.
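The anti-phase idea can be illustrated in a few lines: invert a waveform sample-for-sample and the sum cancels. Real noise-canceling products must estimate the incoming sound adaptively and in real time; this sketch shows only the idealized case, with a made-up sample rate and a pure tone standing in for the noise.

```python
import math

RATE = 8_000   # samples per second (arbitrary for this sketch)
FREQ = 440.0   # the "noise": a pure A4 tone

def tone(n, freq=FREQ, rate=RATE):
    """n samples of a sine wave, standing in for the unwanted sound."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

noise = tone(1_000)
anti_phase = [-s for s in noise]              # 180-degree phase inversion
residual = [a + b for a, b in zip(noise, anti_phase)]

print(max(abs(s) for s in residual))  # → 0.0
```

The engineering challenge, of course, is that real-world noise is not a known sine wave: the adaptive algorithms mentioned above exist precisely to predict the incoming sound quickly enough to generate its inverse.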
The marketing efforts for these products aim to convince us that noise is intolerable and the only way to be happy is to shut out other people and their unwanted sounds. This same fantasy is mirrored in “A Quiet Place”: The only moment of relief in the whole “silent horror film” is when Evelyn and Lee are wired in together, swaying gently to their own music and silencing the world outside their earbuds.
In a Sony ad for their noise canceling headphones, the company depicts a world in which the consumer exists in a sonic bubble in an eerily empty cityscape.
Content as some may feel in their ready-made acoustic cocoons, the more people accustom themselves to life without unwanted sounds from others, the more they become like the family in “A Quiet Place.” To hypersensitized ears, the world becomes noisy and hostile.
Maybe more than any alien species, it’s this intolerant quietism that’s the real monster.
Matthew Jordan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Traditionally, at a college or university commencement, before degrees are conferred, some well-respected, often scholarly figure gives a charge to the class. Historically, this has been a speech that most people endure rather than embrace, praying that it is short so that they can get on with getting their diploma. I have three degrees. I only attended my doctoral ceremony because they were going to call my name, and I would walk across the stage.
But I have no idea who gave the commencement address.
In recent years, colleges and universities have begun to see commencement as an advancement opportunity. A well-known celebrity or beloved public figure can lead to branding opportunities, raising the profile of the institution. Simultaneously, many students, having gone into debt for their degrees, began to expect to be rewarded with a memorable commencement speaker.
And while traditional commencement speakers are still the norm, from CEOs to politicians, celebrity commencement speakers are fairly routine. And we’re talking the biggest names: Oprah, Tom Hanks, Jim Carrey, J. K. Rowling, Ellen DeGeneres and Robert De Niro. Even Kermit the Frog gave the commencement address at Southampton College in 1996.
So why not rappers?
Once thought of as a fad that would disappear, hip-hop is undeniably the most influential art form today. It influences our daily language, is used extensively to market products and services, and is the most widely sold – and streamed – music form today. Hip-hop is popular culture, an American idea that has spread around the globe. In fact, the recent announcement that Kendrick Lamar was awarded a Pulitzer Prize for his album “Damn,” the first awarded to a hip hop artist, has spurred dozens of articles about hip-hop’s significance.
But because hip-hop is often viewed through its problematic elements, like profanity and misogyny, some might think that a rapper has no place at a commencement exercise. There is an idyllic view of the commencement speaker as a paragon of virtue, a role model presented to graduates who not only speaks inspiring words but lives them. Yet consider the number of disgraced or imprisoned men and women who have given commencement speeches at America’s most prestigious institutions, and the numerous times schools have rescinded honorary degrees from people who lived a lie that we bought. Against that record, hip-hop artists’ honesty, even if uncomfortable, is refreshing.
I teach a class on hip-hop, sex, gender and ethical behavior. One semester, a local minister had us think about hip-hop artists in relation to Biblical figures. He argued that hip-hop takes a questioning stance, much like we find in the Psalms or in the book of Habakkuk. Historically, the mantra of hip-hop has been to “keep it real,” as artists often through their lyrics wrestle with the sacred, the secular and the profane.
In essence, hip-hop wrestles with real-life issues through imperfect people, just like the Bible. That’s why, in my view, hip-hop artists make for perfect commencement speakers.
Queen Latifah was one of the first hip-hop artists to give a commencement address, in 2004 at Delaware State University. Since then we have seen Chuck D, Sean “Diddy” Combs, David Banner, Common, Kanye West and Pharrell Williams. A great deal of thoughtful music has been made by this group. This year, Queen Latifah, will.i.am and, at my institution, Chance The Rapper will share their thoughts with graduates. Most are thoughtful people, not simply performers.
Chuck D of the group Public Enemy is famous for saying celebrity is the drug of choice in America. Universities should ask themselves how we make sure the drug is a stimulant to inspire innovation and initiative, rather than a depressant that numbs the senses. Hip-hop artists can be that stimulant, using their celebrity to instigate action.
More could step to the mic
I don’t expect schools to abandon the CEO, politician, journalist or author as commencement speakers. But there is a message within Questlove or Black Thought, MC Lyte or J. Cole, Rapsody or Dee-1, Kendrick Lamar or Jay-Z.
Walter M. Kimbrough does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Climate change is rapidly warming the Earth and altering ecosystems on land and at sea that produce our food. In the oceans, most added heat from climate warming is still near the surface and will take centuries to work down into deeper waters. But as this happens, it will change ocean circulation patterns and make ocean food chains less productive.
In a recent study, I worked with colleagues from five universities and laboratories to examine how climate warming out to the year 2300 could affect marine ecosystems and global fisheries. We wanted to know how sustained warming would change the supply of key nutrients that support tiny plankton, which in turn are food for fish.
We found that warming on this scale would alter key factors that drive marine ecosystems, including winds, water temperatures, sea ice cover and ocean circulation. The resulting disruptions would transfer nutrients from surface waters down into the deep ocean, leaving less at the surface to support plankton growth.
As marine ecosystems become increasingly nutrient-starved over time, we estimate global fish catch could be reduced 20 percent by 2300, and by nearly 60 percent across the North Atlantic. This would be an enormous reduction in a key food source for millions of people.
Ocean food production and the biological pump
Marine food production starts when the sun shines on the ocean’s surface. Single-celled, mostly microscopic organisms called phytoplankton – the plants of the oceans – use sunlight to photosynthesize and grow in a process called net primary production. They can only do this in the sunlit surface layer of the ocean, down to about 100 meters (330 feet). But they also need nutrients to grow, particularly nitrogen and phosphorus, which can be scarce in surface waters.
Phytoplankton are consumed by zooplankton (tiny animals), which in turn provide food for small fish, and so on all the way up the food chain to top predators like dolphins and sharks. Unconsumed phytoplankton and other organic matter, such as dead zooplankton and fish, decompose in surface waters, releasing nutrients that support new phytoplankton growth.
Some of this material sinks down into the deeper ocean, providing food for deep sea ecosystems. Carbon, nitrogen, phosphorus and other nutrients in this sinking organic matter ultimately are decomposed and released at depth.
This process, which is known as the biological pump, continually removes nutrients from surface waters and transfers them to the deeper ocean. Under normal conditions, winds and currents cause mixing that eventually brings nutrients back up to the sunlit surface waters. If this did not happen, the phytoplankton eventually would completely run out of nutrients, which would affect the entire ocean food chain.
Sea ice, winds and nutrient upwelling
Nutrients that sink to the deep ocean eventually return to the surface mainly in the Southern Ocean around Antarctica. North of Antarctica, strong westerly winds push surface waters away from Antarctica. As this happens, deep ocean waters that are rich in nutrients rise up to the surface all around Antarctica, replacing the waters that are being pushed away. The zone where this upwelling occurs is called the Antarctic Divergence.
Today there isn’t a lot of phytoplankton growth in the Southern Ocean. Heavy sea ice cover prevents much sunlight from reaching the oceans. Concentrations of iron (another key nutrient) in the water are low, and cold water temperatures limit plankton growth rates. As a result, most nitrogen and phosphorus that upwells in this area flows northwards in surface waters. Eventually, when these nutrients reach warmer waters throughout the lower latitudes, they support plankton growth over most of the Pacific, Indian and Atlantic oceans.
Trapping nutrients in the deep ocean
Our study demonstrated that sustained, multicentury global warming could short-circuit this process, leaving all ocean areas to the north of this Antarctic zone increasingly starved for nitrogen and phosphorus.
We used a climate model simulation that assumed nations continued to use fossil fuels until global reserves were exhausted. This climate path would raise mean surface air temperature by 9.6 degrees Celsius (17.2 degrees Fahrenheit) by 2300, nearly 10 times the warming above pre-industrial levels that has occurred to date. Scientists already know that the poles are warming faster than the rest of the planet, and in this scenario that pattern continues. Eventually the oceans would no longer freeze over near the poles, even in winter.
Warmer ocean waters without sea ice, aided by shifts in winds that are also driven by strong climate warming, would greatly improve growth conditions around Antarctica for phytoplankton. This increased growth would trap nutrients that well up near Antarctica, preventing them from flowing northwards and supporting low-latitude ecosystems worldwide.
In our simulation, these trapped nutrients eventually mix back to the deep ocean and accumulate there. Nitrogen and phosphorus concentrations in the upper 1,000 meters (3,300 feet) of the ocean steadily decrease. In the deep ocean, below 2,000 meters, they steadily increase.
Far fewer fish
As marine ecosystems become increasingly nutrient-starved, phytoplankton growth and net primary production throughout most of the world’s oceans would decline. We estimate that as these impacts ripple up the food chain, global fish catches could be reduced 20 percent by 2300, with decreases of more than 50 percent across the North Atlantic and several other regions. Moreover, at the end of our simulation net transfer of nutrients to the deep ocean was still taking place, which suggests that ecosystem productivity and potential fisheries catch would decline even further beyond 2300.
Eventually, after more than a thousand years, most of the carbon dioxide that human activities have added to the atmosphere will be absorbed by the oceans, and the Earth’s climate will cool back down. Sea ice will return to polar oceans, suppressing phytoplankton growth around Antarctica and allowing more upwelled nutrients to flow north once again to lower latitudes. But even then, it will take centuries more for ocean circulation to fully replenish nutrients in the upper ocean.
Ocean resources are already stressed today. About 90 percent of the world’s marine fisheries are fully fished or overfished. World population is projected to increase from 7.3 billion in 2015 to 11 billion in 2100. The impacts that we found in our study would have serious implications for global food security. Expanding aquaculture, or even more drastic steps such as directly fertilizing the oceans to spur plankton growth, would not even come close to making up for the loss of nutrients to the deep ocean driven by sustained global warming.
Our simulation was based on a strong climate warming scenario. More research is needed to explore just how warm the climate has to get to melt sea ice and initiate Southern Ocean nutrient trapping. But clearly this is a tipping point that we don’t want to cross.
Jefferson Keith Moore receives funding from the National Science Foundation and the U.S. Department of Energy.
Residents of major U.S. cities are becoming used to seeing docks for bike sharing programs nestled into parking spaces or next to subway station entrances. Adorned with stylish branding and corporate sponsors’ logos, these facilities are transforming transportation in cities across the country.
The modern concept of bike sharing – offering bikes for short-term public rental from multiple stations in cities – was launched in Copenhagen in 1995, but U.S. cities only started piloting their own systems in the past decade. Washington D.C. led the way, launching SmartBike DC in 2008 and an expanded network called Capital Bikeshare in 2010. This program now boasts over 480 stations and a daily ridership of 5,700.
Within a few years, bike-share systems launched in Boston, New York, Chicago, San Francisco, Seattle and dozens of other cities. In 2016 there were 55 systems across the country with a total of over 40,000 bikes.
And momentum continues to grow. In 2017 Citi Bike in New York City added 2,000 bikes, increasing its fleet to a total of 12,000. San Francisco is expanding its system from just 700 bikes to 7,000, thanks to a sponsorship deal with Ford.
The newest twist in this rapid expansion is dockless bike sharing, which lets users park bikes anywhere within defined districts and lock and unlock their bikes with smartphone apps. Users don’t have to locate docking stations or worry about whether space will be available at their destination. These systems also are cheaper to set up, so providers can charge lower user fees. Some dockless bike-share companies offer rides for as little as US$1 for the first half hour.
Dockless systems are also helping to address equity issues posed by public dock-based systems, which often are located in more affluent and predominantly white urban neighborhoods. Because dockless systems don’t require stations, they can be rapidly deployed in zones that dock-based systems may be slow to reach.
Students at Peking University developed this approach in 2014 to improve campus mobility. Dockless bike-share companies have flooded Chinese cities with bikes in the past two years, leading to massive piles of discarded bicycles in public spaces.
Seattle turned to dockless companies to fill the gap after a publicly funded dock bike-share system there failed in 2016. The city could soon have one of the largest bike-share systems in the country. Cities around Boston that are outside of the service area of Hubway, the area’s public bike-share system, just reached a deal to provide dockless bike-share service, expanding access to hundreds of thousands of people. And in San Francisco, Uber recently purchased Jump Bikes, a dockless electric bike-share startup, and soon will allow users to reserve electric bikes with their Uber app.
If recent examples are any indication, bike sharing in the United States will be a mix of complementary dock-based and dockless systems, run by both public entities and private companies. The humble bicycle, aided by smartphone technology, is resurging as an urban transportation option.
Douglas Johnson works for Howard Stein Hudson, a planning and engineering firm that includes bike share companies among its clients.
For many Americans, the financial crisis that plunged the global economy into recession a decade ago may seem like a distant memory.
Household net worth – the difference between assets and debts – reached a record US$98.7 trillion in the last quarter of 2017, up from $56.2 trillion in 2008.
Yet net wealth, by itself, masks a lot of information that could signal troubling trends. For example, this measure doesn’t tell us which households are getting richer. It also doesn’t reveal how much borrowing is fueling these ostensibly swelling balance sheets.
More specifically, it doesn’t show that for households headed by women, particularly poorer ones, the financial picture is still very cloudy. That’s in part because, as my soon-to-be-published research shows, low-income single women borrowed a lot more than single men in the years leading up to the crisis. And their indebtedness relative to their income and wealth remains far more elevated than is the case for pretty much everyone else.
This is especially worrying because female-headed households are vulnerable to begin with – and so are at risk again if another crisis looms on the horizon.
Why debt matters
To understand why debt is so integral to household financial health, it’s helpful to look at what happened during the 2008 financial crisis.
Overall household debt grew dramatically in the early 2000s, driven in large part by the subprime mortgage boom. This borrowing eventually reached levels that proved to be unsustainable and, after interest rates began rising in 2004, forced millions into foreclosure.
While things have recovered, the significant gains in net worth are illusory, in part because they have gone disproportionately to the richest households. Moreover, they have been financed through a lot more borrowing.
Total household debt reached a record $13.15 trillion at the end of 2017, up about $2 trillion since the most recent trough in 2013. Nonhousing debt like credit cards and student loans made up most of the increase.
To understand why net worth is misleading, consider two households with identical net worth of $10,000: One has $15,000 of assets and $5,000 of debts, while the other has $10,000 of assets and no debts.
Whether the $5,000 turns out to be unsustainable or not depends on the household’s ability to service the debt and pay down the principal. If its income becomes insufficient, the debt will accumulate, and eventually the family will have less and less money for the necessities of life – as occurred during the financial crisis.
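The two-household comparison above can be sketched in a few lines of code. This is a toy illustration using the article's own numbers; the helper functions and the debt-to-asset ratio are illustrative additions, not measures the author uses.

```python
# Two hypothetical households with identical net worth but different leverage.

def net_worth(assets, debts):
    """Net worth is simply assets minus debts."""
    return assets - debts

def debt_to_asset_ratio(assets, debts):
    """A rough gauge of how leveraged a household's balance sheet is."""
    return debts / assets

# Household A is leveraged; Household B is debt-free.
a_assets, a_debts = 15_000, 5_000
b_assets, b_debts = 10_000, 0

# Both report the same $10,000 net worth...
assert net_worth(a_assets, a_debts) == net_worth(b_assets, b_debts) == 10_000

# ...but only Household A must service debt out of its income.
print(debt_to_asset_ratio(a_assets, a_debts))  # ≈ 0.33
print(debt_to_asset_ratio(b_assets, b_debts))  # 0.0
```

The point of the sketch is that net worth alone collapses away exactly the information (the $5,000 of debt) that determines how fragile Household A is if its income falls.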
Sustainable debt can quickly become unsustainable if a household suffers what economists call a “shock,” or any unexpected change to the family’s ability to make ends meet, like losing a job or caring for a sick relative. And some households are more vulnerable, or financially fragile, than others.
Unpredictable shocks can push such households over the edge.
The feminization of poverty
Female household heads are particularly vulnerable to shocks because of their greater economic insecurity, and they may be more likely to rely on high-cost borrowing to make ends meet.
For a start, single women’s median wealth is one-third that of single men. And single women – mothers in particular – have more frequent and longer poverty spells and higher unemployment rates than other households. They also experience high levels of economic risk from shocks such as divorce and unexpected care obligations. On top of all this, the social safety nets such as federal welfare programs that used to support female-headed households have been weakened.
Economists have also pointed to evidence of a “feminization of high-cost credit,” particularly among women of color. That’s because low-income single women’s economic vulnerability and historically limited access to traditional credit products have made them targets for predatory subprime lending. In a 2006 sample of mortgage borrowers, more than half of mortgages held by black single women were subprime, compared with 28 percent for non-Hispanic white single male borrowers.
Pushed into the red
My research, which will be published in the Forum for Social Economics, shows that female-headed households experienced a concerning increase in two major forms of borrowing leading up to the financial crisis: mortgage and educational debt.
Controlling for other household characteristics such as household size and marital status, I examined differences in the growth of average mortgage and student debt among single female- and male-headed households in three time periods: the late 1990s, the credit expansion of 2002 to 2007, and the post-crisis period of 2008 to 2013. I also compared differences between incomes below and above the median, which varied from $24,000 in 1995 to $35,000 in 2007.
My most significant finding is that average mortgage debt for households headed by lower-income unmarried, divorced or widowed women increased substantially during the credit expansion – rising from about $9,800 to $16,600 after adjusting for other household characteristics – while similar households led by single men showed no statistically significant change during the period. This gender gap persisted during the recovery; debt for men and women changed very little through 2013.
One explanation is that lenders saw poorer single women – and women of color in particular – as a largely untapped market in their rush to originate all the high-interest loans that they could. Other research has found that women were more likely than men to receive subprime mortgages.
In terms of student debt, I found that the average single woman borrowed an extra $2,000 or so during the lead-up to the crisis, compared with an increase of only $775 for men. This was particularly prevalent among younger single women. After the crisis, when many people went back to school because there were so few jobs, female-headed households increased their student debt by an additional $3,400 on average, while men borrowed an additional $2,800.
One reason for this is likely that single mothers are overrepresented at for-profit colleges, where students are three times more likely than their peers at nonprofit universities to hold costly private loans. Another is that more women were studying at college.
One important caveat: My data show only averages over time, not how the fortunes of particular borrowers changed. In other words, I can show trends, but not whether individual households are in fact better or worse off than they were at different points in time.
Wealth and financial fragility
Of course, debt isn’t always a bad thing. Many households use debt to acquire assets to improve their financial situation down the road.
Homeownership is an important way to build wealth, so it’s not altogether a bad thing that a record share of unmarried women owned their own homes in 2006. Similarly, educational investments lead to long-run payoffs that far exceed tuition costs: Someone with a college degree is estimated to earn one and a half times as much as a high school graduate.
Still, there are good reasons to question whether all that pre-crisis borrowing really improved households’ financial health. In my own research, I found that lower-income women’s debt-to-wealth ratio doubled from the late 1990s to 2013. It turns out, the wealth created by the surge in female homeowners simply vanished when the housing bubble popped.
Today, as borrowing again crescendos, there are good reasons to worry that the next bursting of a debt-driven bubble is right around the corner. And when it happens, once again many low-income single women and their dependents will be among the worst hit.
Melanie G. Long does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Around the world, however, most employed women automatically get paid maternity leave. And in most wealthy countries, they also have access to affordable child care.
These holes in the national safety net are a problem for many reasons, including one I’ve been researching with my colleagues for years: Paid parental leave and child care help women stay in the workforce and earn higher wages over time. This lack of parental leave and child care may explain why the U.S. is no longer a leader in women’s workforce participation.
The U.S. is one of only five countries worldwide that do not mandate paid maternity leave. The other four are the low-income nations of Lesotho, Liberia, Papua New Guinea and Swaziland.
Paid leave, which typically lasts at least 14 weeks, needs to be designed thoughtfully. When women can and do take two or even three years off after having a baby, as they may in Hungary, long leaves can limit mothers’ work experience and lead to discrimination.
Denmark offers what I think is a strong example. There, moms get almost 18 weeks of paid maternity leave and dads get two weeks of paid paternity leave. On top of that, couples get up to a total 32 weeks of parental leave, which parents can split. This policy grants parents both the time and resources necessary to care for children, without “mommy tracking” mothers.
In many wealthy countries, child care and preschool are considered a mainstay of the educational system. But in the U.S., only about half of all children between the ages of 3 and 6 are getting publicly supported child care of any kind, including kindergarten, versus 99 percent of kids that age in France.
Interestingly, high-quality early childhood education programs are associated with many excellent outcomes for children from lower-income families: higher graduation rates, along with lower rates of teen pregnancy and juvenile crime.
In other words, when governments invest in child care and maternity leave, it fosters a more productive, healthy and creative workforce.
Joya Misra has received funding from the National Science Foundation and the Washington Center for Equitable Growth.
Cuba has a new president – and for the first time in six decades, his last name is not Castro.
Cuba’s National Assembly has elected First Vice President Miguel Díaz-Canel to replace 87-year-old Raúl Castro, who took over as the country’s leader in 2006 after his brother Fidel Castro fell ill.
Raúl Castro stepped down in observance of the two-term limit for senior government and party officials that he himself mandated in 2011. In so doing, he opened the door not just for a new president but for a generational transition in Cuba.
This is one of the most important moments I’ve seen in 40 years of studying and writing on Cuba.
Díaz-Canel faces real challenges. Cuba’s economy is weak, relations with Washington are deteriorating and internet expansion on the Communist island has produced a growing chorus of domestic critics.
Who can fill Castro’s shoes?
The political rise of 57-year-old Díaz-Canel represents the final stage of a transfer of power away from the “historic generation” that waged Cuba’s 1959 revolution. The charisma of Fidel Castro, who died in 2016, was for decades a pillar of Cuba’s regime.
Díaz-Canel – a trained engineer who worked his way up from local party leader to first vice president – will have to earn his authority through performance.
Those who have followed his career say Díaz-Canel is a seasoned, pragmatic politician. As a Communist official in his home province of Villa Clara in the 1990s, when Cuba suffered a prolonged economic depression, he rode his bicycle to work rather than take a car and driver.
He appears ill at ease with large audiences but relaxed and congenial in small groups – much like his mentor, Raúl Castro.
As president, Díaz-Canel will still benefit from Raúl Castro’s experience and authority. Castro remains first secretary of the Communist Party – Cuba’s only party – until 2021.
This is arguably a post more powerful than the presidency. The party leadership makes all major economic, social and foreign relations policies, which the president is obliged to carry out.
So I don’t expect any drastic changes in direction from Díaz-Canel – at least, not right away.
What’s in store for Cuba
This political transition is still significant, though. For the first time, the leader of the Communist Party and the leader of the government are different people. Both Fidel and Raúl Castro held both positions simultaneously.
Cuba must now sort out the lines of authority between party and state. As Díaz-Canel staffs government ministries with his own team, he will gain ever more control over how policy is interpreted and implemented.
He will immediately face some tough issues. Cuba’s economy is struggling, dragged down by the dual-currency system Fidel Castro adopted in 1994 to attract cash remittances from Cuban expats.
Díaz-Canel will also face pressure to reinvigorate the Cuban economy by pushing ahead with the controversial economic reform program launched by Raúl Castro early in his tenure, which loosened restrictions on private enterprise and enabled foreign investment in Cuba.
The pace of change has since slowed, frustrating Cubans. If Díaz-Canel opens up Cuba’s economy too quickly, he’ll alienate Communist Party conservatives. Going too slowly will anger reformers.
Another contentious issue is freedom of expression. Public criticism of the Cuban regime has grown as more citizens connect to the internet. Last year, hard-liners launched a campaign vilifying critical bloggers, which – to many onlookers’ surprise – Díaz-Canel supported.
Other prominent Cubans pushed back, though, and the campaign ended without any of the targeted web sites being closed down.
Raúl Castro has balanced conflicting factions with a delicate strategy he described as reform “without haste, but without pause.” Díaz-Canel must now demonstrate he, too, can manage these conflicts.
US-Cuba relations in flux
Finally, the new president has to deal with the mercurial U.S. administration. President Donald Trump has largely outsourced Cuba policy to conservative Cuban-Americans in Congress, led by Sen. Marco Rubio, a Republican from Florida.
In October, Trump further battered bilateral ties by downsizing the American Embassy in Cuba after U.S. government personnel suffered unexplained health problems there. He also expelled 17 Cuban diplomats from Washington.
Recent Trump appointments do not bode well for the future of U.S.-Cuban relations. The incoming secretary of state, Mike Pompeo, was a vocal opponent of Obama’s rapprochement with Havana. National security adviser John Bolton once deemed Cuba part of an “axis of evil,” falsely accusing it of developing biological weapons.
Anticipation and trepidation
In December, I was in Havana, a city where the benefits of Raúl Castro’s economic reforms are most tangible. Cubans I spoke with there seemed ready for younger leadership and excited about the impending power transition.
But 80 percent of Cubans have always had a Castro as their president. So the anticipatory mood is leavened by trepidation: People fear that instability may accompany this major political change.
If Díaz-Canel can deliver on the economy – the top priority for most Cubans – he’ll be judged a success. If not, he will face a rising tide of discontent from a population impatient for change.
William M. LeoGrande is the co-author with Peter Kornbluh of the 2015 book "Back Channel to Cuba: The Hidden History of Negotiations between Washington and Havana."
Would you get on a plane that didn’t have a human pilot in the cockpit? Half of air travelers surveyed in 2017 said they would not, even if the ticket was cheaper. Modern pilots do such a good job that almost any air accident is big news, such as the Southwest engine disintegration on April 17.
But stories of pilot drunkenness, rants, fights and distraction, however rare, are reminders that pilots are only human. Not every plane can be flown by a disaster-averting pilot, like Southwest Capt. Tammie Jo Shults or Capt. Chesley “Sully” Sullenberger. But software could change that, equipping every plane with an extremely experienced guidance system that is always learning more.
In fact, on many flights, autopilot systems already control the plane for nearly the entire trip. And software handles the most harrowing landings – those in which visibility is so poor that the pilot cannot see the runway. But human pilots are still on hand as backups.
A new generation of software pilots, developed for self-flying vehicles, or drones, will soon have logged more flying hours than all humans have – ever. By combining their enormous amounts of flight data and experience, drone-control software applications are poised to quickly become the world’s most experienced pilots.
Drones that fly themselves
When drones were first introduced, they were flown remotely by human operators. However, this merely substitutes a pilot on the ground for one aloft. And it requires significant communications bandwidth between the drone and control center, to carry real-time video from the drone and to transmit the operator’s commands.
Many newer drones no longer need pilots; some drones for hobbyists and photographers can now fly themselves along human-defined routes, leaving the human free to sightsee – or control the camera to get the best view.
University researchers, businesses and military agencies are now testing larger and more capable drones that will operate autonomously. Swarms of drones can fly without needing tens or hundreds of humans to control them. And they can perform coordinated maneuvers that human controllers could never handle.
Whether flying in swarms or alone, the software that controls these drones is rapidly gaining flight experience.
Importance of pilot experience
Experience is the main qualification for pilots. Even a person who wants to fly a small plane for personal and noncommercial use needs 40 hours of flying instruction before getting a private pilot’s license. Commercial airline pilots must have at least 1,000 hours before even serving as a co-pilot.
On-the-ground training and in-flight experience prepare pilots for unusual and emergency scenarios, ideally to help save lives in situations like the “Miracle on the Hudson.” But many pilots are less experienced than “Sully” Sullenberger, who saved his planeload of people with quick and creative thinking. With software, though, every plane can have on board a pilot with as much experience – if not more. A popular software pilot system, in use in many aircraft at once, could gain more flight time each day than a single human might accumulate in a year.
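The scale argument above can be made concrete with a back-of-the-envelope calculation. Every figure here is an illustrative assumption (the fleet size and daily utilization are invented for the sketch); the only grounded number is that U.S. regulations cap commercial airline pilots at roughly 1,000 flight hours per year.

```python
# Rough comparison: fleet-wide "experience" of a shared software pilot
# versus one human pilot. All inputs are illustrative assumptions.

FLEET_SIZE = 1_000          # aircraft running the same software (assumed)
HOURS_PER_AIRCRAFT_DAY = 8  # average daily flight time per aircraft (assumed)
HUMAN_HOURS_PER_YEAR = 900  # near the ~1,000-hour annual regulatory cap

# Hours the shared software system accumulates across the fleet in one day.
software_hours_per_day = FLEET_SIZE * HOURS_PER_AIRCRAFT_DAY
print(software_hours_per_day)  # 8000

# Fraction of a day the fleet needs to match a human pilot's full year.
print(HUMAN_HOURS_PER_YEAR / software_hours_per_day)  # well under one day
```

Under these assumptions the fleet logs a human pilot's entire annual flight time in a couple of hours, which is the sense in which a widely deployed software pilot could out-accumulate any individual human.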
As someone who studies technology policy as well as the use of artificial intelligence for drones, cars, robots and other uses, I don’t lightly suggest handing over the controls for those additional tasks. But giving software pilots more control would maximize computers’ advantages over humans in training, testing and reliability.
Training and testing software pilots
Unlike people, computers will follow sets of instructions in software the same way every time. That lets developers create instructions, test reactions and refine aircraft responses. Testing could make it far less likely, for example, that a computer would mistake the planet Venus for an oncoming jet and throw the plane into a steep dive to avoid it.
The most significant advantage is scale: Rather than teaching thousands of individual pilots new skills, updating thousands of aircraft would require only downloading updated software.
These systems would also need to be thoroughly tested – in both real-life situations and in simulations – to handle a wide range of aviation situations and to withstand cyberattacks. But once they’re working well, software pilots are not susceptible to distraction, disorientation, fatigue or other human impairments that can create problems or cause errors even in common situations.
Rapid response and adaptation
Already, aircraft regulators are concerned that human pilots are forgetting how to fly on their own and may have trouble taking over from an autopilot in an emergency.
In the “Miracle on the Hudson” event, for example, a key factor in what happened was how long it took for the human pilots to figure out what had happened – that the plane had flown through a flock of birds, which had damaged both engines – and how to respond. Rather than the approximately one minute it took the humans, a computer could have assessed the situation in seconds, potentially saving enough time that the plane could have landed on a runway instead of a river.
Aircraft damage can pose another particularly difficult challenge for human pilots: It can change what effects the controls have on its flight. In cases where damage renders a plane uncontrollable, the result is often tragedy. A sufficiently advanced automated system could make minute changes to the aircraft’s steering and use its sensors to quickly evaluate the effects of those movements – essentially learning how to fly all over again with a damaged plane.
Boosting public confidence
The biggest barrier to fully automated flight is psychological, not technical. Many people may not want to trust their lives to computer systems. But they might come around when reassured that the software pilot has tens, hundreds or thousands more hours of flight experience than any human pilot.
Other autonomous technologies, too, are progressing despite public concerns. Regulators and lawmakers are allowing self-driving cars on the roads in many states. But more than half of Americans don’t want to ride in one, largely because they don’t trust the technology. And only 17 percent of travelers around the world are willing to board a plane without a pilot. However, as more people experience self-driving cars on the road and have drones deliver them packages, it is likely that software pilots will gain in acceptance.
The airline industry will certainly be pushing people to trust the new systems: Automating pilots could save tens of billions of dollars a year. And the current pilot shortage means software pilots may be the key to having any airline service to smaller destinations.
Both Boeing and Airbus have made significant investments in automated flight technology, which would remove or reduce the need for human pilots. Boeing has actually bought a drone manufacturer and is looking to add software pilot capabilities to the next generation of its passenger aircraft. (Other tests have tried to retrofit existing aircraft with robotic pilots.)
One way to help regular passengers become comfortable with software pilots – while also helping to both train and test the systems – could be to introduce them as co-pilots working alongside human pilots. Planes would be operated by software from gate to gate, with the pilots instructed to touch the controls only if the system fails. Eventually pilots could be removed from the aircraft altogether, just like they eventually were from the driverless trains that we routinely ride in airports around the world.
Jeremy Straub is the associate director of the NDSU Institute for Cyber Security Education and Research. He has received funding related to AI and robotics from the North Dakota State University, the NDSU Foundation and Alumni Association, the U.S. National Science Foundation, the University of North Dakota and Sigma Xi. He is the lead inventor on a patent-pending technology for autonomous control of robots, UAVs and spacecraft. The views presented are his own and do not necessarily represent the views of NDSU or funding agencies.
President Donald Trump recently said he was open to returning to the Trans-Pacific Partnership, but only if he could get a “substantially better” deal than his predecessor.
This apparent change of heart, announced via Twitter, caught most observers off guard. The TPP was on track to become the world’s largest free trade zone by joining Pacific Rim countries that collectively produce about 40 percent of global economic output. But Trump railed against the accord on the campaign trail, making it the ultimate bugbear for his brand of economic nationalism. In a widely anticipated move, he withdrew the U.S. from the TPP as one of his first presidential acts.
If Trump ever officially changed his tune and tried to rejoin this trade pact, could he?
Like many observers, I believe it would be tough to pull off. The other 11 countries would clearly prefer to have the U.S. in rather than out, but they are understandably reluctant to throw open, for a third time, negotiations that took years to conclude.
In 2008, most of the major Pacific Rim economies – with the notable exception of China – began to consider a massive free trade agreement for the region.
Formal TPP talks finally began two years later, when representatives of the U.S. and several other Pacific nations, such as Australia, Chile and Vietnam, started to hammer out the pact’s contentious details.
The deal, which took another six years to complete, later expanded to include more countries – including Japan, Canada and Mexico.
The aim of the TPP was to deepen economic ties between the dozen countries, slash tariffs on a broad range of goods and services, and better synchronize their policies and regulations. The substance of the agreement was complex, and different countries negotiated different grace periods for its implementation.
TPP proponents like me based our support on well-established economic theories, which point to the benefits of barrier-free trade for all participating countries. These theories do not deny, of course, that some industries and workers can suffer significantly from open exchange. But they emphasize the overall advantages of freer trade in generating new jobs, cheaper products and more innovation.
Despite its potential benefits, however, the emerging partnership soon became a lightning rod for U.S. opponents of open markets.
The critics lodged three distinct complaints. They expressed skepticism about the benefits of free trade itself, arguing that imports can destroy industries, uproot communities and threaten national security. They also argued that international agreements undermine democracy, and they objected to the secrecy of the negotiations themselves.
Finally, opponents homed in on the pact’s specific details, especially those that were leaked or released early on. The most controversial issues proved to be indirectly related to trade policy.
TPP foes, for example, lambasted provisions regarding intellectual property, labor and the environment. Some critics argued that these rules went too far, while others complained that they didn’t go far enough.
Many of them also vehemently opposed its investor-state dispute settlement provisions, which would have let foreign businesses sue member governments for any violations that they claimed were hurting their interests.
Despite this opposition in the U.S. and elsewhere, the 12 nations ultimately signed the TPP in February 2016 and began the process of domestic ratification. But Trump was elected later that year, and he backed out of the deal as soon as he entered the White House.
Most observers expected America’s exit to doom the agreement. Instead, the 11 remaining signatories forged a smaller pact among themselves, renamed the Comprehensive and Progressive Agreement for Trans-Pacific Partnership and signed in March 2018. Lawmakers in the countries taking part are now considering ratification.
In any case, this bout of Trump’s apparent openness to rejoining the TPP seemed to be short-lived. It may have ended as it started: on Twitter. The pact would have “too many contingencies and no way to get out if it doesn’t work,” Trump said in a tweet that mischaracterized South Korea as a member. (It isn’t.)
Perhaps Trump realized that the U.S. would probably have to accept terms that are no better – and possibly worse – than those President Barack Obama agreed upon in 2016 when the TPP talks ended.
Charles Hankla does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
I am a respiratory disease physician and professor at the University of Pittsburgh School of Medicine, and I direct the COPD clinical and research programs. My research has been inspired by the real clinical problems of the lung disease patients I have worked with over the past 30 years.
COPD is a chronic respiratory condition that results in cough and shortness of breath, and it typically worsens over time. It affects up to 16 million people and is the third-leading cause of death in the United States, behind heart disease and stroke. It also accounts for 6 percent of all deaths worldwide.
The disease is most commonly caused by tobacco smoking and is thus often preventable. Mrs. Bush smoked cigarettes for decades, she wrote in her biography, but quit in 1968. One-fourth of cases occur in nonsmokers, in part due to other environmental exposures. COPD is often undiagnosed because of its slow onset. Also, people often assume that their coughing is just “smoker’s cough” or a sign of old age. Women are more likely than men to be diagnosed with COPD.
COPD includes several different conditions, including emphysema and chronic bronchitis. They can occur separately or together.
Normal lungs have bronchial tubes that branch like a tree into smaller and smaller tubes, which end in tiny elastic air sacs called alveoli. These fill up as we breathe in and snap back empty when we exhale.
In COPD, the airway tubes narrow due to inflammation, increased mucous production and, eventually, scarring, which is known as chronic bronchitis. Further, the walls of the alveoli can break down, so that the small air sacs coalesce into larger ones. This is known as emphysema. As a result, the air sacs do not snap back as easily when a person exhales, and they have less ability to transfer oxygen into, and remove carbon dioxide from, the blood.
These different processes result in a prolonged and incomplete exhalation, and air remains trapped in the lungs when the next breath begins. As the condition progresses, it becomes increasingly hard to breathe. This results in more fatigue, a decreasing ability to exercise, declining activity and a lower quality of life.
Many COPD patients develop recurrent chest colds, which often require hospitalization and drive up medical bills. Patients with COPD are also at greater risk of other chronic conditions such as heart disease, which can complicate diagnosis and management. For example, Mrs. Bush was reported to have had congestive heart failure.
Due to differences in genetics, not all people who smoke get COPD, and not all patients have the same symptoms or progress at the same rate. It is thus critical that people who have a prolonged cough or shortness of breath undergo lung function testing, particularly if they are smokers or former smokers.
The most important treatment for COPD is to stop smoking, but vaccinations, pulmonary rehabilitation, long-acting inhalers and surgical advances have led to improved quality of life, fewer hospitalizations and better survival for many patients with COPD.
Frank Sciurba has received funding from the National Institutes of Health, Department of Defense, Commonwealth of Pennsylvania, COPD Foundation and several pharmaceutical companies including GlaxoSmithKline and AstraZeneca Pharmaceuticals. He has served on the advisory boards of GSK, Boehringer-Ingelheim Pharmaceuticals, Inc., and Circassia.
Superman – the first, most famous American superhero – turns 80 this year.
The comics, toys, costumes and billion-dollar Hollywood blockbusters can all trace their ancestry to the first issue of “Action Comics,” which hit newsstands in April 1938.
Most casual comic book fans can recite the character’s fictional origin story: As the planet Krypton approaches destruction, Jor-El and his wife, Lara, put their infant son, Kal-El, into a spaceship to save him. He rockets to Earth and is taken in by the kindly Kents. As he grows up, Kal-El – now known as Clark – develops strange powers, and he vows to use them for good.
But the story of the real-life origins of Superman – a character created out of friendship, persistence and personal tragedy – is just as dramatic.
From villain to hero
When I was a kid growing up in Cleveland, my dad would regale my brother and me with stories of Superman’s local origins: The two men who had concocted the comic book hero had grown up in the area.
As I became older, I realized I wanted to understand not only how, but why Superman was created. A 10-year research project ensued, and it culminated in my book “Super Boys.”
In the mid-1930s, Jerry Siegel and Joe Shuster were two nerds with glasses who attended Glenville High School in Cleveland, Ohio. They worked on the school newspaper, wrote stories, drew cartoons, and dreamed of being famous. Jerry was the writer; Joe was the artist. When they finally turned to making comics, a publisher named Major Malcolm Wheeler-Nicholson gave them their first break, commissioning them to create spy and adventure comics in his magazines “New Fun” and “Detective Comics.”
But Jerry and Joe had been working on something else: a story about a “Superman” – a villain with special mental powers – that Jerry had stolen from a different magazine. They self-published it in a pamphlet titled “Science Fiction.”
While “Science Fiction” only lasted for five issues, they liked the name of the character and continued to work on it. Before long, their new Superman was a good guy. Joe dressed him in a cape and trunks like those of the era’s popular bodybuilders, modeled the character’s speedy running abilities after Olympic sprinter Jesse Owens, and gave him the bouncy spit-curl of Johnny Weissmuller, the actor who played Tarzan. It was a mishmash of 1930s pop culture in gladiator boots.
When they were finally ready, they started pitching Superman to every newspaper syndicate and publisher they could find.
All of them rejected it, some of them several times. This continued for several years, but the duo never gave up.
When Superman finally saw print, it was through a process that is still not wholly clear. But the general consensus is that a publisher named Harry Donenfeld, who had acquired the major’s company, National Allied Publications (the predecessor to DC Comics), bought the first Superman story – and all the rights therein – for US$130.
Was Jerry trying to create a Superdad?
The world was introduced to Superman in “Action Comics” No. 1, on April 18, 1938, with the Man of Steel appearing on the cover smashing a Hudson roadster. The inaugural issue cost 10 cents; in 2014, a copy in good condition sold for $3.2 million.
When the comic became a runaway hit, Jerry and Joe regretted selling their rights to the character; they ended up leaving millions on the table. Though they worked on Superman comics for the next 10 years, they would never own the character they created, and for the rest of their lives repeatedly filed lawsuits in an effort to get him back.
But there is another more personal piece to the puzzle of Superman’s origins.
On June 2, 1932, Jerry’s father, Michel, was about to close his secondhand clothing store in Cleveland when some men walked in. Michel caught them trying to steal a suit, and ended up dying on the spot – not in a hail of gunfire, but from a heart attack.
Jerry was 17.
Some believe Jerry may have created Superman as a fantasy version of his own father – as someone who could instantly transform from a mild-mannered man into a hero capable of easily overpowering petty thieves. Indeed, some of the early Superman stories feature Jor-El out of breath (as Michel often was from heart disease) and show criminals who faint dead away when confronted by Superman. As victims of childhood trauma often do, Jerry may have used Superman to re-enact his father’s tragic death over and over in an attempt to somehow fix it.
In Superman’s never-ending battle of good versus evil, this same story is repeated again and again on the page, in cartoons and in movies. It’s seen in kids who pretend to be Superman, tucking towels in at their neck and playing out battles in their backyards.
Why is Superman’s 80th birthday important? It isn’t just about celebrating a “funny book” about a guy who has heat vision and can fly. It’s about using fantasy to make sense of the world, plumbing personal tragedy to tell a story, and using art to envision a more just and safe society.
Brad Ricca does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
On 4/20, many across the U.S. gather to celebrate their love and appreciation for marijuana.
Polls show that 64 percent of Americans favor legalizing marijuana. But despite the majority support, there’s no clear consensus on how it should be regulated. As a researcher who has studied the impact of drugs in the U.S. and Mexico, I have found it captivating to watch states adapt as they attempt to regulate this illicit and stigmatized substance.
Many states permit medical marijuana, but their approaches vary widely. Today, 29 states permit medical marijuana and have established systems for regulating it.
Another 17 states have limited medical programs. These programs provide access to products with low levels of tetrahydrocannabinol (THC) and high levels of cannabidiol (CBD), with the goal of eliminating the “high” and maximizing medical benefits. Beyond that, the conditions doctors and patients can treat with cannabis vary from state to state.
Minnesota, New York and West Virginia don’t permit marijuana smoking as part of their medical programs. West Virginia, however, allows patients to vaporize marijuana plant matter, while Minnesota only permits consumption of marijuana in liquid extract form.
Colorado, where I am based, has a much more expansive medical program. Patients can access an array of products, from extracts to strains of raw plant material. While New York caps the amount of THC that a product dose may contain, Colorado and other states have no such limit on their medical marijuana products.
Meanwhile, recreational marijuana use has been approved for adults 21 and over by nine states: Alaska, California, Colorado, Maine, Massachusetts, Nevada, Oregon, Vermont and Washington, as well as the District of Columbia.
However, once again, states haven’t implemented their policies uniformly. Vermont, for example, does not currently have a system for commercial sale and distribution, and only allows individuals to cultivate two plants. Colorado, on the other hand, has developed a robust commercial system, allows individuals to grow up to six plants, and limits the amount of marijuana products an individual can possess.
Most states have struggled with how to navigate the public consumption of cannabis, which remains illegal. As states continue to debate and implement marijuana policies, the American public will begin to recognize what works (and what doesn’t).
While these policy inconsistencies may raise concerns for some constituents, these state experiments are a valuable way to figure out how this substance works and how it affects society.
Santiago Guerra is affiliated with Colorado Springs Medical Marijuana Working Group as a content expert.
Mushrooms are often considered only for their culinary use because they are packed with flavor-enhancers and have gourmet appeal. That is probably why they are the second most popular pizza topping, next to pepperoni.
In the past, food scientists like me often praised mushrooms as healthy because of what they don’t contribute to the diet: they contain no cholesterol or gluten and are low in fat, sugars, sodium and calories. But that was selling mushrooms short. They are very healthy foods and could have medicinal properties, because they are good sources of protein, B vitamins, fiber, immune-enhancing sugars found in the cell walls called beta-glucans, and other bioactive compounds.
Mushrooms have been used as food and sometimes as medicine for centuries. In the past, most medicinal use of mushrooms occurred in Asian cultures, while most Americans have been skeptical of the idea. However, as consumer attitudes shift away from pharmaceuticals as the only answer to healing, that seems to be changing.
I study the nutritional value of fungi and mushrooms, and my laboratory has conducted a great deal of research on the lowly mushroom. We have discovered that mushrooms may be even better for health than previously known. They can be excellent sources of four key dietary micronutrients that are all known to be important to healthy aging. We are even looking into whether some of these could be important in preventing Parkinson’s disease and Alzheimer’s disease.
Four key nutrients
Important nutrients in mushrooms include selenium, vitamin D, glutathione and ergothioneine. All are known to function as antioxidants that can mitigate oxidative stress and all are known to decline during aging. Oxidative stress is considered the main culprit in causing the diseases of aging such as cancer, heart disease and dementia.
Ergothioneine – “ergo” for short – is produced in nature primarily by fungi, including mushrooms. Humans cannot make it, so it must be obtained from dietary sources. There was little scientific interest in ergo until 2005, when pharmacology professor Dirk Grundemann discovered that all mammals make a genetically coded transporter that rapidly pulls ergo into the red blood cells. They then distribute ergo around the body, where it accumulates in tissues that are under the most oxidative stress. That discovery led to a significant increase in scientific inquiry about the possible role of ergo in human health. One study led the leading American scientist Dr. Solomon Snyder to recommend that ergo be considered a new vitamin.
In 2006, a graduate student of mine, Joy Dubost, and I discovered that edible cultivated mushrooms were extremely rich sources of ergo and contained at least 10 times the level in any other food source. Through collaboration with John Ritchie and post-doctoral scientist Michael Kalaras at the Hershey Medical Center at Penn State, we showed that mushrooms are also a leading dietary source of the master antioxidant in all living organisms, glutathione. No other food even comes close to mushrooms as a source of both of these antioxidants.
I eat mushrooms, ergo I am healthy?
Our current research is centered on evaluating the potential of ergo in mushrooms to prevent or treat neurodegenerative diseases of aging, such as Parkinson’s and Alzheimer’s. We based this focus on several intriguing studies conducted with aging Asian populations. One study conducted in Singapore showed that as people aged the ergo content in their blood declined significantly, which correlated with increasing cognitive impairment.
The authors suggested that a dietary deficiency of ergo might predispose individuals to neurological diseases. A recent epidemiological study conducted with over 13,000 elderly people in Japan showed that those who ate more mushrooms had less incidence of dementia. The role of ergo consumed with the mushrooms was not evaluated but the Japanese are known to be avid consumers of mushrooms that contain high amounts of ergo.
More ergo, better health?
One important question that has long needed an answer is how much ergo humans consume in their diets. A 2016 study attempted to estimate average ergo consumption in five different countries. I used its data to calculate the estimated amount of ergo consumed per day by an average 150-pound person and found that it ranged from 1.1 milligrams per day in the U.S. to 4.6 milligrams per day in Italy.
We were then able to compare estimated ergo consumption against mortality rate data from each country caused by the common neurological diseases, including Alzheimer’s, dementia, Parkinson’s disease and multiple sclerosis. We found, in each case, a decline in the death rates with increasing estimated ergo consumption. Of course, one cannot assume a cause and effect relationship from such an exercise, but it does support our hypothesis that it may be possible to decrease the incidence of neurological diseases by increasing mushroom consumption.
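The cross-country comparison described above can be sketched in a few lines of Python. The 1.1 mg/day (U.S.) and 4.6 mg/day (Italy) intake figures come from the text; all other intake values and every mortality rate below are hypothetical placeholders, not the study’s actual data, so the output merely illustrates the method of checking for a negative association.

```python
# Sketch of the intake-vs.-mortality comparison described above.
# Only the U.S. and Italy intake figures come from the article;
# everything else is a hypothetical placeholder for illustration.
intake_mg_per_day = {
    "USA": 1.1,      # from the article
    "Italy": 4.6,    # from the article
    "France": 3.0,   # hypothetical
    "Ireland": 2.0,  # hypothetical
    "Finland": 2.5,  # hypothetical
}
deaths_per_100k = {  # hypothetical neurological-disease mortality rates
    "USA": 45.0,
    "Italy": 20.0,
    "France": 30.0,
    "Ireland": 38.0,
    "Finland": 33.0,
}

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

countries = sorted(intake_mg_per_day)
r = pearson_r([intake_mg_per_day[c] for c in countries],
              [deaths_per_100k[c] for c in countries])
# A negative r means higher estimated intake goes with lower mortality,
# which is the pattern the article reports. Correlation is not causation.
print(f"Pearson r = {r:.2f}")
```

As the article cautions, a negative correlation across five countries cannot establish cause and effect; the sketch only shows how such an ecological comparison is computed.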
If you don’t eat mushrooms, how do you get your ergo? Apparently, ergo enters the food chain through fungi in the soil, which pass it on to the plants grown there; animals then consume those plants. That pathway depends on healthy fungal populations in agricultural soils.
This led us to consider whether ergo levels in the American diet may be harmed by modern agricultural practices that might reduce fungal populations in soils. We began a collaboration with scientists at the Rodale Institute, who are leaders in the study of regenerative organic agricultural methods, to examine this. Preliminary experiments with oats have shown that farming practices that do not require tilling resulted in significantly higher ergo levels in the oats than with conventional practices, where tillage of the soil disrupts fungal populations.
In 1928 Alexander Fleming accidentally discovered penicillin produced from a fungal contaminant in a petri dish. This discovery was pivotal to the start of a revolution in medicine that saved countless lives from bacterial infections. Perhaps fungi will be key to a more subtle, but no less important, revolution through ergo produced by mushrooms. Perhaps then we can fulfill the admonition of Hippocrates to “let food be thy medicine.”
Robert Beelman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The recent dispute over whether Pope Francis denied the existence of hell in an interview attracted wide attention. This isn’t surprising, since the belief in an afterlife, where the virtuous are rewarded with a place in heaven and the wicked are punished in hell, is a core teaching of Christianity.
So what is the Christian idea of hell?
Origins of belief in hell
The Christian belief in hell has developed over the centuries, influenced by both Jewish and Greek ideas of the afterlife.
The earliest parts of the Hebrew Bible, around the eighth century B.C., described the afterlife as Sheol, a shadowy, silent pit where the souls of all the dead lingered in a minimal state of silent existence, forever outside of the presence of God. By the sixth century B.C., Sheol was increasingly viewed as a temporary place, where all the departed awaited a bodily resurrection. The righteous would then dwell in the presence of God, and the wicked would suffer in the fiery torment that came to be called “Gehenna,” described as a cursed place of fire and smoke.
Early depictions of the afterlife in ancient Greece, an underworld realm called “Hades,” are similar. There, the listless spirits of the dead lingered in an underground twilight existence, ruled by the god of the dead. Evildoers suffered gloomy imprisonment on an even deeper level called “Tartarus.”
Beginning in the fourth century B.C., after the Greek King Alexander the Great conquered Judea, elements of Greek culture began to influence Jewish religious thought. By the time of the first gospels, between 65 and 85 A.D., Jesus refers to the Jewish belief in the eternal fire of Gehenna. Elsewhere, he mentions evildoers’ banishment from the kingdom of God, and the “blazing furnace” where the wicked would suffer sorrow and despair and “where there will be weeping and gnashing of teeth.” Jesus also mentions the Greek Hades when describing how the forces of evil – “the gates of Hades” – would not prevail against the church.
Medieval ideas of hell
In early Christianity, the fate of those in hell was described in different ways. Some theologians taught that eventually all evil human beings and even Satan himself would be restored to unity with God. Other teachers held that hell was an “intermediate state,” where some souls would be purified and others annihilated.
The image that dominated in antiquity eventually prevailed. Hell was where the souls of the damned suffered torturous and unending punishment. Even after the resurrection of the dead at the end of the world, the wicked would be sent back to hell for eternity.
By the beginning of the fifth century, this doctrine was taught throughout western Christianity. It was reaffirmed officially by popes and councils throughout the Middle Ages.
Medieval theologians continued to stress that the worst of all these torments would be eternal separation from God, the “poena damni.” Medieval visions of the afterlife provided more explicit details: pits full of dark flames, terrible cries, gagging stench, and rivers of boiling water filled with serpents.
Perhaps the most vivid description of hell was offered by the Italian poet Dante at the beginning of the 14th century in the first section of his “Divine Comedy.” Here the souls of the damned are punished with tortures matching their sins. Gluttons lie in freezing pools of garbage, while murderers thrash in a river of boiling blood.
Hell is God’s absence
Today, these images seem to be part of a past that the 21st century has outgrown. However, the official textbook of Catholic Christianity, the “Catechism of the Catholic Church,” reaffirms the Catholic belief in the eternal nature of hell. It omits the gory details found in earlier attempts to describe the hellish experience, but restates that the chief pain of hell is eternal separation from God.
The Vatican insisted that the pope was misquoted by the journalist. But theologians have pointed out that Pope Francis has stressed the reality of hell several times in recent years. Indeed, for today’s Catholics at least, hell still means the hopeless anguish of God’s absence.
Joanne M. Pierce is a Roman Catholic member of the Anglican-Roman Catholic Consultation in the USA, a national ecumenical dialogue group sponsored by the United States Conference of Catholic Bishops and The Episcopal Church.
For nearly 50 years, Earth Day has provided an opportunity for people across the globe to come together and rally in support of the natural world. While the specific challenges have varied, the goal has remained more or less the same: to protect the rich, biological world that the current generation has inherited from being overwhelmed by the influences of humanity.
While there have been many notable successes since this day of celebration began in 1970, the overall trajectory has not been uplifting.
Today you can travel to the furthest part of the Arctic Ocean, to the highest point of the Caucasus Mountains, to the remotest spot in the Australian outback and find the unmistakable signs of human activity. Chemical and industrial traces are now present in every pinch of soil and every drop of water. Transported by high-altitude atmospheric winds, millennia-old patterns of precipitation, and the tire treads of fossil-fuel-powered vehicles, the imprints of humanity reach all corners of Earth.
These kinds of global impacts demand a fundamental shift in the relationship between humans and the surrounding world. Despite the efforts of those who have marched passionately and religiously on Earth Day, we live in the age when “pristine nature” has permanently blinked out of existence.
Many are suggesting that humanity should mark this moment by declaring that the planet has entered the new epoch of the Anthropocene. The fact that our species has left its mark in every remote bay, on every mountaintop, and across every continent is certainly a cause for reflection. But it might also be seen as a dubious form of branding to celebrate the mess our species has created by naming the next epoch in our honor.
More urgent than getting the name right, however, is the need to think very carefully about where to head from here. For the most noteworthy aspect of the emerging epoch is not the fact that human influence has reached every corner of the entire planet. It is the fact that, as Earth Day approaches 50, technologies are coming online with unprecedented capacity to remake the natural world.
Nanotechnology, synthetic biology and climate engineering have the potential to transform an already tainted planet into an increasingly synthetic whole. Such powerful technologies do not just mark a new period in Earth’s ongoing history. They create the real possibility of what I call a “Synthetic Age.” From the atom to the atmosphere, key planetary processes have the potential to be reconfigured by Earth’s most audacious species.
By shrinking common materials down to the scale of billionths of a meter, nanotechnologists can make available new forms of matter with highly unusual and extremely valuable properties. Using new techniques for editing and assembling DNA, synthetic biologists can fabricate whole genomes, which they can insert into bacterial hosts to hijack their operation. Ecosystem engineers are on the point of redesigning targeted species by sending genetic traits through wild populations, using tools known as gene drives. Climate engineers are preparing to field test technologies that can reduce the amount of short-wave solar radiation entering the atmosphere to cool global temperatures.
What makes these sorts of technologies and practices different from anything that has come before is not how far they reach geographically, but how deeply they go “metabolically.” They mark the beginning of a new period of Earth’s history in which humanity starts to take control of the processes responsible for giving the planet its shape. The biological, geological and atmospheric forces that have sculpted the world over countless epochs start to become the products of human endeavor. Responsibility for some of the formative processes of the biosphere falls increasingly into human hands.
De-extinction and outdesigning evolution
Take the prospect of recreating the genomes of extinct species as an example.
The gene-reading techniques developed during the Human Genome Project, the gene-synthesis methods being refined at places like the J. Craig Venter Institute, and the genome-editing practices now available through CRISPR-Cas9 are together on the cusp of making it possible to recreate close proxies of the genomes of species long ago extinguished from the Earth.
In mammals, it may not be long before a rebuilt genome can be inserted into the evacuated nucleus of an egg cell from a related species and implanted into the womb of a surrogate parent. A primitive version of such a technology was used on the (extinct) Pyrenean ibex in 2003, leading to the mildly disconcerting milestone of the birth of the world’s first extinct mammal.
In the event, the celebration of the resurrected ibex was cut short by lung deformities, which led to its death within minutes. It is not yet clear whether these types of genetic imperfections can be avoided in future. Some are optimistic that they can. If the technical obstacles are overcome, a genetically manipulated Pyrenean ibex or even a whole new ibex – call it Synthetic Ibex Version 2.0 – could be fashioned from the genes of the extinct animal to occupy the niche that had been left behind.
If de-extinction becomes possible, phenomena once uniquely responsible for shaping the biological world would move out of the natural realm and into the human domain. There would be a genuine alternative to the processes of inheritance, mutation, genetic drift, reproductive isolation and natural selection that were the grist for Darwin’s evolutionary mill. As Harvard chemist George Whitesides said, it would be “a marvelous challenge to see if we can outdesign evolution.”
Earth Day’s annual celebration of the natural world provides a perfect opportunity to reflect on such practices and to note how they put the whole idea of “nature” into question. It is not just that no part of the natural world will be untouched anymore. The natural world – and the processes that have formed it – might increasingly be replaced by synthetic substitutes.
The exact contours of this Synthetic Age are far from determined. There is still the opportunity to pause and to decide that certain physical, biological and atmospheric processes should remain free of human design. Some species might be deliberately left to continue their evolutionary odyssey unmolested. Some landscapes might be selected to remain entirely in the hands of ecological and entropic forces.
So let’s not miss a unique opportunity. On this Earth Day, recognition of the dawning of a new epoch is appropriate. But it is important not because the planet’s fate has already been sealed. It is important precisely because it provides an opportunity for a more conscious and self-reflective decision about the world humanity will choose to create.
Christopher J Preston has received funding from the National Science Foundation.
Twenty years ago, when I was a law student taking constitutional law, the Second Amendment did not even come up in class.
Today, as a law professor, I teach the Second Amendment as the very first case in my constitutional law class.
The emergence of the Second Amendment in law school classrooms is a lesson in the ways politics and society drive constitutional debates, breathing meaning into our Constitution.
The dormant amendment
The Second Amendment says, “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”
Before the court’s landmark 2008 decision in District of Columbia v. Heller, the last time the Supreme Court had addressed the Second Amendment was in a 1939 ruling, when it linked the right to possess a gun to “the preservation or efficiency of a well regulated militia.”
“Perhaps, at some future date,” Justice Thomas mused in 1997, “this Court will have the opportunity to determine whether” the Constitution guarantees a personal right to bear arms.
Heller: Microcosm of constitutional law
That opportunity came in spring of 2008, when the Supreme Court heard Washington, D.C., resident Dick Heller’s challenge to the city’s ban on keeping a handgun in his home.
In a much-anticipated opinion issued that June, the court ruled “the District’s ban on handgun possession in the home violates the Second Amendment.”
Justice Antonin Scalia wrote a textbook opinion for the court in what is known as the Heller case. Clearly written and filled with historical references, the opinion and its dissents demonstrate the various ways we argue about the Constitution in the United States. These include the text, structure and history of the Constitution itself, as well as the practices, policies and principles of the American people.
Whether one agrees or disagrees with the case’s result or reasoning, the reader will find a microcosm of American constitutional law in Heller. The case asks basic questions of constitutional meaning. And that is why Heller is such a useful tool in teaching constitutional law.
This is how we study the case in my classroom.
We begin by asking, what does the text mean?
The court begins by defining “the People” who possess the right as “all Americans,” comparing the term with similarly broad uses in the First and Fourth Amendments. It also defines “keep and bear arms” as possessing and carrying weapons generally, not only for military use.
Next, students explore how that text fits within the broader structure of the Constitution.
Here the court must explain how the introductory reference to “a well regulated Militia,” relates to federal power over the citizen militia in the states. It decides the militia was the main purpose of the right, but it does not limit the right to the militia.
Then we turn to history. Like nearly all of our constitutional rights, the Second Amendment is derived from earlier state constitutions, many of which expressed pre-existing rights in English law. The court noted how commentators at the time of framing the Constitution recognized “the right of having and using arms for self-preservation.” Meanwhile, some of the first states explicitly guaranteed an individual right to self defense; others adopted an individual right in the decades after the Second Amendment.
The class then leaves the drafting of the Second Amendment itself, and looks at subsequent practices that bear on the right to bear arms, including interpretations by commentators and courts. One notable example is Congress’ protection of “the constitutional right to bear arms” for freed slaves after the Civil War. Of course, the court’s own interpretations of the Second Amendment were sparse and inconclusive – until Heller.
After examining these conventional legal materials, students turn to the court’s discussion of policy considerations. The Second Amendment, the court wrote, is “not a right to keep and carry any weapon whatsoever in any manner whatsoever and for whatever purpose.” The court points to historical limits on carrying concealed weapons and “longstanding prohibitions on the possession of firearms by felons and the mentally ill” or in “schools and government buildings.” Nor does the Amendment extend to “dangerous and unusual weapons.”
The court, and the class, concludes by placing the right within broader principles of American constitutionalism. The self-defense principle the court finds in the Second Amendment, like the free expression principle in the First Amendment, “necessarily takes certain policy choices off the table.” One of those prohibited choices is the handgun ban at issue.
More questions than answers
None of this is to say the court’s decision is uncontroversial, even on these legal terms. So we also teach students about the dissenting opinions in Heller and other critiques of the decision.
In dissent, Justice John Paul Stevens offers, for example, a line-by-line rebuttal of the court’s historical account. Justice Stephen Breyer describes an American policy of reasonable gun control from colonial times to today’s responses to gun violence. Many commentators criticize the opinion as going too far, or not going far enough.
In the end, however, the case has had limited impact, raising more questions than it answers. Those open questions are what make Heller such a teachable case. It leaves to future generations the task of filling out the Second Amendment’s meaning.
As I tell my students, our lawyer’s oath to “support the Constitution” is largely a commitment to a culture of argumentation. We lawyers, like citizens generally, sustain our Constitution by debating it, as we have for centuries. In this way, we are all law students, working out the Constitution’s meaning, together.
Anthony Johnstone does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Carlos Alvarado Quesada won the Costa Rican presidency with 61 percent of the vote, a resounding victory for a progressive candidate who entered election day in a statistical tie with his conservative rival.
Alvarado Quesada, a 38-year-old former labor minister in the government of the unpopular outgoing president Luis Guillermo Solís, ran on an “equality agenda” that included support for same-sex marriage, public education and renewable energy. In Costa Rica, this is a fairly classic political platform.
But his opponent, Fabricio Alvarado Muñoz, an evangelical legislator and former Christian musician who opposes gay marriage, secularism and sex education in schools, won the first round of Costa Rica’s elections in February. To many, the April 1 runoff looked like a referendum on social values in a country historically considered stable and progressive.
In a region where nearly every other nation faces extreme violence and has a history of political upheaval, peaceful Costa Rica is sometimes called “the Switzerland of Central America.” Many commentators will say that Alvarado Quesada’s triumph confirms Costa Rican exceptionalism.
I see things differently. In the 15 years I have studied Central American politics, deep fractures have emerged in Costa Rica’s democracy – the same social and religious tensions that surfaced in the 2018 elections.
Meanwhile, I have watched with concern as El Salvador and Guatemala become stronger democracies. Costa Rica remains an exception, but it is closer to the Central American average than ever before.
As a result, modern Costa Rica has seen neither the military dictatorships nor the prolonged civil wars that plagued every other Central American country during the 20th century.
Lower defense spending has freed up the national budget, allowing Costa Rica to invest in some of the world’s highest environmental protection standards, as well as in universal public education. Its population is among the most literate in the world.
Costa Rica is also more prosperous than the rest of Central America, one of the poorest regions in the world. It scores as well as European countries on many of the United Nations’ human development indicators, including gender equality.
About a third of the seats in Costa Rica’s legislature are held by women, thanks to strong gender-equity laws. Costa Rica had a female president, Laura Chinchilla, from 2010 to 2014.
The country is also the least corrupt in Central America. Only 9 percent of Costa Ricans say they have witnessed an act of corruption, according to Vanderbilt University’s AmericasBarometer survey. By comparison, a quarter of Guatemalans say they have been victims of corruption.
The state of democracy in Costa Rica
In some ways, this year's election continued Costa Rican tradition. Turnout was high, as is typical, at around 62 percent. The election was free and fair, as Costa Rica's elections usually are. There were none of the irregularities seen, for example, in Honduras' contested presidential election of November 2017.
But the campaign was unusual all the same. Nearly 40 percent of Costa Ricans voted for a deeply anti-gay candidate from the recently created evangelical National Restoration Party. That has strong implications in a historically secular country.
It is also significant that neither of the two presidential finalists belonged to a mainstream political party.
The National Liberation Party decided to back Alvarado Quesada after he advanced to the runoff, but this was the first time since the party's founding in 1951 that its own candidate did not compete for the Costa Rican presidency – a sign of widespread voter discontent with politics as usual.
Neither did the official candidate of the Social Christian Unity Party, Costa Rica's main conservative opposition, which did not back Alvarado Muñoz.
The rise of outsider candidates and the unexpected strength of evangelical voters this year show that Costa Rica is less united and less progressive than it appeared.
Independent candidates in Guatemala
Alvarado Quesada's victory does not heal these rifts. In fact, watching him trail a religiously conservative, tough-on-crime independent politician with pop culture roots for most of the 2018 campaign reminded me of neighboring Guatemala.
In 2015, the comedian Jimmy Morales won a surprise presidential race there. Running against a former first lady, his campaign adopted the slogan "Neither corrupt nor a thief."
Political parties in Guatemala are traditionally weak, so an outsider candidacy was no surprise. In fact, many saw Morales' victory as a positive sign for Guatemalan democracy.
Morales was elected a month and a half after President Otto Pérez Molina resigned to face trial on corruption charges. Pérez Molina is one of hundreds of Guatemalan officials prosecuted for corruption since 2007, when the country invited a U.N.-backed anti-corruption commission to clean house.
Today, Morales, a conservative, is himself embroiled in a corruption scandal, confirming that public graft remains a major political problem.
But the peaceful democratic transfer of power after a presidential resignation was a sign that nonviolent change was possible in Guatemala. That in itself was a significant advance for a Central American nation with a long history of conflict.
El Salvador rebounds
Democracy is also gaining ground in troubled El Salvador. There, the leftist Farabundo Martí National Liberation Front (FMLN) and its main conservative opposition, ARENA, have worked together in politics since 1992, when peace accords brought calm to El Salvador. The two factions once fought each other in a bloody civil war.
Under the government of the FMLN's former revolutionaries, who have been in power since 2009, El Salvador has followed a moderate political path, seeking to improve access to social services and reduce inequality.
In fact, I believe El Salvador has replaced Costa Rica as the country with the strongest party system in Central America. That is an especially impressive achievement just 26 years after a 12-year civil war ended decades of military dictatorship.
In a country facing what are arguably the world's highest homicide rates, El Salvador has created specialized courts to confront violence against women.
The number of Salvadoran women involved in politics has also grown. From 2003 to 2012, the number of women mayors in El Salvador rose from 15 to 28, according to United Nations data. The country has 262 mayors in all.
Toward a Central American average
Guatemala and El Salvador are far from perfect democracies. As I argued in my recent book, both still struggle to build the rule of law. Corruption and crime remain major challenges.
Along with Uruguay, Costa Rica remains one of only two "full democracies" in all of Latin America, according to the Economist Intelligence Unit, which ranks countries worldwide on civil liberties, transparency and political participation, among other indicators.
But its neighbors are making progress. Sixty percent of Guatemalans vote regularly, just below Costa Rica's average. Turnout is even higher in El Salvador. Central America is changing.
And so is Costa Rica. In a region where democracy is improving, the 2018 election showed that this country is less exceptional in Central America than it seems.
Rachel E. Bowen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
America has had a black president.
Is the country ready for a black president who is also a woman?
Speculation about the candidacy of Oprah Winfrey makes clear that some voters think so. Granted, Winfrey says she won’t run, but friends, commentators and many in the Twitterverse are pushing for her to reconsider.
As a scholar of race and politics, I’m curious about whether Oprah will change her mind about running – and even more curious about whether she could win.
Occupying a unique space
A paper I recently published with Harwood McClerking and Ray Block tried to shed light on this question by examining Oprah’s first major foray into politics — her endorsement of Democratic candidate Barack Obama during the 2008 presidential election. Specifically, we wanted to know how the endorsement altered the public’s perception of her, as a way of understanding if people like the idea of a “political Oprah.”
First, it helps to understand just how popular Oprah was leading up to the 2008 election. From 2002 to 2006, Winfrey's daytime talk show pulled in an estimated 7 million viewers every day. Winfrey also had racial crossover appeal, maintaining an audience that was predominantly female, white and over the age of 55.
During this phase of her career, Oprah avoided politics, a strategy that may have helped her “transcend her race,” to borrow a phrase from political scientists Donald Kinder and Corrine McConnaughy.
For example, in an interview with the Jacksonville Daily News in 1986, Oprah described her high school experience: “Everyone went through the black-power phase … (but) I knew I was not a dashiki kind of girl.”
Talking to People Weekly a year later, Oprah said that during college at Tennessee State, a historically black college, she “refused to conform to the militant thinking of the time.”
“People feel you have to lead a civil rights movement every day of your life, that you have to be a spokeswoman and represent the race,” Oprah said. “Blackness is something I just am.”
Writing in 1994, media scholar Janice Peck asserted that Winfrey served “as a comforting, nonthreatening bridge between black and white culture.” She goes on to say that Winfrey minimized her race through “public rejection of black political activism and the Civil Rights Movement.”
Winfrey’s Obama endorsement
This notion of Winfrey’s racial transcendence was tested in May 2007 with her explicit endorsement of Obama, which made her racial identity and political views salient to the American people. This connection was not lost on viewers, some of whom said she was trying to “sway her mass following.”
Politics scholar Costas Panagopoulos predicted that the endorsement would cause Winfrey’s popularity to suffer. It didn’t help that Winfrey’s endorsement of Obama alienated her from white women – the largest segment of her audience – who believed that, as a woman, she should have backed Hillary Clinton.
Winfrey and black women’s consciousness
Oprah isn’t the first black woman to struggle with two minority identities.
Gender studies scholars find that historically, black women reside in a unique space where they are marginalized by black communities due to their gender and sidelined by the feminist movement due to their race.
Claudine Gay and Katherine Tate assert in their research that the experience of being "doubly bounded" has resulted in the formation of a black female consciousness. Gender and legal scholar Kimberle Crenshaw argues that the experiences of black women cannot be understood through traditional frameworks of race and gender discrimination. Rather, Crenshaw contends, racism and sexism intersect to produce a shared experience of discrimination that is more severe for black women. To describe this situation, she coined the term "intersectionality."
The fragility of racial transcendence
To understand Winfrey's transcendence and its fragility, we used Harris Polls' regular measure of how Winfrey ranked relative to other television personalities over a long span of her talk show's run, from 1993 to 2011. In the years leading up to her Obama endorsement – 2002 to 2006 – she was the nation's favorite television personality. But immediately following her endorsement, her favorability dropped to No. 4. Over the next five years, until her show ended in 2011, Oprah's average ranking was No. 3.
Before her endorsement of Obama, individuals were favorable toward Winfrey at relatively equal levels – between 73 and 82 percent – regardless of their race and gender. After the endorsement, a gap opened up. Black women rated her 86.2 percent favorable. Black men maintained the second-highest favorability, but their rating dropped from 81 percent to 72 percent. Similarly, the percentage of white women who held favorable ratings of Winfrey dropped from 73.4 percent pre-endorsement to 67.8 percent after.
Examining post-endorsement polling data, we argue that the impact of Oprah's endorsement follows the same breakdown: It had the strongest tangible effect on black women, followed by black men, white women and white men.
These numbers are important when considering Oprah’s electability because blacks make up only 11.9 percent of the electorate. Having black support alone is not enough to win the presidency. Indeed, Obama built his success on support beyond the black community by having high levels of turnout from voters under the age of 30, low- to moderate-income workers – those earning less than $50,000 – and Latinos.
Despite the seeming impact on her popularity, Winfrey has become more political over time.
In one article, she writes, “The audacity it takes to judge another because they don’t look or sound or act like you goes against the current of humanity.” In another: “We can’t afford to say race is just a black thing, or a Hispanic thing, or an Asian thing or a #StayWoke thing. It’s a human thing.”
The speculation about Winfrey's possible presidential bid – while encouraging on its own – lacks a full assessment of all that Winfrey brings to the table. The experiences of black women, even those who are high-profile, are often constrained by how people view them.
Oprah’s claim that “time’s up” inspired many, but our research suggests that the more political Oprah becomes, the more aware voters will be of her race and gender. And that awareness will give her not one, but two, challenges to overcome.
Chryl N. Laird does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The United States has made enormous progress in reducing water pollution since the Clean Water Act was passed nearly 50 years ago. Rivers no longer catch fire when oil slicks on their surfaces ignite. And many harbors that once were fouled with sewage now draw swimmers and boaters.
But as Earth Day approaches, it is important to realize that new, more complex challenges are emerging. In a study published earlier this year, we found that a cocktail of chemicals from many human activities is making U.S. rivers saltier and more alkaline across the nation. Surprisingly, road salt in winter is not the only source: construction, agriculture, and many other activities also play roles across regions.
These changes pose serious threats to drinking water supplies, urban infrastructure and natural ecosystems. Salt pollution is not currently regulated at the federal level, and state and local controls are inconsistent.
Our research shows that when salts from different sources mix, they can have broader impacts than they would individually. It also shows the importance of supporting water quality monitoring nationwide, so that we can detect and address other pollution problems that have yet to be recognized.
Our group has been studying freshwater salinization for over 15 years. In 2005 we published a paper that demonstrated that levels of sodium chloride (common table salt) were rapidly increasing in fresh waters across the northeastern United States.
Until that time, scientists thought that salinization was a serious problem mainly in arid regions where water evaporates rapidly, leaving salts behind. But we found that it was affecting major drinking water supplies, exceeding toxic levels for some aquatic organisms and persisting in the environment year-round, even in humid regions.
The main cause we found was the spread of paved surfaces, such as roads and parking lots. Communities in cold regions use de-icing salts to clear snow from roads during winter, and the more roads they build, the more treatment is needed. We found that a 1 percent increase in paved surfaces could boost salt concentrations in nearby water bodies to levels more than 10 times higher than pristine forested conditions.
In 2013, we published another study showing that rivers were becoming more alkaline across regions of the eastern United States. At that time acid rain – i.e., too much acid in rainwater, caused by air pollution – had been a well-known environmental issue for several decades. However, alkalinization was not recognized in the same way, and its effects are still poorly understood now.
Alkalinization is the opposite of acidification: It occurs when water’s pH value increases instead of falling. As water becomes more alkaline, certain chemicals dissolved in it can become toxic. For example, ammonium is a nutrient in freshwater ecosystems, but is converted to toxic ammonia gas in significant concentrations in waters with a high pH. Alkaline conditions also enhance release of phosphorus from sediments, which can trigger nuisance blooms of algae and bacteria.
We found that a process we called "human-accelerated weathering" was breaking down rock and releasing minerals into rivers that were making them more alkaline. The weathering of rocks and minerals, and the export of those minerals to rivers, is typically slow, but we showed that land development and decades of exposure to acid rain were speeding it up. We also suggested that widespread use of geologic materials in fertilizers and concrete was a factor.
Identifying freshwater salinization syndrome
Our study on human-accelerated weathering showed that along with sodium chloride, other dissolved salts were increasing in fresh water across large regions of the eastern United States. This made us wonder whether there could be a link to our previous work on salinization in these regions.
We started to recognize that in theory, salt pollution and human-accelerated weathering could be sending increasing quantities of salts that were alkaline into rivers throughout the nation, and that this could increase their pH levels. We knew that ocean water, which is naturally salty, has a higher pH than fresh water because it has accumulated high levels of alkaline salts. After much analysis, we proposed that similar interconnected processes could influence salinity and pH in fresh water.
Many sources release alkaline salts into the environment, including weathering of impervious surfaces, fertilizer and lime use in agriculture, mine drainage, irrigation runoff and winter use of road salt. Initially, parts of these alkaline salts bind to soil. But when they come into contact with sodium – for example, excess road salt – chemical reactions occur that release the alkaline salts, which then wash into freshwater ecosystems.
We called this process freshwater salinization syndrome because it was producing multiple effects on salts, alkalinity and pH, which are fundamental chemical properties of water.
Different causes by region
Figuring out this process was a team project that required knowledge of limnology (the study of inland waters), geochemistry and geography. The causes vary from one location to another, but the outcomes can be similar.
For example, rivers are becoming more saline and alkaline in parts of North Carolina, Florida, Virginia and other states that use little or no road salt. This is likely due to human-accelerated weathering in locations underlain by limestone (which dissolves when it comes in contact with acid rainwater) and in urbanized areas with lots of concrete infrastructure, as well as urban salt pollution from sewage, water softeners or fertilizers.
Our research was supported by the U.S. National Science Foundation and drew on enormous quantities of monitoring data from ecosystems across the United States collected mainly by the U.S. Geological Survey. We analyzed long-term trends in the chemistry of rivers over five decades and compared these trends across different major river systems and regions.
We also analyzed trends in major estuaries, such as the Hudson River and the Chesapeake Bay, to investigate whether increasingly alkaline inputs from rivers could potentially influence the chemistry of coastal waters. Our results show that changes in salts can alter concentrations of pollutants such as excess phosphorus and nutrients that are bound up in sediments at these sites.
Managing salt pollution
Freshwater salinization syndrome is affecting drinking water supplies in many parts of the United States. In some cases it is altering the taste of water or threatening the health of people with hypertension.
There is growing concern that salts in fresh water can corrode water pipes and release toxic metals such as lead into drinking water. They also can trigger reactions that mobilize other contaminants and pollutants from soils into rivers.
As other scientists have shown, mixtures of salts can be more toxic to aquatic life than any one salt alone. The Environmental Protection Agency does not currently regulate salts as primary contaminants in drinking water, and state and local regulation of wide-area salt releases from activities such as road treatment is sparse and inconsistent.
We believe there is a serious need for federal regulations and regional plans to reduce salt pollution in fresh water. One strategy would be to reduce road salt use by calibrating spreading equipment and adjusting application rates based on temperature. In addition, not all salts are created equal: It may be more efficient to use certain salts as deicers at lower temperatures. Finally, organic de-icing solutions use less salt than conventional versions.
New forms of water pollution are constantly emerging, and it is important to identify how different human activities accelerate geological processes in nature. Fresh water accounts for only about 3 percent of the Earth’s total water supply (the rest is in the oceans), and there will always be a need for better understanding and management of this precious resource.
Sujay Kaushal receives funding from the National Science Foundation.
Gene E. Likens receives funding from the National Science Foundation and the A.W. Mellon Foundation. He is Founding Director, President Emeritus and Distinguished Senior Scientist Emeritus at the Cary Institute of Ecosystem Studies, and a board member of the Hudson River Foundation.
Michael Pace receives funding from the National Science Foundation.
Ryan Utz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Joint ventures between Western and Chinese companies are in the news over accusations – including those of President Donald Trump – that China uses them to steal intellectual property from foreign competitors in industries like cars and technology.
Less well known, however, are the joint ventures between French and Chinese winemakers, which offer a notable counterpoint to this narrative of international rivalry – or foreign exploitation, depending on your perspective.
Unlike for cars and electronics, there are no secret technologies in the making of wine. The millennia-old fermented drink is primarily a product of the land where the grapes are grown. What differentiates the best from the rest is not proprietary technology but experience in combining agriculture, science and art.
During research visits to China’s major wine regions – from beach resorts in Shandong and Ningxia’s rocky and arid landscapes to the lush mountains of Yunnan – we encountered a blend of local and foreign winemakers, farmers, wine scientists and local government officials, all committed to establishing local wines on the world stage.
Winemaking succeeds on the back of such international collaboration. And in our experience, it’s helping Chinese wine producers overcome their biggest obstacles to success.
No secret technology to steal
China is currently the sixth-largest wine producer, bottling 11.4 million hectoliters in 2016, just behind Australia’s 13 million. China is fifth in terms of consumption.
A few years ago, as we explained in The Conversation, China’s wine industry was focused on overcoming the rising cost of labor, dealing with difficult climates and improving grape quality.
Now, the biggest obstacles Chinese vintners have to overcome are the country’s image problem and growing competition from foreign wine. And that’s where the foreign ventures have proven so valuable.
China has long had a reputation for counterfeiting and food safety scandals. At the same time, the wine industry has become less protected from foreign competition after bilateral trade deals with countries such as Chile and Australia eliminated some tariffs. And although such barriers remain in place with Europe (as well as the U.S.), Chinese wine lovers still drink a ton of French wine, despite the higher prices.
That has meant Chinese makers of premium wines have had to raise their game to compete with skilled foreign competitors. And perhaps ironically, some of those foreign rivals have been only too happy to share knowledge and skills.
Unlike for cars, making good wine doesn’t require proprietary technology. Any serious student can learn the techniques, whether they are traditional or cutting edge, by reading, going to school or finding a mentor. Becoming a good winemaker requires experimenting with a range of tried and true methods, both in the vineyard and the cellar. There is no secret recipe, only hard work and problem solving.
Such collaborative partnerships have been essential to helping Chinese wine producers overcome the image problem and better compete.
Enter the French
It might surprise readers that French Cognac producer Remy Martin was one of the first Western companies to form a joint venture in China, in this case with the city of Tianjin in 1980 to set up a winery.
The French brought winemaking skills and, in exchange, got a foot in the door into a promising market for imported Cognac. The result, Dynasty Winery, is now one of the largest Chinese wine producers.
Remy and other Western companies brought not only skills but also their brand names. Chinese wine enthusiasts – vulnerable to the same stereotypes Westerners have – might question how good a wine from an unknown domestic company could be. But if it is made by a famous French wine group, whose wines they enjoy, they might give it a chance.
While Dynasty is a mass market brand, other more recent French-Chinese partnerships have focused on developing premium wines. One involved LVMH and a state-owned enterprise in Ningxia, a poor province often hailed as China’s most promising wine region. In 2013, the French luxury conglomerate launched Chandon China, the latest offspring in the global Chandon family of sparkling wine.
Unlike in other sectors, such as clothing or electronics, Western winemakers are not in China to take advantage of low costs. Chinese wine is expensive to make, due to the rising cost of labor, and, in some regions, the need to bury the vines to protect them from cold winters and dig them out every spring.
Moreover, you can’t outsource the production of wine to another country. Champagne can only be made in the Champagne region of France. Napa Valley wine can only be made in the Napa Valley. If a wine is made in China, it becomes Chinese wine.
Soaring wine quality
The result, for Chinese winemakers, has been soaring quality.
Not long ago, really good Chinese wines were very hard to find. Mass market wine brands, like Changyu, Great Wall or Dynasty, were ubiquitous in supermarkets and convenience stores around the country. But most award-winning boutique wineries you read about in the media were too small or lacked marketing skills and deals with distributors that could put their wines in front of consumers.
Today the best boutique Chinese wines are far more available in major cities because the major distributors have begun to include more Chinese producers in their portfolios of primarily imported wines. This has made the best Chinese wines available in local shops frequented by wine enthusiasts, like Pudao Wines in Beijing and Shanghai, and on a few restaurant wine lists.
At a hotel restaurant in Guangzhou's main airport in 2016, for example, we were able to order a glass of Pretty Pony, an award-winning Ningxia red by Kanaan winery – something we couldn't have done just a year earlier.
Next stop: exports
So how easy is it to pick up a bottle of Pretty Pony at your local supermarket if you don’t live in China?
Although exports of Chinese wine are still quite low, at just US$1.2 million in 2016 compared with $15 million for Argentina and $3.2 billion for France, a growing number of supermarkets and wine shops in Europe and the U.S. are stocking some of the best Chinese wines, from Seattle and Melbourne to London and Madrid.
While it’s unlikely Chinese winemakers will be threatening their French peers anytime soon, they are now decidedly on the world’s wine map.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Forty-three years ago today, the Khmer Rouge took power in Cambodia. Their radical regime, led by the dictator Pol Pot, inflicted countless atrocities and left deep wounds. Neighbors turned against one another. Families were fractured. Political cleavages deepened. An estimated 1.7 million people died. Almost everyone suffered personal trauma.
Survivors are still in the long process of seeking reconciliation, or putting the pieces back together in lives and societies shattered by conflict.
Yet the measures taken to address political and social conflict are not always conducive to personal reconciliation – the journey of coming to terms with excruciating past experiences.
Reconciliation is one of the aims of transitional justice, the means by which societies address past crimes as they emerge from conflict or repression. As someone who studies this process in Cambodia, I am interested in how official efforts to promote justice and reconciliation affect individual survivors’ ability to heal.
Can efforts to exact justice end up injuring survivors again?
Research on such profound human suffering requires more than intellectual understanding of the legal and political mechanics of justice and reconciliation. It requires a human journey from sympathy to empathy and learning about the personal experiences of the survivors whom such initiatives aim to assist.
Can criminal trials heal survivors?
Cambodians endured staggering atrocities during the Pol Pot era. The Khmer Rouge regime brutally repressed any perceived resistance to its rule. Urban populations were sent to the countryside to perform forced labor. Families were split. Intellectuals were murdered. People accused of even slight infractions were imprisoned, tortured and often killed without any legal process.
The best-known effort to address crimes of the Pol Pot era is a series of trials backed by the United Nations and Cambodian government. The special tribunal they created more than a decade ago has yielded three convictions for crimes against humanity and other offenses – the first credible verdicts for crimes of the Khmer Rouge regime.
As my colleague Anne Heindel and I have written, the tribunal has faced many challenges. Its blend of Cambodian and international laws, procedures and personnel is cumbersome. Politicized feuds have arisen between the U.N. and Cambodian sides of the court.
Yet a more fundamental question surrounds its work: Do high-profile criminal trials help survivors heal?
Official statements on war crimes and genocide trials often stress their healing potential. In theory, trials can reveal truths, give voice to victims, express condemnation and satisfy survivors’ thirst for justice.
Reality is more complex.
The goals and requirements of a criminal process sometimes cut against the interest of promoting survivors’ personal reconciliation. For example, witness testimony is often crucial to a fair trial. But giving testimony can retraumatize victims. A focus on select defendants can produce a skewed and partial exposure of the truth.
These trade-offs challenge researchers trying to evaluate transitional justice efforts. Which goals should matter most? Securing credible criminal convictions? Clarifying the historical record? Helping survivors work through trauma? And what role does academic research play in determining which values should get priority?
Seeing through others’ eyes
I grapple with these questions myself. They challenge me to reflect on how my values and interests shape my research, sometimes in tension with the expressed needs of survivors.
One example occurred in July 2010, after the Khmer Rouge tribunal issued its first conviction. The defendant was Duch, the notorious former head of a Khmer Rouge security center known as Tuol Sleng. By overwhelming evidence and his own admission, Duch was found guilty of overseeing the torture and arbitrary execution of thousands of innocent Cambodians.
The judges sentenced Duch to just 19 years in prison. They reasoned that Duch had been detained illegally for years by Cambodian authorities and was entitled to a meaningful remedy – which meant something shorter than a life sentence.
Some human rights groups and international lawyers applauded the court’s stand for the defendant’s rights. But many victims were outraged that an architect of mass murder could receive such a light sentence.
As a scholar of international criminal law interested in how principles of due process develop, I instinctively wanted to applaud.
But I also saw the depth of many survivors’ disappointment and anger at the sentence. It was an uncomfortable, humbling reminder that what professionals see as proper process may not answer the profound needs of survivors. The values embedded in many studies of transitional justice do not necessarily match the legitimate desires of those who have suffered the gravest injustice.
Duch was later sentenced on appeal to life in prison, but his trial raised other issues related to personal reconciliation that have yet to be resolved.
Survivor testimony: Who benefits and who pays the price?
The clearest issue that needs resolution is victim testimony.
Some survivors have found appearing at the tribunal empowering and cathartic. Others have found the experience deeply unsettling. Sitting in a large courtroom, responding to cross examination and facing the gaze of robed judges and the accused is hardly a recipe for psychological comfort.
Survivors tell of deep personal losses or hardship. They tell of lost loved ones, of near starvation, and of witnessing or experiencing gruesome abuses. In some cases, they relate their stories while looking straight at their past tormentors.
Watching some survivors tremble as they enter or exit the courtroom has made me reflect on who benefits from justice and who pays the price.
To help survivors tell their stories with greater ease, the Khmer Rouge tribunal has allowed “statements of suffering,” which resemble victim impact statements in U.S. criminal courts.
Victims can narrate their stories without frequent interruptions for questions from judges and lawyers. Many provide gripping accounts of their personal and family experiences. In doing so, many stray from the alleged crimes of the accused into broader revelations of personal suffering.
But as Anne Heindel and I have discussed, these statements raise difficult questions about how to balance the rights of the accused with victims’ needs. Statements of suffering can undermine fair trials if they include questionable claims that defendants cannot challenge. Yet for victims, those statements are often the best opportunity to have their voices heard and to experience catharsis.
As researchers, we evaluate transitional justice mechanisms and thus can affect debates on how they are structured. We often bring useful technical knowledge and outside perspectives.
But “experts” often measure success very differently than ordinary survivors. We have an ethical responsibility to consider carefully the diverse preferences of survivors.
This means going well beyond the study of laws and institutions to embrace personal narratives and experiences in transitional justice research.
Getting from sympathy to empathy entails doing more than identifying practices we regard as effective. It requires trying to understand what survivors will see as success.
John Ciorciari is affiliated with the Documentation Center of Cambodia.
Japanese Prime Minister Shinzo Abe will meet with President Donald Trump at Mar-a-Lago on April 17 and 18.
The relationship between these two leaders’ countries may help shape the U.S. approach to upcoming talks with North Korea. Those talks will likely be focused on denuclearization and regional stability.
The U.S.-Japan relationship since World War II has rested on economic and military cooperation. It is sustained through a complex network of institutions that facilitate interactions between the two countries. But interactions among U.S. and Japanese citizens themselves – a form of “soft diplomacy” – also play an important role in furthering relations between the two countries.
Since 1987, the Japanese government’s Japan Exchange and Teaching Program has built familiarity with and interest in Japan among young Americans and citizens of more than 60 other countries. The program brings young college graduates to work in Japan for at least one year. It has helped sustain a bilateral relationship that the U.S. Department of State describes as key to stability and prosperity in Asia.
I am an alumna of the exchange program and recently published a book about its growing importance to the U.S.-Japan relationship. I argue that through the program, Japan has nurtured a generation of Americans who function as “willing interpreters and receivers,” or citizen ambassadors, for Japan at home and abroad.
Beyond teaching English
For 30 years, the Japan Exchange and Teaching Program has recruited college-educated Americans to live and work in Japan. This year, there are 2,924 Americans in the program.
Tomohiko Taniguchi, a key adviser to Japan’s prime minister, summed up the program to me as “the single most shining crown jewel of Japan’s diplomacy.”
Participants either teach English or help organize events such as theatrical performances, international conferences and sports exhibitions across Japan’s small towns and regional capitals. Over the course of its history, more than 30,000 Americans have participated. That’s over half of the 60,000 alumni worldwide.
The three core goals of this exchange program are increasing foreign language proficiency among Japanese citizens, nurturing an international mindset in Japanese communities and, on the flip side, improving global attitudes toward Japan.
The program has drawn significant criticism for failing on the first of these goals. English language proficiency in Japan has not improved since the program began. During Japan’s 2010 budget talks, this problem nearly sank the program’s government support.
No formal study has evaluated the program’s success in promoting broadened worldviews among the Japanese. Meanwhile, my research has focused on the third goal. Has the program affected how participants – especially alumni in the United States – feel about Japan?
Part of the fabric
Based on my surveys and interviews with alumni, meetings with program sponsors and input from experts on U.S.-Japanese relations, I have found the majority of U.S. alumni maintain interest in and affection for Japan.
Today, alumni of the program work in federal agencies, state and local governments, major educational institutions, leading media outlets and the nonprofit and private sectors. More than 100 American alumni work for the U.S. Department of State.
Following Japan’s 2011 earthquake and tsunami, the State Department asked foreign service officers worldwide who were familiar with the country’s language and culture to assist with consular work (serving American citizens in Japan and processing visa applications for Japanese citizens seeking to travel to the United States) as well as with relief efforts. Dozens of alumni in the diplomatic corps answered the call, joining other alumni who were already serving at the U.S. Embassy and consulates across Japan. Indeed, alumnus Matthew Fuller was already there serving as special assistant to the American ambassador to Japan, John Roos.
Other American alumni also sprang into action in the disaster’s aftermath, raising more than US$360,000 for relief efforts. An online community run by alumni served as a clearinghouse for information about how to help. The New York alumni chapter coordinated the collection and dissemination of donations.
Japan scholar and Professor Emeritus at Columbia University Gerald Curtis described to me an “unanticipated benefit” of the Japan Exchange and Teaching Program: It has produced a “generation of Americans and others who are connected to Japan.” He compared it to the U.S. government’s recruitment and training of Japan experts during World War II.
Alumnus Michael Auslin has documented how that wartime generation of Americans went on to careers in academia, journalism and government. They influenced the U.S.-Japan relationship for decades thereafter. They taught students at American universities about Japan, reported about Japan in U.S. media, and served as experts within the U.S. government.
The Japan Exchange and Teaching Program gives Americans a chance to learn more about Japan. In fact, one Japanese diplomat told me JET alumni are tough negotiators when they represent the U.S. in bilateral talks with Japan because “they know us so well.”
Thanks in part to Japan’s sustained investment in the program, tens of thousands of Americans support this friendship between two countries, whose interactions are critical to continued peace in a region facing many challenges.
Emily Metzgar served as vice president of the Washington, DC chapter of JET Alumni Association from 1996 to 1997.
In a letter on April 11 to the bishops of Chile, Pope Francis asked forgiveness for his “serious errors of assessment and perception.” His apologies were directed to the victims of Fr. Fernando Karadima, whose abuse of at least three men when they were children was witnessed and covered up by Chilean Bishop Juan Barros. Until recently, Pope Francis had maintained that Bishop Barros was actually the victim of “slander.” In 2011, the then-80-year-old Fr. Karadima was found guilty by a Vatican tribunal, and sentenced to a life of “prayer and penance.”
In times past, a personal apology from the pope would have been close to unthinkable.
Popes can make mistakes
Catholics believe the pope is the successor to the Apostle Peter, one of the first followers of Jesus. But Peter was a flawed human being: When confronted by a crowd, he denied his association with Jesus three times. Afterwards, according to the Gospel of Matthew, Peter “wept bitterly.”
For Catholics, Peter’s experience shows that even those specially chosen by God have deep-seated weaknesses for which they must show sorrow.
Popes are not always right in what they do, but their errors have been admitted only years – sometimes centuries – later. In 1992, for example, John Paul II apologized for the Catholic Church’s condemnation of Galileo that happened over 350 years earlier.
Once rare, papal apologies increased under the reign of John Paul II. While those apologies admitted that the Church made mistakes, they did not ask for forgiveness for past popes.
Church history on apology
In the Middle Ages, popes were not inclined to apologize at all, or even to accept apologies. Most famously, in 1077 A.D., Pope Gregory VII initially rejected King Henry IV’s apology concerning a dispute over who had the power to appoint local bishops. The pope forced Henry, then king of Germany and later Holy Roman Emperor, to wait in a blizzard for three days before accepting him back into the Catholic Church.
This dismissive attitude gave way to soul-searching during the Second Vatican Council, a seminal meeting that modernized the Church, held in Rome from 1962-65. One of the most important issues Catholicism had to confront was its historical persecution of Jews. Thousands of Jews were killed as Crusaders made their way to Jerusalem. Jews were expelled from Catholic Spain in 1492. And most horrible was the Holocaust, or “Shoah,” the organized slaughter of over 6 million Jews, which occurred in Christian-majority nations during the Second World War.
In one of the council’s most important documents, Nostra Aetate, the Catholic Church rejected the idea that Jews were responsible for the crucifixion of Jesus Christ. Nostra Aetate also established a foundation for a more cooperative and respectful relationship between Christians and Jews.
In 1966, the Church moved to apologize for centuries of distrust between Catholics and Protestants, when Pope Paul VI gave his ring to Michael Ramsey, the head of the Anglican church – the 100th archbishop of Canterbury – as an offering of reconciliation.
Pope John Paul II gave many apologies, but usually on behalf of the Church for what was done centuries ago. Most notable was the “Day for Pardon” in March 2000, that asked forgiveness for a series of sins, including those “against the dignity of women and the unity of the human race” and “actions against love, peace, the rights of peoples, and respect for cultures and religions.”
But many remember how Pope John Paul II remained largely silent on the issue of clerical abuse because it “did not fit with his image of the Church,” according to Australian bishop Geoffrey Robinson. In a 2002 address to American cardinals, John Paul II did say he was “greatly grieved” that priests “had caused such suffering and scandal to the young,” but he stopped short of offering a personal plea for forgiveness.
Following John Paul’s example, Pope Benedict XVI stated in a 2010 letter that he was “sorry” that Catholics of Ireland had “suffered grievously” because of the “abuse of children and vulnerable young people.” But he did not apologize for lack of Vatican oversight over Irish bishops and priests.
Perhaps the closest parallel to Pope Francis’ apology was Pope Benedict’s expression of regret over “reactions” to his address in 2006 at the University of Regensburg, Germany, where he seemed to criticize Islam.
What is Pope Francis doing?
Fully accepting that the pope is a fallible human being can be somewhat of an emotional struggle for Catholics. While the pope – also called “The Vicar of Christ” – is considered to be infallible when he formally makes a statement about Catholic doctrine concerning “faith and morals,” the pope certainly makes mistakes in his priestly service and personal life.
Francis, however, is not shy about admitting his own fallibility as a pope and as a person. In fact, he said in a 2013 interview:
“I am a sinner. This is the most accurate definition. It is not a figure of speech, a literary genre. I am a sinner.”
With that statement, Pope Francis was saying that he – a leader of 1 billion people – needs forgiveness and mercy too. And mercy and forgiveness have been the central themes of his pontificate.
Of the many responsibilities of a pope, chief among them is being a teacher. And when Francis apologized to the people of Chile and to victims of sexual abuse, he also was teaching the rest of us how to admit our sins as a first step in making things right.
Mathew Schmalz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Facebook CEO Mark Zuckerberg’s testimony in front of Congress, following disclosures of personal data being misused by third parties, has raised the question of whether and how the social media company should be regulated. But short of regulation, the company can take a number of steps to address privacy concerns and the ways its platform has been used to disseminate false information to influence elections.
Scholars of privacy and digital trust have written for The Conversation about concrete ideas – some of them radical breaks with its current business model – the company could use right away.
1. Act like a media company
Facebook plays an enormous role in U.S. society and in civil society around the world. The leader of a multiyear global study of how digital technologies spread and how much people trust them, Tufts University’s Bhaskar Chakravorti, recommends the company accept that it is a media company, and therefore
“take responsibility for the content it publishes and republishes. It can combine both human and artificial intelligence to sort through the content, labeling news, opinions, hearsay, research and other types of information in ways ordinary users can understand.”
2. Focus on truth
Facebook could then, perhaps, embrace the mission of journalism and watchdog organizations, and as American University scholars of public accountability and digital media systems Barbara Romzek and Aram Sinnreich suggest,
“start competing to provide the most accurate news instead of the most click-worthy, and the most trustworthy sources rather than the most sensational.”
3. Cut users in on the deal
If Facebook wants to keep making money from its users’ data, Indiana University technology law and ethics scholar Scott Shackelford suggests
“flip[ping] the relationship and having Facebook pay people for their data, [which] could be [worth] as much as US$1,000 a year for the average social media user.”
The multi-billion-dollar company has an opportunity to find a new path before the public and lawmakers weigh in.
Most of Facebook’s 2 billion users have likely had their data collected by third parties, the company revealed April 4. That follows reports that 87 million users’ data were used to target online political advertising in the run-up to the 2016 U.S. presidential election.
As company CEO Mark Zuckerberg prepares to testify before Congress, Facebook is beginning to respond to international public and government criticism of its data-harvesting and data-sharing policies. Many scholars around the U.S. are discussing what happened, what’s at stake, how to fix it, and what could come next. Here we spotlight five examples from our recent coverage.
1. What actually happened?
A lot of the concern has arisen from reporting that indicated Cambridge Analytica’s analysis was based on profiling people’s personalities, based on work from Cambridge University researcher Aleksandr Kogan.
Media scholar Matthew Hindman actually asked Kogan what he had done. As Hindman explained, “Information on users’ personalities or ‘psychographics’ was just a modest part of how the model targeted citizens. It was not a personality model strictly speaking, but rather one that boiled down demographics, social influences, personality and everything else into a big correlated lump.”
2. What were the effects of what happened?
On a personal level, this level of data collection – particularly for the 50 million Facebook users who had never consented to having their data collected by Kogan or Cambridge Analytica – was distressing. Ethical hacker Timothy Summers noted that democracy itself is at stake:
“What used to be a public exchange of information and democratic dialogue is now a customized whisper campaign: Groups both ethical and malicious can divide Americans, whispering into the ear of each and every user, nudging them based on their fears and encouraging them to whisper to others who share those fears.”
3. What should I do in response?
The backlash has been significant, with most Facebook users expressing some level of concern over what might be done with personal data Facebook has on them. As sociologists Denise Anthony and Luke Stark explain, people shouldn’t trust Facebook or other companies that collect massive amounts of user data: “Neither regulations nor third-party institutions currently exist to ensure that social media companies are trustworthy.”
4. What if I want to quit Facebook?
Many people have thought about, and talked about, deleting their Facebook accounts. But it’s harder than most people expect to actually do so. A communications research group at the University of Pennsylvania discussed all the psychological boosts that keep people hooked on social media, including Facebook’s own overt protestations:
“When one of us tried deactivating her account, she was told how huge the loss would be – profile disabled, all the memories evaporating, losing touch with over 500 friends.”
5. Should I be worried about future data-using manipulation?
If Facebook is that hard to leave, just think about what will happen as virtual reality becomes more popular. The powerful algorithms that manipulate Facebook users are not nearly as effective as VR will be, with its full immersion, writes user-experience scholar Elissa Redmiles:
“A person who uses virtual reality is, often willingly, being controlled to far greater extents than were ever possible before. Everything a person sees and hears – and perhaps even feels or smells – is totally created by another person.”
And people are concerned now that they’re too trusting.
Following the fatal police shooting of 22-year-old Stephon Clark, an unarmed black man in Sacramento, protesters have gathered to express anger over his death.
At issue is what is called police “use of force.” That’s when police exert physical force to make an individual comply with their orders. Each police department has policies for when use of force is appropriate. Protesters and Clark’s family members argue shooting at Clark 20 times was excessive. The California Department of Justice is investigating whether the officers involved breached any policies and whether policies need to be changed.
We’ve turned to our archives for a look at what experts say drives how and when police use force, and how it ties in to issues of race and gender.
1. Unconscious bias
One leading theory that is used to explain police behavior is what psychologists call implicit bias. The idea is that police officers, like all humans, have biases – negative attitudes and beliefs toward members of a social group.
As psychology professors Kate Ratliff and Colin Smith at the University of Florida explain, what makes these biases hard to address is that they exist outside of a person’s conscious awareness and control. These experts explain the methods scientists are developing to measure implicit bias, such as the Implicit Association Test.
Data from such tests have revealed, for example, that “Pro-white implicit biases are pervasive. Data from millions of visitors to the Project Implicit website reveal that, while about 70 percent of white participants report having no preference between black and white people, nearly the same number show some degree of pro-white preference on the IAT.”
Psychologists and social scientists are only beginning to understand how to apply this science to change behavior.
2. Machismo in police culture
Another issue people often point to is machismo, which law professor Frank Rudy Cooper of Suffolk University describes as “a gendered, aggressive outlook” that is embedded in police training and culture.
One of the essential components of this outlook, Cooper writes, is zero tolerance for disrespect. This can even apply to female officers. When police officers feel disrespected, their natural response is to punish and often escalate a situation.
“Such escalation is commonly known as ‘contempt of cop.’ Being found in contempt of court is a punishment for disobeying a judge. ‘Contempt of cop’ occurs when an officer punishes you for failing to comply with her request.”
3. Lack of police-community trust
Historically bad relations between police and communities of color may also be a contributor to the ongoing violence. Police departments across the U.S. are seeking ways to train officers on improving trust and citizen cooperation.
Megan Price, a professor at George Mason University, is also director of a program called Insight Conflict Resolution. A new strategy called Insight Policing, she writes, is a “community-oriented, problem-solving policing practice designed to help officers take control of situations with the public before conflict escalates.” Pilot programs have been tested in Memphis, Tennessee and Lowell, Massachusetts.
The program teaches officers to defuse citizens’ behavior that is threatening or intended to cause conflict. Price writes, “Eighty percent of officers trained agreed that Insight Policing enhanced their ability to defuse the feelings of threat citizens have about their encounters with police officers.”
At the beginning of the baseball season, every team wants to win the World Series or at least make a strong playoff run. For the Cincinnati Reds, the goal might be more modest: breaking a streak of three straight last-place finishes and improving on the worst pitching record in the National League last season.
Off the field, the team faces a different kind of struggle: trying to avoid paying tens of thousands of dollars in sales tax on the bobbleheads, jerseys, tote bags and other memorabilia that the team gives away as promotions to entice fans to buy tickets.
It’s an issue that baseball teams have been battling for decades, with some having more success than others at convincing their home states that they shouldn’t have to pay sales tax on the promotional merchandise.
As I explain in a recent paper in the Journal of Taxation of Investments, I believe their arguments have little merit. Still, the dispute raises challenging questions about how we define the scope of sales taxes.
Sales tax exemptions
All but five states impose sales taxes, which can range widely from one place to another. Residents and businesses in Cincinnati, Ohio, where the Reds are based, pay the state rate of 5.75 percent as well as a local levy of 0.75 percent.
But most states, including Ohio, exempt purchases for resale from sales tax. For example, you pay a sales tax when you buy a computer at Best Buy, but the retailer does not.
This exemption can also apply to items a company buys to pass on to customers as part of a larger transaction. A good illustration is the bags that Kroger, Safeway or your neighborhood food store uses to pack your groceries: the typical grocery store includes the cost of the bags in the price of the goods it sells, so the bags are treated as purchased for resale.
Another example is the toy that comes with a Happy Meal at McDonald’s. Again, the cost of the toy is included in the price of the meal.
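As a rough sketch of how the exemption works in practice, consider the numbers above. The 5.75 percent state rate and 0.75 percent Cincinnati levy come from the article; the purchase amounts and the `sales_tax` helper are hypothetical, purely for illustration:

```python
# Illustrative only: rates from the article, amounts made up.
STATE_RATE = 0.0575   # Ohio state sales tax
LOCAL_RATE = 0.0075   # Cincinnati local levy

def sales_tax(price, for_resale=False):
    """Tax due on a purchase; purchases for resale are exempt."""
    if for_resale:
        return 0.0
    return round(price * (STATE_RATE + LOCAL_RATE), 2)

# A consumer buying a $1,000 computer pays the combined 6.5% rate...
print(sales_tax(1000.00))                  # 65.0
# ...but the retailer buying stock for resale pays no sales tax at all.
print(sales_tax(600.00, for_resale=True))  # 0.0
```

The Reds’ argument, in these terms, is that their bobblehead purchases belong in the second, exempt category because the cost is passed through in ticket prices.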
Baseball teams claim that their memorabilia are like grocery bags or Happy Meals. The Reds say they factor the cost of those items into their ticket prices for every game during the season, so they should not have to pay a separate sales tax when they purchase them to give to fans as promotions.
The Ohio tax commissioner is the latest state official to disagree with that analysis.
‘Supplies are limited’
The commissioner emphasized in his unpublished ruling that the Reds did not give away the promotional items at every game. And even at games designated as giveaways, supplies were always limited.
In other words, ticket prices for the games were the same whether or not promotional items were offered or a fan actually received a bobblehead doll – those who didn’t get one did not get a discount on their ticket price. The commissioner found that the promotional items were gifts and therefore taxable.
The Reds appealed the ruling to the Ohio Board of Tax Appeals, which upheld the commissioner’s decision. So the Reds asked the Ohio Supreme Court to consider their case. After initially dismissing the appeal because the Reds’ lawyers missed a filing deadline, the court reluctantly agreed in February to consider the case, which remains pending.
The Reds have a lot at stake. They’ve designated 30 games this year for giveaways.
This number is typical for the major league teams. Forbes magazine reports that the average big league team has almost 27 games with giveaways this season.
If the Reds lose, they could be on the hook for a lot of money, though how much is unclear. One account put the total for just the 2008-2010 seasons at US$88,000, but the commissioner is likely to ask for tax payments going back a full decade, based on court records.
The Reds are at least the fourth MLB team in the past four decades to wind up in court over sales taxes on promotional items. Previous decisions have reached inconsistent results.
The Kansas City Royals won a case in 2000 that allowed them not to pay sales tax on their giveaways. But the Milwaukee Brewers in 1983 and the Minnesota Twins in 1998 lost their cases and currently have to pay sales tax.
A bobblehead is not a Happy Meal toy
The teams have a respectable legal argument, but I believe they should lose.
Promotional items are not really like grocery bags or Happy Meals. Virtually everyone who shops at grocery stores uses the store-provided paper or plastic bags. And every Happy Meal comes with a toy. But as the Ohio tax commissioner noted, only a minority of baseball fans who go to a game get the team memorabilia. To qualify, fans have to buy tickets for giveaway games and get to the stadium early enough to get one of the items before the supply runs out.
There is no resale, so the teams should have to pay the sales tax on their promotional items.
The case, however it’s resolved, does have broader implications beyond some wealthy baseball teams. And it illustrates the challenge of defining what qualifies for the sales tax exemption. How should we decide when a premium item qualifies for the exemption?
How about a “free” streaming service like Netflix offered when you buy a TV or new phone? Or a buy one, get one free offer?
This is why we might think about switching to a value-added tax, which is collected at every stage of the supply chain. A sales tax, in contrast, is paid only by the final consumer. Of course, a value-added tax has its own issues, such as whether it’s regressive, but a VAT would eliminate these kinds of exceptions and loopholes.
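The difference between the two approaches can be made concrete with a small worked example. The 6.5 percent rate matches the combined Cincinnati figure above, but the three-stage supply chain and its prices are entirely made up for illustration:

```python
# Illustrative only: a made-up three-stage supply chain.
# Manufacturer sells to wholesaler for $40, wholesaler to retailer
# for $70, retailer to final consumer for $100.
RATE = 0.065
prices = [40.0, 70.0, 100.0]  # sale price at each stage

# Sales tax: collected once, from the final consumer only.
sales_tax = RATE * prices[-1]

# VAT: each stage remits tax on the value it adds ($40, $30, $30).
value_added = [prices[0]] + [b - a for a, b in zip(prices, prices[1:])]
vat = sum(RATE * v for v in value_added)

print(round(sales_tax, 2), round(vat, 2))  # both come to 6.5
```

Both systems raise the same total revenue here; the difference is that under a VAT every stage remits a slice, so there is no single "final sale" whose classification, as with the bobbleheads, decides whether tax is owed at all.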
For now, though, we have a sales tax. And the Reds case will turn on the meaning of the purchase-for-resale exception. I only hope that they do better on the field this year than they may do in court.
Jonathan Entin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Editor’s note: The following is a roundup of stories from The Conversation’s archive.
Students from across the country will march in Washington, D.C., on Saturday. Similar marches will take place elsewhere in the U.S. Organized by survivors of the Parkland school shooting in Florida, the protesters want Congress to pass gun control legislation.
Among the activists’ demands: Ban assault weapons, stop the sale of high-capacity ammunition magazines and mandate background checks for every gun purchase.
Here are a selection of stories from our archive that will help you understand the issues raised by the students:
1. Can security measures stop school shootings?
When shootings happen, one common response of policymakers is to demand better security at schools – or what’s known as “target hardening.” More metal detectors, more surveillance cameras and more lockdown policies would be among these steps.
Scholars Bryan Warnick, Benjamin A. Johnson and Sam Rocha write that while this may seem sensible, there’s scant evidence that such an approach decreases the likelihood of a school shooting. Surveillance cameras were in use during the 1999 massacre at Columbine High School, and a lockdown during the Sandy Hook school shooting did not save children there.
And, the authors write, there’s a subtle negative effect from instituting such measures: “Filling schools with metal detectors, surveillance cameras, police officers and gun-wielding teachers tells students that schools are scary, dangerous and violent places – places where violence is expected to occur.”
2. School shooters share traits
After Columbine, the U.S. Secret Service joined with the Department of Education in 1999 to study school shooters. They wanted to see whether there were traits and patterns common to those shooters that could be detected and used as early warnings.
The study found that, in most cases, there were people who knew of the attacker’s plans. And many attackers had exhibited behavior that worried others and showed that they needed help.
3. Arming teachers could make things worse
Both President Donald Trump and the National Rifle Association responded to the Florida shootings with proposals to arm teachers. Scholars Aimee Huff and Michelle Barnhart write that the proposal misses the mark.
“While carrying a gun may reduce the risk of being powerless during an attack,” they write, “it also introduces substantial and overlooked risks to the carrier and others.” Chief among those risks: That the teacher may shoot an innocent person.
4. Law enforcement’s limited options
Legal scholar James Jacobs writes that some abhorrent or disturbing behavior isn’t illegal. That’s important to understand when, after a school shooting, people ask: “Why didn’t they do something to stop him? Everyone knew he was strange and violent.”
Jacobs writes that prior to the Parkland shooting, shooter Nikolas Cruz had attracted attention – “family members, school personnel and neighbors had reported Cruz’s disturbing, threatening and violent behavior many times to police and social services.” But Jacobs writes: “Florida police have limited options when faced with a potential shooter like Cruz. They can take him to a mental hospital for evaluation. They can try to persuade him to surrender his firearms, but they cannot seize his guns.”
The best option for stopping a shooter like Cruz from killing in the future, writes Jacobs, is a “civil commitment to a mental hospital, where the disturbed person’s mental and emotional state is addressed.”
Editor’s note: The following is a roundup of stories from The Conversation’s archive.
In an unexpected development, President Donald Trump and North Korean dictator Kim Jong Un have agreed to meet and discuss improving relations.
Over the last year, the two countries’ leaders have exchanged increasingly bitter and bellicose rhetoric as North Korea continued its development of long-range nuclear weapons, which the U.S. opposes. Trump called Kim “Little Rocket Man,” Kim called Trump a “mentally deranged U.S. dotard,” and both have threatened nuclear annihilation of the other side.
But it’s not necessarily nasty name-calling that got North Korea to issue Trump the invitation to meet. Years of sanctions imposed by the U.S. and the United Nations have isolated North Korea economically.
Here is a roundup of stories from our archives to help you sort the story behind the U.S.-North Korea conflict.
1. A brief history
North and South Korea were once a single country. They shared a common culture. But they’ve been at odds since the end of World War II, when the Korean Peninsula was divided in two, with South Korea backed by the U.S. and North Korea backed by the Soviets.
“Amid the growing Cold War tensions between Moscow and Washington, in 1948, two separate governments were established in Pyongyang and Seoul,” writes East Asian scholar Ji-Young Lee. Lee also quotes recent polls reporting that more than half of South Koreans believe that reunification with North Korea “is necessary.”
2. Everybody get together
Since Korea was divided into two countries relatively recently, is there a chance they could join together again?
“History suggests such efforts to reunite the peninsula as a single country often don’t go far,” Lee writes.
In a second essay, Lee details the number of times the two countries have tried to achieve national unity and how each attempt has failed. It turns out that each of these efforts has foundered through a combination of aggressive moves by both sides, internal political conflicts and pressure from allies across the globe.
Now, though, writes Lee, “current South Korean President Moon Jae-In is more open to … pursuing engagement. … This may be a game changer. Without a doubt, he is much more proactive about creating opportunities for inter-Korean reconciliation.”
3. The mind of a ‘smart cookie’
The North Korean leader Kim Jong Un, who was called a “smart cookie” and other less complimentary things by President Donald Trump, is a complex man who has had political rivals assassinated and, according to human rights advocates, runs prison camps where political prisoners are raped and starved.
When Trump meets him and becomes the first U.S. president ever to hold talks with a North Korean leader, it might help if he understands something about Kim. “I hope he’s rational,” Trump said last April.
Scholar Stephen Benedict Dyson provides a response to Trump’s statement. Dyson writes that in his research on political leaders, he found that “different people have different definitions of rationality. The core question – ‘What is my best move?’ – is often answered by a leader’s idiosyncratic beliefs, rather than by an immediately obvious logic of the situation as seen by external observers.”
So if the United States wants to influence Kim, “Trump and his advisers must first understand how we look to the North Korean leader, peering at us from his very particular vantage point,” writes Dyson.
4. It’s not only a nuclear threat
Kim Jong Un doesn’t just have an army of soldiers and a growing nuclear arsenal at his disposal. He also has an army of cyber warriors.
Those hackers, writes Dorothy Denning of the Naval Postgraduate School, have used their digital skills to steal money to fund the government and launch cyberattacks that disable online services and destroy the data on computer disks.
5. Around the corner
Let’s say Trump and Kim meet. But then, like so many summits between adversaries, it fails to produce an agreement.
What’s next, war?
There’s an alternative, say scholars Sievert and Norris: international arbitration. It’s worked in the past with seemingly intractable adversaries.
The scholars detail several of those crises, going back to the mid-19th century, and describe how the countries were brought to the table to work out their conflicts peacefully.
Every vote counts. It’s the key principle underlying democracy. Through the history of democratic elections, people have created many safeguards to ensure votes are cast and counted fairly: paper ballots, curtains around voting booths, locked ballot boxes, supervised counting, provisions for recounting and more.
With the advent of computer technology has come the prospect of faster counting of votes, and even, some hope, more secure and accurate voting. But the internet has also enabled hackers to attack voting systems and has given disinformation campaigns new tools to influence public opinion. Here are highlights of The Conversation’s coverage of these issues.
1. Voting machines are old
After the debacle of the 2000 election’s efforts to count votes, the federal government handed out massive amounts of money to the states to buy newer voting equipment that, everyone hoped, would avoid a repeat of the “hanging chad” mess. But almost two decades later, as Lawrence Norden and Christopher Famighetti at the Brennan Center for Justice at New York University explain, that one-time cash infusion has left a troubling legacy of aging voting machines:
“Imagine you went to your basement and dusted off the laptop or mobile phone that you used in 2002. What would happen if you tried to turn it on?”
That’s the machinery U.S. democracy depends on.
2. Not everyone can use the devices
Most voting machines don’t make accommodations for people with physical disabilities that affect how they vote. Juan Gilbert at the University of Florida quantifies the problem:
“In the 2012 presidential election, … The turnout rate for voters with disabilities was 5.7 percent lower than for people without disabilities. If voters with disabilities had voted at the same rate as those without a disability, there would have been three million more voters weighing in on issues of local, state and national significance.”
To date, most efforts to solve the problems have involved using special voting equipment just for people with particular disabilities. That’s expensive and inefficient – and remember, separate is not equal. Gilbert has invented an open-source (read: inexpensive) voting machine system that can be used by people with many different disabilities, as well as people without disabilities.
With the system, which has been tested and approved in several states, voters can cast their ballots using a keyboard, a joystick, physical buttons, a touchscreen or even their voice.
3. Machines are not secure
In part because of their age, nearly every voting machine in use is vulnerable to various sorts of cyberattacks. For years, researchers have documented ways to tamper with vote counts, and yet few machines have had their cyberdefenses upgraded.
The fact that the election system is so widespread – with multiple machines in every municipality nationwide – also makes it weaker, writes Richard Forno at the University of Maryland, Baltimore County: There are simply more opportunities for an attacker to find a way in.
“Voter registration and administration systems operated by state and national governments are at risk too. Hacks here could affect voter rosters and citizen databases. Failing to secure these systems and records could result in fraudulent information in the voter database that may lead to improper (or illegal) voter registrations and potentially the casting of fraudulent votes.”
4. Even without an attack, major concerns
Even if an attack never happens – or if nobody can prove one happened – public trust in elections is vulnerable to sore losers taking advantage of the fact that cyberweaknesses exist. Just that prospect could destabilize the country, argues Herbert Lin of Stanford University:
“State and local election officials can and should provide for paper backup of voting this (and every) November. But in the end, debunking claims of election rigging, electronically or otherwise, amounts to trying to prove something didn’t happen – it can’t be done.”
5. The Russians are a factor
American University historian Eric Lohr explains the centuries of experience Russia has in meddling in other countries’ affairs, but notes that the U.S. isn’t innocent itself:
“In fact, the U.S. has a long record of putting its finger on the scales in elections in other countries.”
Neither country is unique: Countries have attempted to influence each other’s domestic politics throughout history.
6. The real problems aren’t technological at all
In any case, the major threats to U.S. election integrity have to do with domestic policies governing how voting districts are designed, and who can vote.
Penn State technologist Sascha Meinrath discusses how partisan panels have “systematically drawn voting districts in ways that dilute the power of their opponent’s party,” and “chosen to systematically disenfranchise poor, minority and overwhelmingly Democratic-leaning constituencies.”
There’s plenty of work to be done.
Editors’ note: This is an updated version of an article originally published Oct. 18, 2016.
On Feb. 6, technology companies, educators and others mark Safer Internet Day and urge people to improve their online safety. Many scholars and academic researchers around the U.S. are studying aspects of cybersecurity and have identified ways people can help themselves stay safe online. Here are a few highlights from their work.
1. Passwords are a weakness
With all the advice to make passwords long, complex and unique – and not reused from site to site – remembering passwords becomes a problem, but there’s help, writes Elon University computer scientist Megan Squire:
“The average internet user has 19 different passwords. … Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”
That’s a good start.
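The core job Squire describes – generating a unique, hard-to-crack password for each site – can be sketched in a few lines of Python using the standard library’s `secrets` module. This is a toy illustration of the idea, not a password manager; the site names are invented:

```python
import secrets
import string

# Build passwords from letters, digits and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # secrets draws from the operating system's cryptographic
    # random source, unlike the predictable `random` module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per site, so a breach at one site can't cascade.
vault = {site: generate_password() for site in ("bank.example", "mail.example")}
```

A real password manager adds the hard part this sketch omits: encrypting that vault and remembering it for you.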
2. Use a physical key
To add another layer of protection, keep your most important accounts locked with an actual physical key, writes Penn State-Altoona information sciences and technology professor Jungwoo Ryoo:
“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. The chip itself contains a method of authenticating itself.”
Just don’t leave your keys on the table at home.
3. Protect your data in the cloud
Many people store documents, photos and even sensitive private information in cloud services like Google Drive, Dropbox and iCloud. That’s not always the safest practice because of where the data’s encryption keys are stored, explains computer scientist Haibin Zhang at University of Maryland, Baltimore County:
“Just like regular keys, if someone else has them, they might be stolen or misused without the data owner knowing. And some services might have flaws in their security practices that leave users’ data vulnerable.”
So check with your provider, and consider where to best store your most important data.
4. Don’t forget about the rest of the world
Sadly, in the digital age, nowhere is truly safe. Jeremy Straub from North Dakota State University explains how physical objects can be used to hijack your smartphone:
“Attackers may find it very attractive to embed malicious software in the physical world, just waiting for unsuspecting people to scan it with a smartphone or a more specialized device. Hidden in plain sight, the malicious software becomes a sort of ‘sleeper agent’ that can avoid detection until it reaches its target.”
It’s a reminder that using the internet more safely isn’t just a one-day effort.
Editor’s note: This is a roundup of material from The Conversation’s coverage of the 2018 World Economic Forum.
The annual gathering of global elites known colloquially as Davos is over – in case you missed it. So what does it matter?
While the rare attendance of the president of the United States may have made the biggest splash, several thousand other leaders in business, politics, academia and, well, celebritydom ensured there was plenty of other star wattage at the World Economic Forum’s three-day meeting in the Alps.
A few heads of state auditioned to take the U.S. president’s place as “leader of the free world,” while other attendees assessed the risks likely to batter the world economy in coming months and years. And the parties this year – we hear – were epic.
To help readers pierce the luxury bubble that surrounded the Alpine town of Davos from Jan. 23-26, we’ve been asking experts to examine some of the key themes and speeches.
1. A Davos primer and reading list
Since it was President Donald Trump’s first time attending Davos, we thought he might like a primer.
To that end, University of St. Thomas professor of ethics and business law Christopher Michaelson, who has previously attended the meeting, offered a little history on the World Economic Forum and suggested three novels and a children’s book to help the president acclimate himself to the unique culture at Davos.
“If only Trump could get over his distaste for books and read them,” Michaelson wrote.
2. Inequality, dishwashers and Elton John
The main theme at Davos was “Creating a Shared Future in a Fractured World,” which put growing concerns about inequality front and center. While receiving an award, musician and philanthropist Elton John decried economic inequality as “disgraceful.”
Michele Gilman, a professor of law at the University of Baltimore, gave the Davos elite credit for grappling with inequality but cautioned the issue won’t be meaningfully addressed until the forum panels “include the voices and concerns of everyone who doesn’t get to go there.”
“Politicians have little incentive to tackle these problems because they are beholden to the interests of their donors,” she wrote. “Until the drivers and dishwashers toiling in the chalets at Davos – along with their working brothers and sisters across the globe – get the microphones and push the levers of policy, economic inequality is likely to persist and perhaps get worse.”
3. Macron makes his own splash
French President Emmanuel Macron, in his own special address at the forum, declared “France is back!” – in English – as he touted his government’s reforms meant to improve French productivity and competitiveness.
But when Macron switched to French, he made a very different argument, wrote University of Michigan history professor Joshua Cole, in which he decried growth at any cost and called for a new “global contract” to replace globalization and fend off nationalists. It’s a “compelling vision” that may be hard to achieve, Cole argued.
“For Macron this is the danger of the present moment: a relapse into a sterile nationalism that is incapable of addressing the real challenges posed by the present,” he wrote. “Macron’s hope is that a united Europe might play this mediating role. A democratic Europe might thread the needle between the unregulated capitalism often endorsed by the United States and the statist and anti-democratic model provided by China.”
4. A coal clash
During the speech, Macron also pledged to end his country’s use of coal within four years – in sharp contrast to his American counterpart’s express interest in pushing policies that favor the fossil fuel. Jay Zagorsky, an Ohio State University economist, explained why the rhetoric of both leaders won’t be able to change the laws of economics.
“Politicians can claim all they want that they are for or against coal in Davos’ forums, making grand promises about ending the use of the dirty fuel or declaring their plans to make it cheaper to use as a way to protect jobs,” he wrote. “No matter what they say in speeches, however, economic forces will inevitably dictate whether reality can match their words.”
5. Merkel looks back and forward
German Chancellor Angela Merkel, another candidate for leader of the free world, spoke about a digital divide separating generations of Germans. Elizabeth Heineman, of the University of Iowa, delved into the history that makes Germans more cautious than Americans about turning their data over to businesses or the state.
“Merkel’s – and Europe’s – quandary is this: how to move forward in the digital age when Europe’s contribution is to seek balance between state power, individual rights and the dynamism of capitalism,” she wrote.
In her speech, Merkel worried about Germany being steamrolled as it conducted “philosophical debates about the sovereignty of data.”
But Heineman argues that achieving any balance “means slowing things down. It means philosophizing.”
6. A currency crash
The Trump administration was already making headlines well before the president himself arrived in Davos on Jan. 25. Treasury Secretary Steve Mnuchin, for one, caused the dollar to nosedive after he broke with longstanding tradition and said a weak greenback would be “good” for the U.S.
Benjamin Cohen, professor of international political economy at the University of California, Santa Barbara, described what was so “alarming” about Mnuchin’s words.
“It came as a shock to many – including me – when Mnuchin declared that a depreciation of the greenback is ‘obviously’ welcome,” Cohen wrote. “Prolonged depreciation could severely erode the dominant position of the greenback as the world’s leading currency.”
7. Trump strikes a chord
Trump, in his 15-minute speech to forum delegates, sought to make his “America First” policies palatable to an audience much more inclined to international alliances and plotting a course toward a shared future.
While the president’s words were well-received in some quarters for their perceived pragmatism, they struck a discordant note in the ears of at least one person in attendance. Stephen D. Smith, director of the Shoah Foundation at the University of Southern California, argued that Trump’s promotion of a “my country first” brand of isolationism turns the main theme of the forum on its head, more likely fracturing the world than sharing it.
“Trump’s insistence that ‘putting America first’ is his duty as president – just as other heads of state must do so in their own countries – ignores the many political leaders at Davos, such as France’s Emmanuel Macron or Germany’s Angela Merkel, who were genuinely trying to find shared interests at the forum,” Smith wrote.
8. Bullish billionaires
Trump’s words had a very different impact on at least some of the business leaders and billionaires in attendance, who seem to have decided to look past the differences in style they find less appealing and focus on the impact of “pro-business” policies such as tax cuts and deregulation.
However, Georgia State political scientist Charles Hankla cautioned it’s a bit too soon to celebrate.
“It is important to remember that the long-term business costs of Trump’s destabilizing influence are likely to be much greater than any short-term policy benefits,” he argued. “This is because businesses must operate within a social and political context, one which influences their success at every step.”
In 2017 an evangelical perspective influenced many political decisions, as President Donald Trump embraced the key constituency that voted overwhelmingly in his favor. As recently as Dec. 6, President Trump announced that the U.S. would recognize Jerusalem as the capital of Israel, a move embraced by many evangelicals for its significance to biblical prophecy.
Earlier in the year Trump made several other announcements keeping in mind his conservative Christian supporters. He nominated Judge Neil M. Gorsuch, a conservative judge, to the Supreme Court. He also brought evangelical Christian leader Jerry Falwell Jr. to head the White House education reform task force, and Betsy DeVos, a conservative advocate of school choice, to serve as secretary of education.
I’ve been editing The Conversation’s ethics and religion desk since February 2017. As mainstream media outlets covered how Trump was embracing evangelical politics, at The Conversation we strived to provide historical context to these developments, as the following six articles exemplify.
1. History of the end-times narrative
Trump’s move on Jerusalem was widely understood as being linked to a biblical prophecy. Many evangelical Christians believe in an end-times narrative that promises the return of Jesus to Earth to defeat all God’s enemies and establish God’s kingdom. The nation of Israel and the city of Jerusalem are crucial to the fulfillment of this prophecy. This is part of a theology considered to be a literal reading of the Bible.
However, Julie Ingersoll, religious studies professor at the University of North Florida, explains that this theology is actually a relatively new interpretation that dates to the 19th century and the work of Bible teacher John Nelson Darby.
Darby argued that the Jewish people needed to have control of Jerusalem and build a third Jewish temple on the site where the first and second temples were destroyed. This would be a precursor to the Battle of Armageddon, when Satan would be defeated and Christ would establish his earthly kingdom.
With the creation of the state of Israel in 1948, this theology suddenly seemed feasible, and, as Ingersoll further explains, the end-times framework became popularized in the 1970s and ’80s through novels and movies.
“It’s impossible to overemphasize,” she writes, “the effects of this framework on those within the circles of evangelicalism where it is popular. A growing number of young people who have left evangelicalism point to end-times theology as a key component of the subculture they left. They call themselves ‘exvangelicals’ and label teachings like this as abusive.”
2. The Moral Majority
Evangelicals have for decades played a prominent role in American politics. As Richard Flory, senior director of research and evaluation at USC Dornsife, wrote, President Trump’s appointing of Jerry Falwell Jr. to spearhead education reform is best explained by his family’s legacy.
Falwell Jr. is a relatively minor political and religious figure. It was his late father, Jerry Falwell Sr., who was, and continues to be, enormously influential in American politics.
Falwell Sr. founded the Moral Majority in 1979 as a conservative Christian political lobbying group that promoted “traditional” family values and prayer in schools and opposed LGBT rights, the Equal Rights Amendment and abortion – all key issues in the Trump administration as well.
“Republican candidates for office, dating back to Reagan and George H.W. Bush,” Flory says, “recognized the power of the religious right as a voting bloc.”
3. Billy Graham and Eisenhower
Before Falwell, it was evangelist Billy Graham who left a deep impact on conservative politics.
The National Prayer Breakfast, now an annual political tradition in Washington, D.C., attended by the American president, grew out of Billy Graham’s efforts with President Dwight Eisenhower.
As USC Annenberg religion scholar Diane Winston writes,
“Soon after his election in 1952, Eisenhower told Graham that the country needed a spiritual renewal. For Eisenhower, faith, patriotism and free enterprise were the fundamentals of a strong nation. But of the three, faith came first.”
It was, indeed, under Eisenhower that Congress voted to add “under God” to the Pledge of Allegiance and “In God We Trust” to the nation’s currency.
And readers may recall that it was at the 65th National Prayer Breakfast that President Trump made an announcement to repeal the Johnson Amendment and allow religious leaders to endorse candidates from the pulpit, a pledge he made on the 2016 campaign trail. It’s another matter that the repeal was eventually dropped from the Republican tax reform bill.
4. Christian movements
Besides these prominent individual conservative voices, there are other Christian groups trying to shape American politics and the religious landscape.
Two of our contributors pointed in particular to a fast-growing Christian movement that aims to bring God’s perfect society to Earth by placing “kingdom-minded people” in “powerful positions at the top of all sectors of society.”
Writing about this movement, Brad Christerson, professor of sociology at Biola University, and USC’s Richard Flory explain how it regards Trump as part of that plan. Other kingdom-minded people, they note, include Secretary of Energy Rick Perry, Secretary of Education Betsy DeVos and Secretary of Housing and Urban Development Ben Carson.
Christerson and Flory believe this to be the fastest-growing Christian group in America, and possibly in the world. Between 1970 and 2010, Protestant churches shrank by an average of 0.05 percent per year, while this group grew by an average of 3.24 percent per year – a figure they call “striking,” considering that the U.S. population grew an average of 1 percent per year over the same period.
5. History of pluralism
Over the past year our scholars also pointed out how American politicians – starting with the nation’s founding fathers – have strived to be inclusive.
University of Texas historian Denise A. Spellberg told the story of a 22-year-old Thomas Jefferson purchasing a copy of the Quran when he was a law student in Williamsburg, Virginia, 11 years before drafting the Declaration of Independence.
As she explains, Muslims, who arrived in North America as early as the 17th century, eventually comprised 15 to 30 percent of the enslaved West African population of British America. The book purchase, she says, was not only “symbolic” of this connection but also showed America’s early view of religious pluralism.
In Jefferson’s private notes was a paraphrase of the English philosopher John Locke’s 1689 “Letter on Toleration”:
“[he] says neither Pagan nor Mahometan [Muslim] nor Jew ought to be excluded from the civil rights of the commonwealth because of his religion.”
6. An inclusive nation?
Coming to the present, the question is how far President Trump has shifted the rhetoric of inclusiveness.
Trump, too, walked the “well-worn path,” in “proclaiming tolerance and highlighting commonality with Muslims,” wrote David Mislin, an assistant professor at Temple University. Analyzing President Trump’s address to leaders of some 50 Muslim nations during his visit to Saudi Arabia in May 2017, Mislin explained that Trump “used the language of a shared humanity and common God.”
However, Mislin also pointed out that there was no acknowledgment in Trump’s speech of the Muslim population in the United States or of its contribution to American society. For Trump, he writes, “Islam remains something foreign.”
Indeed, in this administration – backed by over 80 percent of the white evangelical vote – “the legacy of Falwell Sr. lives on,” writes Richard Flory, “at least for the near term.”
Over the course of 2017, people in the U.S. and around the world became increasingly concerned about how their digital data are transmitted, stored and analyzed. As news broke that every Yahoo email account had been compromised, as well as the financial information of nearly every adult in the U.S., the true scale of how much data private companies have about people became clearer than ever.
This, of course, brings them enormous profits, but comes with significant social and individual risks. Many scholars are researching aspects of this issue, both describing the problem in greater detail and identifying ways people can reclaim power over the data their lives and online activity generate. Here we spotlight seven examples from our 2017 archives.
1. The government doesn’t think much of user privacy
One major concern people have about digital privacy is how much access the police might have to their online information, like what websites people visit and what their emails and text messages say. Mobile phones can be particularly revealing, not only containing large amounts of private information, but also tracking users’ locations. As H.V. Jagadish at University of Michigan writes, the government doesn’t think smartphones’ locations are private information. The legal logic defies common sense:
“By carrying a cellphone – which communicates on its own with the phone company – you have effectively told the phone company where you are. Therefore, your location isn’t private, and the police can get that information from the cellphone company without a warrant, and without even telling you they’re tracking you.”
2. Neither do software designers
But mobile phone companies and the government aren’t the only ones with access to data on people’s smartphones. Mobile apps of all kinds can monitor location, user activity and data stored on their users’ phones. As an international group of telecommunications security scholars found, “more than 70 percent of smartphone apps are reporting personal data to third-party tracking companies like Google Analytics, the Facebook Graph API or Crashlytics.”
Those companies can even merge information from different apps – one that tracks a user’s location and another that tracks, say, time spent playing a game or money spent through a digital wallet – to develop extremely detailed profiles of individual users.
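A hypothetical sketch of how that cross-app merging could work: two apps report to the same tracker, which joins the records on a shared device identifier. All the names and values below are invented for illustration:

```python
# Reports the tracker receives from two unrelated apps, keyed by a
# shared device/advertising ID (hypothetical data).
location_app_report = {"device-123": {"last_location": "40.71,-74.00"}}
game_app_report = {"device-123": {"minutes_played": 847}}

# Join the reports on the shared ID to build one combined profile.
profiles: dict[str, dict] = {}
for report in (location_app_report, game_app_report):
    for device_id, data in report.items():
        profiles.setdefault(device_id, {}).update(data)
```

After the merge, `profiles["device-123"]` holds both apps’ observations in a single record – neither app alone knew that much about the user.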
3. People care, but struggle to find information
Despite how concerned people are, they can’t easily find out what’s being shared about them, when or with whom. Florian Schaub at the University of Michigan explains the conflicting purposes of apps’ and websites’ privacy policies.
That can leave consumers without the information they need to make informed choices.
4. Boosting comprehension
Another problem with privacy policies is that they’re incomprehensible. Anyone who does try to read and understand them will be quickly frustrated by the legalese and awkward language. Karuna Pande Joshi and Tim Finin from the University of Maryland, Baltimore County suggest that artificial intelligence could help:
“What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts.”
That would certainly make life simpler for users, but it would preserve a world in which privacy is not a given.
5. Programmers could help, too
Jean Yang at Carnegie Mellon University is working to change that assumption. At the moment, she explains, computer programmers have to keep track of users’ choices about privacy protections throughout all the various programs a site uses to operate. That makes errors both likely and hard to track down.
Yang’s approach, called “policy-agnostic programming,” builds sharing restrictions right into the software design process. That both forces developers to address privacy, and makes it easier for them to do so.
6. So could a new way of thinking about it
But it may not be enough for some software developers to choose programming tools that would protect their users’ data. Scott Shackelford from Indiana University discussed the movement to declare cybersecurity – including data privacy – a human right recognized under international law.
He predicts real progress will result from consumer demand:
“As people use online services more in their daily lives, their expectations of digital privacy and freedom of expression will lead them to demand better protections. Governments will respond by building on the foundations of existing international law, formally extending into cyberspace the human rights to privacy, freedom of expression and improved economic well-being.”
But governments can be slow to act, leaving people to protect themselves in the meantime.
7. The real basis of all privacy is strong encryption
The fundamental way to protect privacy is to make sure data is stored so securely that only the people authorized to access it are able to read it. Susan Landau at Tufts University explains the importance of individuals having access to strong encryption. And she observes that police and the intelligence community are coming around to this view:
“Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection …, which can protect against hacking and other data theft incidents.”
One day, perhaps, governments and businesses will have the same concerns about individuals’ privacy as people themselves do. Until then, strong encryption without special access for law enforcement or other authorities will remain the only reliable guardian of privacy.
Editor’s note: The following is a roundup of stories The Conversation has published on the GOP’s sweeping 2017 tax bill.
The Senate’s passage of the Republican tax plan on a party-line vote on Dec. 2 means the most significant overhaul of the U.S. tax code in a generation may be just days away from becoming the law of the land. All that remains is reconciling the Senate’s version with the one passed by the House in mid-November, a couple of additional votes and the president’s signature.
The Senate bill’s nearly 500 pages, some aspects of which were handwritten in the margins within hours of the 2 a.m. vote, contain changes that will greatly affect every facet of the economy, from health care to higher education and housing.
1. Sickness and the economy
Under the Affordable Care Act’s individual mandate, Americans must buy health insurance or face a penalty. But the Senate’s tax bill would eliminate this rule. According to the Congressional Budget Office, that will lead to 13 million more uninsured Americans by 2027.
That’s bad for the economy, says Diane Dewar, who studies health policy at the University at Albany, SUNY. When people don’t have health insurance, they’re more likely to get sick and miss work. They also may wait to go to the doctor until it’s absolutely necessary, racking up bills for uncompensated care that end up being absorbed by hospitals, insurance companies and others.
“If Americans become less healthy and have less access to health care,” she argues, “then everyone loses.”
2. An attack on higher education
The tax plans passed by the House and Senate contain several provisions that would have a big impact on universities and students alike, such as a tax on university endowments and changes that could significantly increase taxes for some students.
Benjamin Cohen, a professor of international political economy at the University of California, Santa Barbara, explains how these “appalling” provisions “will have adverse economic effects that will be both substantial and long-lasting.”
“Many schools will see their budgets cut,” he continued. “Faced with higher fees and tuition, many students will be forced to drop out – their dreams shattered, their earning potential stunted, their contribution to the American economy significantly curtailed.”
3. It’s not a popularity contest
So will voters punish Republican lawmakers for passing such an unpopular bill during next year’s midterms?
David Barker, director of the Center for Congressional and Presidential Studies at American University, sees the opposite – positive outcomes for incumbents.
“Nothing is more central to Republican orthodoxy than tax cuts for the wealthy,” Barker writes. “If Republican lawmakers hadn’t gotten this done now, while they had the chance, they could have expected donors to ignore their calls next year.”
Of course, passage of the law was also a win for the president – and may help him win reelection in 2020.
“And if Trump wins reelection, everything else that we associate with his candidacy and his presidency may be validated and copied by future politicians, on both sides, as ‘the way to win,’” Barker concludes, “leaving a political legacy that may far outlast the consequences of this tax bill.”
4. Tax ‘reform’?
Many refer to the Republican tax plan as a “reform” that will make the tax code simpler.
Taxpayers do want reform – just not the kind in the bills that just passed the House and Senate, argues Stephanie Leiser, a lecturer in public policy at University of Michigan.
“Research has shown that their most important gripe about taxes is the demoralizing feeling that the system is hopelessly complex and that other people are getting away with not paying their fair share. To use the president’s words, people think the ‘system is rigged.’”
The Republican plan, she writes, “will only exacerbate that feeling.”
5. ‘Dire’ impact on affordable housing
You might not expect a tax plan to have a big effect on the supply of affordable housing, yet Georgetown research fellow Michelle D. Layser explains how it will do just that.
“The supply of affordable housing is so low that there is no state, city or county in the country where a full-time minimum wage employee can afford to rent a two-bedroom unit,” she wrote. “These housing woes are sure to become more dire.”
Editor’s note: The following is a roundup of stories from The Conversation’s archive.
On Sept. 20, 2017, Hurricane Maria tore across Puerto Rico. The Category 4 storm was so massive – 300 miles wide – that it enveloped the island entirely, battering it with 155 mph winds and dropping almost two feet of rain.
The next day, Puerto Ricans awoke to a radically altered reality. Two months after the storm, the island still faces shortages of food, water, electricity, transportation, cell service and medical services – an American humanitarian crisis that even today shows few signs of improvement.
Here, experts answer five key questions about Puerto Rico in the aftermath of Hurricane Maria.
1. What has life been like since the storm?
For Puerto Ricans who stayed on the island, life has been extraordinarily hard. Just finding enough food and water to survive can be a daily struggle, especially for people in rural areas and places cut off from help by washed-out bridges or mudslides.
Evelyn Milagros Rodriguez, a librarian at the University of Puerto Rico’s Humacao campus, offers an insider’s view of what life is like on the island’s eastern shore (in Spanish here).
She hasn’t been able to go to work since the storm, she says, because the library is “mold-infested and the roof is leaking. The mold has gotten into our collection…most of the furniture and computers will have to be replaced.”
After three weeks of a near-total communication blackout, radio, television, telephones and internet are starting to recover. Still, Rodriguez says, electricity comes and goes, and “it took me more than two weeks just to write this article, between finding somewhere to charge my laptop and locating an internet connection strong enough to research the data and send a file by email.”
Hurricane Maria demolished an estimated 100,000 homes and buildings, and 90 percent of the island’s infrastructure is damaged or destroyed.
It’s also unsafe to venture outside at night. An island-wide curfew was lifted in October, but without streetlights, stoplights or police, driving and walking are dangerous after dark.
“Nothing is easy,” Rodriguez reports.
2. Why are things still so bad?
After a decade of fiscal decline and a May 2017 bankruptcy, Puerto Rico was exceptionally vulnerable by the time Maria hit. Before the hurricane, people already struggled with food insecurity, poor health care and crumbling public infrastructure, the result of both damaging U.S. policy and deepening financial crisis.
Now these problems are complicating Puerto Rico’s recovery, asserts Lauren Lluveras, a policy analyst at the University of Texas, Austin.
Lluveras, whose family is Puerto Rican, notes that in addition to preexisting financial hardship, a lackluster federal relief effort has made storm recovery much harder. The Trump administration delayed dispatching military personnel and material relief until after the hurricane made landfall and allowed a waiver to the Jones Act – a law dictating that only U.S.-built, U.S.-flagged vessels can carry goods between American ports – to lapse.
That “reduce[s] the number of ships that can bring aid to the island,” she says. Both of these federal actions “have slowed Puerto Rico’s recovery considerably.”
On Nov. 13, Puerto Rico’s governor asked the federal government for US$94.4 billion in aid. Previously, Congress had approved just $5 billion in disaster funding for Puerto Rico.
3. Why is the power still out?
Two months after Hurricane Maria, an estimated 75 percent of Puerto Ricans still don’t have electricity. At times, hundreds of thousands of households – particularly in San Juan and other urban areas – have seen power restored, only to be plunged into darkness again by a system failure.
That’s because almost half of Puerto Rico’s power generation comes from “old, very expensive oil-fired plants,” writes Peter Fox-Penner, director of Boston University’s Institute for Sustainable Energy.
Before it went bankrupt in 2017, PREPA, the island’s sole energy provider, had been hoping to upgrade these aged facilities and incorporate renewable energy sources like solar and gas. Then Maria knocked out the entire grid, and all of PREPA’s resources have gone toward just getting Puerto Rico’s lights turned back on.
The island’s extended outage is “a humanitarian crisis that has yet to be resolved,” Fox-Penner writes. He believes that any hope of Puerto Rico emerging from this storm with a greener, more durable grid – one better able to withstand future hurricanes – has been dashed.
On Nov. 17, PREPA’s director resigned.
4. How does living without power for so long affect people?
Shao Lin, a professor of public health at SUNY-Albany, has researched how prolonged blackouts impact health. She believes that Puerto Ricans can expect to see numerous lasting effects from this power outage, including mental health issues.
After Hurricane Sandy, the power was out for about 12 to 14 days in some parts of New York City. For months afterwards, Lin found, residents reported more emergency department visits due to anxiety and mood disorders. They were also more prone to excess drinking and problematic drug use.
The power outage in Puerto Rico has already lasted eight weeks, much longer than the blackout in New York City.
As a result, “We should expect to see a corresponding increase in disease – not only mental health issues, but also diseases that depend on electricity for treatment, such as renal failure, asthma and chronic obstructive pulmonary disease,” warns Lin.
5. Will this crisis change how the US treats Puerto Rico?
Pedro Caban, a professor at SUNY-Albany, thinks that the appalling aftermath of Hurricane Maria could improve Puerto Rico’s political status, moving the needle on longstanding harmful American policies.
Puerto Rico is an unincorporated territorial possession of the United States, meaning that the Puerto Rican government exercises only those powers that Congress allows. “In other words,” says Caban, “it is still a colony.”
The humanitarian crisis there has prompted the Puerto Rican diaspora in the U.S. to fight for their island. They are actively lobbying against some of the most restrictive colonial policies, among them the Jones Act and the oversight board that has controlled Puerto Rico’s budget since it declared bankruptcy earlier this year.
This could be a “watershed moment that redefines U.S. treatment of Puerto Rico,” writes Caban.
Beyond pressuring local officials and the federal government, Puerto Ricans across the U.S. have organized a nationwide campaign to raise funding and collect donations for Puerto Rico. An outraged and emboldened diaspora, it turns out, may finally get the federal government to resolve Puerto Rico’s damaging colonial status.
Editor’s note: The following is a roundup of archival stories.
Federal investigators following up on the mass shooting at a Texas church on Nov. 5 have seized the alleged shooter’s smartphone – reportedly an iPhone – but are reporting they are unable to unlock it, to decode its encryption and read any data or messages stored on it.
The situation adds fuel to an ongoing dispute over whether, when and how police should be allowed to defeat encryption systems on suspects’ technological devices. Here are highlights of The Conversation’s coverage of that debate.
#1. Police have never had unfettered access to everything
The FBI and the U.S. Department of Justice have in recent years – especially since the 2015 mass shooting in San Bernardino, California – been increasing calls for what they term “exceptional access,” a way around encryption that police could use to gather information on crimes both future and past. Technology and privacy scholar Susan Landau, at Tufts University, argues that limits and challenges to investigative power are strengths of democracy, not weaknesses:
“[L]aw enforcement has always had to deal with blocks to obtaining evidence; the exclusionary rule, for example, means that evidence collected in violation of a citizen’s constitutional protections is often inadmissible in court.”
Further, she notes that almost any person or organization, including community groups, could be a potential target for hackers – and therefore should use strong encryption in their communications and data storage:
“This broad threat to fundamental parts of American society poses a serious danger to national security as well as individual privacy. Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection (much like the kind Apple has been developing), which can protect against hacking and other data theft incidents.”
#2. FBI has other ways to get this information
The idea of weakening encryption for everyone just so police can have an easier time is increasingly recognized as unworkable, writes Ben Buchanan, a fellow at Harvard’s Belfer Center for Science and International Affairs. Instead,
“The future of law enforcement and intelligence gathering efforts involving digital information is an emerging field that I and others who are exploring it sometimes call ‘lawful hacking.’ Rather than employing a skeleton key that grants immediate access to encrypted information, government agents will have to find other technical ways – often involving malicious code – and other legal frameworks.”
Indeed, he observes, when the FBI failed to force Apple to unlock the San Bernardino shooter’s iPhone,
“the FBI found another way. The bureau hired an outside firm that was able to exploit a vulnerability in the iPhone’s software and gain access. It wasn’t the first time the bureau had done such a thing.”
#3. It’s not just about iPhones
When the San Bernardino suspect’s iPhone was targeted by investigators, Android researchers William Enck and Adwait Nadkarni at North Carolina State University tried to crack a smartphone themselves. They found that one key to encryption’s effectiveness is proper setup:
“Overall, devices running the most recent versions of iOS and Android are comparably protected against offline attacks, when configured correctly by both the phone manufacturer and the end user. Older versions may be more vulnerable; one system could be cracked in less than 10 seconds. Additionally, configuration and software flaws by phone manufacturers may also compromise security of both Android and iOS devices.”
#4. What they’re not looking for
What are investigators hoping to find, anyway? It’s nearly a given that they aren’t looking for emails the suspect may have sent or received. As Georgia State University constitutional scholar Clark Cunningham explains, the government already believes it is allowed to read all of a person’s email, without the email owner ever knowing:
“[The] law allows the government to use a warrant to get electronic communications from the company providing the service – rather than the true owner of the email account, the person who uses it.
“And the government then usually asks that the warrant be ‘sealed,’ which means it won’t appear in public court records and will be hidden from you. Even worse, the law lets the government get what is called a ‘gag order,’ a court ruling preventing the company from telling you it got a warrant for your email.”
#5. The political stakes are high
With this new case, federal officials risk weakening public support for giving investigators special access to circumvent or evade encryption. After the controversy over the San Bernardino shooter’s phone, public demand for privacy and encryption climbed, wrote Carnegie Mellon professor Rahul Telang:
“Repeated stories on data breaches and privacy invasion, particularly from former NSA contractor Edward Snowden, appears to have heightened users’ attention to security and privacy. Those two attributes have become important enough that companies are finding it profitable to advertise and promote them.
“Apple, in particular, has highlighted the security of its products recently and reportedly is doubling down and plans to make it even harder for anyone to crack an iPhone.”
It seems unlikely this debate will ever truly go away: Police will continue to want easy access to all information that might help them prevent or solve crimes, and regular people will continue to want to protect their private information and communications from prying eyes, whether that’s criminals, hackers or, indeed, the government itself.
Editor’s note: The following is a roundup of stories from The Conversation’s archive.
Once again, Americans are asking themselves the same familiar, heartsick questions:
How can gun violence be prevented? What policy or program could help save innocent lives? What is an approach that would be tolerable to people on both sides of the political spectrum?
#1. No ready answers
“Unfortunately, the research we need to answer these questions doesn’t exist – and part of the problem is that the federal government largely doesn’t support it,” explains Lacey Wallace, a criminal justice researcher at Penn State University.
Why not? In 1996, Congress passed the Dickey Amendment, which mandated that “none of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control.” From 1996 to 2013, CDC funding for gun research dropped by 96 percent.
It’s an urgent problem, Wallace writes. In 2015, for example, roughly 85,000 people were injured by firearms, including nearly 10,000 children.
#2. A public health emergency
One in four American children reports having easy access to a gun in the home, public health researchers point out.
“Parents sometimes do not fully understand child and youth development, impulsiveness or curiosity,” Xuan wrote.
“A recent study shows that what parents report about their children’s access to guns often contradicts children’s reports,” he continued. “The kids reveal that they know the location of guns in the house and have handled the gun, while parents reported they did not. For injury prevention, it is far more effective and long-lasting to change the environment by changing modifiable policies and norms than to try to change the way children behave.”
#3. A new debate
A mass shooting often brings out partisan politics. Those who want to regulate guns face off with those who want to protect the Second Amendment, and the political debate never seems to progress.
“A new dialogue is desperately needed among policymakers and the public,” writes Timothy M. Smith, a professor of international business at the University of Minnesota. “It could begin by shifting our focus away from the regulation of guns toward understanding (and mitigating) the social costs of firearm fatalities.”
From a purely economic perspective, Smith writes, the social costs of gun deaths likely exceeded US$300 billion in 2013. This is a staggering number, more than what the federal government spent on Medicaid in the same year. And that’s not including the more than 80,000 nonfatal firearm injuries each year.
Smith’s argument is not that you can put a price on human life, but that a government policy that discourages people from owning the most lethal types of guns – handguns – could protect society as a whole.
#4. Testing existing laws
Michael Siegel, a professor of community health services, and Molly Pahn of Boston University recently created a new database that offers insights into the effectiveness of existing laws across all 50 states for the past 27 years.
“This database is intended to help researchers evaluate the effectiveness of different state-level approaches to reducing gun violence,” Siegel and Pahn write. “By examining the relationship between changes in these laws over time and changes in firearm mortality, researchers may be able to identify which policies are effective and which are not.”
Editor’s note: On Friday, Oct. 20, “Third Rail with OZY” will ask: Is marriage dead?
This roundup of stories from The Conversation archive explores trends and pressures affecting the institution of marriage around the world.
#1: Fewer ‘I dos’
There’s no doubt: Fewer people are making a commitment to marriage.
Barely “more than half of adults in the U.S. say they’re living with a spouse,” writes Jay Zagorsky, an economist at The Ohio State University. “It is the lowest share on record, and down from 70 percent in 1967.”
What’s behind the trend?
“Some blame widening U.S. income and wealth inequality,” Zagorsky writes. “Others point the finger at the fall in religious adherence or cite the increase in education and income of women, making women choosier about whom to marry. Still others focus on rising student debt and rising housing costs, forcing people to put off marriage. Finally some believe marriage is simply an old, outdated tradition that is no longer necessary.”
However, Zagorsky writes, none of these factors alone can explain the trend.
#2: Delayed adolescence
Could it be that marriage rates are down because America’s youth is suffering from a Peter Pan syndrome?
Today’s teens are in no hurry to grow up, according to Jean Twenge, a professor of psychology at San Diego State University.
“The teen pregnancy rate has reached an all-time low,” Twenge writes. “Fewer teens are drinking alcohol, having sex or working part-time jobs. And as I found in a newly released analysis of seven large surveys, teens are also now less likely to drive, date or go out without their parents than their counterparts 10 or 20 years ago.”
“‘Adulting’ – which refers to young adults performing adult responsibilities as if this were remarkable – has now entered the lexicon,” Twenge writes. “The entire developmental path from infancy to full adulthood has slowed.”
This slowed path could also mean a delay in walking down the aisle.
#3: No matchmaking skills
This downward trend in marriage is worth our attention because “stable, satisfying marriages promote physical and mental health for adults and their children,” according to psychology professors Justin Lavner, Benjamin Karney and Thomas Bradbury.
What’s more, they explain, the abandonment of marriage is more pronounced among low-income Americans – prompting the government to try to turn things around.
The professors explain that relationship education programs are the cornerstone of government efforts to strengthen low-income Americans’ relationships and encourage them to get, or stay, married.
By interviewing low-income couples in Los Angeles, the professors concluded that communication may not be the main driver of relationship satisfaction for these couples.
They asked the couples themselves about the biggest sources of disagreement in their marriages. Their responses? Money management, household chores, leisure time, in-laws and children.
#4: Pity the bare branches
The downward trend in marriage is not limited to the United States.
“The United Nations gathered data for roughly 100 countries, showing how marriage rates changed from 1970 to 2005,” Zagorsky notes. “Marriage rates fell in four-fifths of them.”
“Australia’s marriage rate, for example, fell from 9.3 marriages per 1,000 people in 1970 to 5.6 in 2005. Egypt’s declined from 9.3 to 7.2. In Poland, it dropped from 8.6 to 6.5.
“The drop occurred in all types of countries, poor and rich.”
Xuan Li, an assistant professor of psychology at NYU Shanghai, introduced The Conversation readers to China’s involuntary bachelors.
Those “who fail to add fruit to their family tree are often referred to as ‘bare branches,’ or guanggun,” Li writes. “And the Chinese state has recently started to worry about the dire demographic trend posed by the growing number of bare branches.”
According to the 2010 national census data, “82.44 percent of Chinese men between 20 and 29 years of age have never been married, which is 15 percent more than women of the same age.”
Soon after the U.S. gained independence, Uncle Sam began to tax inherited wealth. These levies applied only intermittently, however, until 1916, when Congress and the Wilson administration established the modern estate tax in time for it to finance U.S. involvement in World War I.
Once a significant moneymaker that generated 10 percent of federal tax revenue, the estate tax now reaps only about 1 percent. Today, just one out of 500 estates – those left by people with at least US$5.5 million to their name, or couples with more than $11 million – gets taxed.
Still, if Congress were to end the estate tax, as the Trump administration and Republican lawmakers propose, the government might miss those funds. What’s more, nonprofits could see their budgets pinched by a decline in giving.
What happens after repeal
Without an estate tax, there are two likely scenarios. The people inheriting a larger share of great fortunes might give more of their windfalls to charity. Alternatively, they could keep more of the money to invest, enjoy or share with their families and friends.
The estate tax encourages giving by providing a dollar-for-dollar deduction from estate and gift tax liabilities matching any amount of money bequeathed to charities after death. Estates are officially taxed at a 40 percent rate now, but loopholes and workarounds push the average rate down to 17 percent, according to the Tax Policy Center.
When the price of anything rises – whether it be bacon or tennis balls – economists expect demand for that product or service to fall. Without an estate tax, there’s nothing to be gained, accounting-wise, from rich people writing posthumous charitable gifts into their wills.
The question is, do fewer multimillionaires write charities into their wills when this incentive goes away?
The money at stake is significant. Bequest giving has more than tripled in inflation-adjusted dollars over the last 40 years, rising to $30.36 billion in 2016 from $9.7 billion in 1976, according to the Giving USA report, which the Indiana University Lilly Family School of Philanthropy researches and writes in partnership with the Giving USA Foundation.
To be clear, the volume of bequests is often unpredictable over the short term and does not purely track tax policy changes. Frequent adjustments to the estate tax rate and exemption level, as well as market swings – which alter the value of assets like stocks, bonds and fine art – affect what happens in a given year.
So do some deaths. David Rockefeller, the successful banker and heir to a great fortune, who died this year at 101, had a net worth in excess of $3 billion despite giving $2 billion away during his lifetime.
When his estate auctions off an estimated $700 million in European ceramics, Chinese porcelain, paintings, furniture and other items from his assorted collections, the proceeds will go to charity, bumping up the total for bequests.
But I have found in my own research that, controlling for various factors, a 10 percent increase in the estate tax rate is associated with a nearly 7 percent increase in charitable bequest giving.
Conversely, raising the threshold for how large estates must be before they are subject to the tax, known as the “exemption level,” is associated with decreases in bequest giving, especially when the exemption is at or above $3 million.
What I saw indicates that when more wealthy people were exempted from the estate tax altogether, fewer of them wrote charities into their wills. And those who did leave money for a cause tended to make smaller donations.
In 2004, the Congressional Budget Office estimated a smaller decline of as little as 6 percent.
Since this research is fairly old and lots of things have changed since then, these studies may either underestimate or overestimate the effects of a repeal of the estate tax today.
Personally, I believe that these studies probably underestimate the effects of repeal because extrapolating what would happen with a hypothetical situation is always trickier than modeling an outcome based on a real-world event.
Until 2010, there was no relatively recent evidence of what might happen with a complete repeal. That year, the estate tax was essentially paused.
While its brevity and multiple idiosyncrasies limit what it shows, the episode does provide a case study of what might happen if the estate tax were repealed.
During the decade before 2010, bequests ranged from $18 billion to $24 billion a year – except in 2008, when they surged to more than $31 billion.
In 2009, bequest giving plummeted to $19 billion. This was for two reasons: The exemption level rose from $2 million to $3.5 million and the Great Recession dramatically drove down the value of stocks, bonds, real estate and other assets.
There were two options for the estates of people who died in 2010, when the Great Recession was over but its effects were lingering: Take a $5 million exemption and a 35 percent top marginal tax rate or a $0 exemption and a 0 percent top marginal tax rate.
Unsurprisingly, most chose the tax-free option.
Partly as a result of these two tracks, the Internal Revenue Service still collected $7 billion in estate and gift tax revenue in the 2010 fiscal year – even though theoretically the estate tax had been waived. And bequest giving, according to Giving USA, grew by 22 percent to $23.4 billion between 2009 and 2010 – admittedly from a Great Recession-induced, below-normal amount in 2009.
In 2011, once the estate tax was reinstated and the exemption returned to $5 million with a top marginal tax rate of 35 percent, bequest giving grew 7.6 percent to $25 billion. Since then, the exemption has been adjusted only for inflation. The estate tax rate held steady at 35 percent in 2012, later rising to 40 percent.
Bequest giving remained in the mid-$20 billion range for 2012 and 2013 then edged up to the low $30 billion range – about where it stood prior to the Great Recession in inflation-adjusted dollars.
What does all that mean? While there’s no clear pattern, suspending the estate tax didn’t eliminate bequest giving in 2010, even if it appears to have reduced it.
It’s hard to draw firm conclusions from this episode, as few estate lawyers or wealthy people anticipated the one-year repeal. That said, it was rumored at the time that many rich families had prepared multiple wills, to be deployed as needed according to the latest estate tax policies, or took other steps to take advantage of the unusual and shifting circumstances.
And it’s important to remember that people give for many different reasons and that bequests are different from other kinds of donations – it is truly a last chance to support a charity or cause. As such, people usually don’t give just because of a tax deduction, but all the studies I have seen indicate that taxing inherited wealth makes a difference.
What’s more, research also suggests that estate taxes can encourage donors to give more during their lifetimes. Several studies have estimated that eliminating the estate tax would usher in a decline of non-bequest giving in the 6 percent to 12 percent range or more.
In other words, repealing the estate tax would probably reduce giving to charity both during donors’ lifetimes and after their deaths.
Patrick Rooney is affiliated with the public policy advisory committees for Independent Sector and The Philanthropy Roundtable. The author and the IU Lilly Family School of Philanthropy have received grants, contracts, and donations from many foundations, corporations, charities, and individuals. However, none of them funded this research. The views expressed in this essay are strictly the author’s own and do not reflect policy stances of Indiana University or the Lilly Family School of Philanthropy.
Editor’s Note: On Friday, Oct. 6, “Third Rail with OZY” will discuss violence in the United States.
These stories from The Conversation archive explore how violence permeates different aspects of American society.
#1. Kids today
Do American parents teach their kids violent behavior through the use of corporal punishment?
A professor of psychiatry at SUNY Upstate Medical University and Tufts Medical School, Ronald Pies takes up the question “Is it OK to spank a misbehaving child once in a while?”
Pies begins by acknowledging that researchers and parents often disagree on this topic, but ultimately concludes “spanking a child may seem helpful in the short term, but is ineffective and probably harmful in the long term. The child who is often spanked learns that physical force is an acceptable method of problem solving.”
And yet, Pies doesn’t feel that parents who spank their children need a stern lecture – and certainly not an even stronger punishment.
“It isn’t that the parent is ‘evil’ by nature or is a ‘child abuser,’” Pies writes. “Often, the parent has been stressed to the breaking point, and is not aware of alternative methods of discipline – for example, the use of ‘time-outs,’ removal of privileges and positive reinforcement of the child’s appropriate behaviors.”
#2. Paddling still frequent
Unfortunately, parents’ belief in corporal punishment often follows their children to school.
As Joseph Gagnon of the University of Florida writes, “19 states still allow corporal punishment [in schools], despite research that clearly indicates such public humiliation is ineffective for changing student behavior and can, in fact, have long-term negative effects.”
According to Gagnon, approximately 838 students are paddled in American schools every day. And children in less affluent communities are more likely to be hit.
Why is this practice still so pervasive? Gagnon and his colleagues talked to school principals to find out. They learned, “principals cite pressure from parents as a primary reason for using corporal punishment. Despite the science, the idea that corporal punishment is effective, ‘Because that’s how I was raised,’ pervades the discussion.”
#3. A culture of aggression
Of course, schools aren’t the only institutions in the U.S. where physical violence takes place. The criminal justice system is another.
Paul Hirschfield of Rutgers University studies violence perpetrated by police in various countries.
“American police kill a few people each day, making them far more deadly than police in Europe,” Hirschfield writes.
Although the cause of police killings is complex, Hirschfield believes one factor is American gun culture – which causes the police to fear for their own safety in too many situations.
“American police are primed to expect guns …” Hirschfield writes. “It may make American policing more dangerous and combat-oriented. It also fosters police cultures that emphasize bravery and aggression.”
#4. Behind prison walls
Too few of us take the time to think about how that culture of aggression follows prisoners behind bars, writes Heather Ann Thompson, a professor of History and Afroamerican and African Studies at the University of Michigan.
“That so many are blissfully unaware of just how many people are, or have been, subject to containment or control is, perhaps, unsurprising,” Thompson writes. “Prisons are built to be out of sight and are, thus, out of mind.”
And yet, Thompson writes, “the closed nature of prisons remains a serious problem in this country” – and one that demands closer scrutiny.
“In September 2016, prisoners at facilities across the country erupted in protests for better conditions,” Thompson writes. “In March and April of 2017, prisons in Delaware and Tennessee similarly exploded. In each of these rebellions, the public was told little about what had prompted the chaos and even less about what happened to the protesting prisoners once order was restored.”
But, she writes, “it is obvious that much trauma takes place behind bars while we aren’t watching.”
Editor’s note: This is a roundup of gun control articles published by scholars from the U.S. and two other countries where deadly mass shootings are far less common.
An underresearched epidemic
Guns are a leading cause of death of Americans of all ages, including children. Yet “while gun violence is a public health problem, it is not studied the same way other public health problems are,” explains Sandro Galea, dean of Boston University’s School of Public Health.
That’s no accident. Congress has effectively blocked firearm-related research by the Centers for Disease Control and Prevention and the National Institutes of Health since 1996. Galea says:
“Unfortunately, a shortage of data creates space for speculation, conjecture and ill-informed argument that threatens reasoned public discussion and progressive action on the issue.”
The Australian model
The contrast with Australia is especially stark. Just as Congress was barring any research that might strengthen the case for tighter gun regulations, that country established very strict firearm laws in response to the Port Arthur massacre, which killed 35 people in 1996.
To clamp down on guns, the federal government worked with Australia’s states to ban semiautomatic rifles and pump action shotguns, establish a uniform gun registry and buy the now-banned guns from people who had purchased them before owning them became illegal. The country also stopped recognizing self-defense as an acceptable reason for gun ownership and outlawed mail-order gun sales.
“When it comes to firearms, Australia is a far safer place today than it was in the 1990s and in previous decades.”
There have been no mass murders since the Port Arthur massacre and the subsequent clampdown on guns, Chapman observes. In contrast, there were 13 of those tragic incidents over the previous 18 years – in which a total of 104 victims died. Other gun deaths have also declined.
Concerns about complacency
After so many years with no mass killings, some Australian scholars fear that their country may be moving in the wrong direction.
Twenty years after doing more than any other nation to strengthen firearm regulation, “many people think we no longer have to worry about gun violence,” say Rebecca Peters of the University of Sydney and Chris Cunneen at the University of New South Wales. They write:
“Such complacency jeopardizes public safety. The pro-gun lobby has succeeded in watering down the laws in several states. Weakening the rules on pistols so that unlicensed shooters can walk into a club and shoot without any waiting period for background checks has resulted in at least one homicide in New South Wales.”
In the UK
Like Australia, the U.K. tightened its gun regulations following its own 1996 tragedy – when a man killed 16 children and their teacher at Dunblane Primary School, near Stirling, Scotland.
Subsequently, the U.K. banned some handguns and bought back many banned weapons. There, however, progress has been less impressive, notes Helen Williamson, a researcher at the University of Brighton. On the one hand, the number of firearms offenses has declined from a high of 24,094 in 2004 to 7,866 in 2015. On the other, criminals are growing more “resourceful in identifying alternative sources of firearms,” she says, adding:
“Although the availability of high-quality firearms may have fallen, the demand for weapons remains. This demand has driven criminals to be resourceful in identifying alternative sources of firearms. There are growing concerns about how they could acquire instructions online on how to build a homemade gun, or even 3D-print a functioning pistol.”
Editor’s note: The following is a roundup of previously published articles.
Passwords are everywhere – and they present an impossible puzzle. Social media profiles, financial records, personal correspondence and vital work documents are all protected by passwords. To keep all that information safe, the rules sound simple: Passwords need to be long, different for every site, easy to remember, hard to guess and never written down. But we’re only human! What is to be done about our need for secure passwords?
Get good advice
Sadly, much of the password advice people have been given over the past decade-plus is wrong, and in part that’s because the real threat is not an individual hacker targeting you specifically, write five scholars who are part of the Carnegie Mellon University passwords research group:
“People who are trying to break into online accounts don’t just sit down at a computer and make a few guesses…. [C]omputer programs let them make millions or billions of guesses in just a few hours…. [So] users need to go beyond choosing passwords that are hard for a human to guess: Passwords need to be difficult for a computer to figure out.”
To help, those researchers have developed a system that checks passwords as users create them, and offers immediate advice about how to make each password stronger.
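The gap between human-speed and computer-speed guessing is easy to see with some back-of-the-envelope arithmetic. The sketch below is purely illustrative – the 10-billion-guesses-per-second rate is an assumed figure for a well-equipped offline attacker, not a number from the Carnegie Mellon researchers:

```python
def guess_time_days(alphabet_size: int, length: int,
                    guesses_per_sec: float = 1e10) -> float:
    """Days needed to exhaust every possible password of a given length,
    assuming an attacker making guesses_per_sec guesses (an assumption)."""
    total_passwords = alphabet_size ** length
    return total_passwords / guesses_per_sec / 86400  # seconds per day

# An 8-character, lowercase-only password falls in a fraction of a day...
print(f"8 lowercase chars: {guess_time_days(26, 8):.5f} days")

# ...while 16 characters drawn from all 95 printable ASCII symbols
# pushes exhaustive search past any practical timescale.
print(f"16 mixed chars:    {guess_time_days(95, 16):,.0f} days")
```

The point of the exercise: length and alphabet size enter the search space as an exponent, which is why "hard for a human to guess" and "hard for a computer to figure out" are such different standards.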
Use a password manager
All that computing power can work to our advantage too, writes Elon University computer scientist Megan Squire:
“The average internet user has 19 different passwords. It’s easy to see why people write them down on sticky notes or just click the ‘I forgot my password’ link. Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”
That sounds like a good start.
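What that generation step looks like under the hood can be sketched in a few lines of Python. This is an illustrative stand-in – not the code of any particular password manager – using the standard library's cryptographic random source:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a unique, hard-to-crack password the way a password
    manager does: cryptographically secure random draws from a large
    alphabet of letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different random password on every call
```

Note the use of `secrets` rather than `random`: the former is designed for security-sensitive values, while the latter is predictable and unsuitable for passwords.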
Getting emoji – 🐱💦🎆🎌 – into the act
Then again, it might be even better not to use any regular characters. A group of emoji could improve security, writes Florian Schaub, an assistant professor of information and of electrical engineering and computer science at the University of Michigan:
“We found that emoji passcodes consisting of six randomly selected emojis were hardest to steal over a user’s shoulder. Other types of passcodes, such as four or six emojis in a pattern, or four or six numeric digits, were easier to observe and recall correctly.”
Still, emoji are – like letters and numbers – drawn from a finite library of options. So they’re vulnerable to being guessed by powerful computers.
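A quick calculation makes both the gain and the limit concrete. The 2,800-symbol emoji keyboard size below is an assumption for illustration, not a figure from Schaub's study:

```python
# Assumed sizes of the symbol libraries (illustrative, not authoritative).
EMOJI_LIBRARY = 2800   # roughly the number of emoji on a modern keyboard
DIGITS = 10

six_emoji_space = EMOJI_LIBRARY ** 6   # all possible 6-emoji passcodes
six_digit_space = DIGITS ** 6          # all possible 6-digit PINs

print(f"6-emoji passcodes: {six_emoji_space:.2e}")
print(f"6-digit PINs:      {six_digit_space:.2e}")
print(f"ratio:             {six_emoji_space // six_digit_space:,}x larger")
```

The emoji space is enormously larger than a numeric PIN's, yet still finite and enumerable – which is why a randomly chosen passcode helps, but a powerful guessing rig is never fully ruled out.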
Drawing toward a solution
To add even more potential variation to the mix, consider making a quick doodle-like drawing to serve as a password. Janne Lindqvist from Rutgers University calls that sort of motion a “gesture,” and is working on a system to do just that:
“We have explored the potential for people to use doodles instead of passwords on several websites. It appeared to be no more difficult to remember multiple gestures than it is to recall different passwords for each site. In fact, it was faster: Logging in with a gesture took two to six seconds less time than doing so with a text password. It’s faster to generate a gesture than a password, too: People spent 42 percent less time generating gesture credentials than people we studied who had to make up new passwords. We also found that people could successfully enter gestures without spending as much attention on them as they had to with text passwords.”
Easier to make, faster to enter, and not any more difficult to remember? That’s progress.
A world without passwords
Any type of password is inherently vulnerable, though, because it is heir to centuries of tradition in writing, argues literature scholar Brian Lennon of Pennsylvania State University:
“[E]ven the strongest password … can be used anywhere and at any time once it has been separated from its assigned user. It is for this reason that both security professionals and knowledgeable users have been calling for the abandonment of password security altogether.”
What would be left then? Only attributes about who we are as living beings.
The unknowable password
Identifying people based not on what they know, but rather their actual biology, is perhaps the ultimate goal. This goes well beyond fingerprints and retina scans, Elon’s Squire explains:
“[A] computer game similar to ‘Guitar Hero’ [can] train the subconscious brain to learn a series of keystrokes. When a musician memorizes how to play a piece of music, she doesn’t need to think about each note or sequence. It becomes an ingrained, trained reaction usable as a password but nearly impossible even for the musician to spell out note by note, or for the user to disclose letter by letter.”
That might just do away with passwords altogether. And yet if you’re really just longing for the days of deadbolts, padlocks and keys, you’re not alone.
Don’t just leave things to a password
User authentication using an electronic key is here, as Penn State-Altoona information sciences and technology professor Jungwoo Ryoo writes:
“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. (It even typically has a hole to fit on a keychain.) The chip itself contains a method of authenticating itself … And it has USB or wireless connections so it can either plug into any computer easily or communicate wirelessly with a mobile device.”
Just don’t leave your keys on the table at home.
On Friday, Sept. 8, “Third Rail with OZY” opened by asking: “Is truth overrated? Is lying the American way?”
Of course, lies have long been a big part of American politics, but fibs, tall tales and whoppers also affect our home and work lives.
We searched The Conversation archive for stories that explore how, why and when people lie – and what happens as a result.
Do you lie – and why?
Liars aren’t born – but they do start early.
Gail Heyman is a professor of psychology at the University of California, San Diego who studies the skills that children – some as young as three and a half – need to develop before they can become successful liars. Heyman acknowledges the corrosive power of lying on relationships, organizations and institutions. But she also admits that lying is “a source of great social power, as it allows people to shape interactions in ways that serve their interests: They can evade responsibility for their misdeeds, take credit for accomplishments that are not really theirs, and rally friends and allies to the cause.”
Have you ever harnessed this “great social power” by telling a lie?
If you answered “no,” perhaps that’s true, but perhaps that’s just something you mistakenly believe – a falsehood.
Ronald W. Pies, a clinical professor of psychiatry at Tufts University School of Medicine, walked us through the difference between those two terms.
Someone “who deliberately misrepresents what he or she knows to be true is lying – typically, to secure some personal advantage,” Pies writes. “In contrast, someone who voices a mistaken claim without any intent to deceive is not lying. That person may simply be unaware of the facts, or may refuse to believe the best available evidence. Rather than lying, he’s stating a falsehood.”
Parsing lies from falsehoods requires us to understand another person’s motivation. That’s tricky business anytime – but it gets more complicated when the speaker you’re scrutinizing is the president of the United States.
Into the political realm
Donald Trump, of course, embraced what Kellyanne Conway later dubbed “alternative facts” in the first official act of his presidency – his inauguration speech.
During the speech, Trump claimed that unemployment went up under President Obama. It didn’t, as researchers at the University of Florida point out, but 67 percent of Trump’s supporters believed it at the time. Such misinformation contributes to Americans’ sense that there is a “reality gap” between conservatives and liberals in the United States.
But UF’s Lauren Griffin writes that these far-fetched claims aren’t “lies,” but something she sees as much more dangerous – bullshit.
Griffin quotes the philosopher Harry Frankfurt as explaining that a bullshitter “does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.”
Of course, “politically motivated skepticism of science is certainly not new,” as Elizabeth Suhay, an assistant professor of government at American University, observed. Trump didn’t invent the divide between scientists and politicians – or between science and policy.
“Science is consistently a political target precisely because of its political power,” Suhay writes. “The problem for science and evidence-based policy comes when politicians and other political actors decide to discredit the science on which a conclusion is based or bend the science to support their policy position. Call it ‘policy-based evidence’ as opposed to ‘evidence-based policy.’”
Could an embrace of “policy-based evidence” harm U.S. credibility in the world?
One example to consider is how the world reacted when American leadership turned its back on settled climate science and withdrew from the Paris Agreement. But perhaps embracing such hard truths is overrated.
Do you disagree? Make your voice heard in Third Rail’s online poll or tweet with the hashtag #ThirdRailPBS.
Nearly half a million people are expected to seek federal aid in the aftermath of the Category 4 hurricane, which already has dumped more than 30 inches of rain on the Houston area.
While this horrible hurricane is extreme, the number of disasters has doubled globally since the 1980s, with the damage and losses estimated at an average US$100 billion a year since the new millennium, and the number of people affected also growing.
Hurricane Katrina in 2005 was the costliest natural disaster in the U.S., with estimates between $100 billion and $125 billion. The death toll of Katrina is still being debated, but we know that at least 2,000 were killed and thousands were left homeless.
Worldwide, the toll is staggering. The triple disaster of an earthquake, tsunami and nuclear meltdown that started March 11, 2011 in Fukushima, Japan killed thousands, as did the 2010 Haiti earthquake.
The challenges to disaster relief organizations, including nongovernmental organizations, are immense. The majority operate under a single, common, humanitarian principle of protecting the vulnerable, reducing suffering and supporting the quality of life. At the same time, they need to compete for financial funds from donors to ensure their own sustainability.
This competition is intense. The number of registered U.S. nonprofit organizations increased from 12,000 in 1940 to more than 1.5 million in 2012. Approximately $300 billion is donated to charities in the United States each year.
At the same time, many stakeholders believe that humanitarian aid has not been as successful as it could be in delivering on its goals, due to a lack of coordination among NGOs, which results in duplication of services.
My team and I have been looking at a novel way to improve how we respond to natural disasters. One solution might be game theory.
Getting the right supplies to those in need is daunting
The need for improvement is strong.
Within three weeks following the 2010 earthquake in Haiti, 1,000 NGOs were operating in Haiti. News media attention of insufficient water supplies resulted in immense donations to the Dominican Red Cross to assist its island neighbor. As a result, Port-au-Prince was saturated with cargo and gifts-in-kind, so that shipments from the Dominican Republic had to be halted for multiple days. After the Fukushima disaster, donors shipped too many blankets and items of clothing – and even broken bicycles.
In fact, about 60 percent of the items that arrive at a disaster site are nonpriority items. Rescue workers then waste precious time dealing with these nonpriority supplies, while victims suffer because they do not receive critical supplies in a timely manner.
The delivery and processing of wrong supplies also adds to the congestion at transportation and distribution nodes, overwhelms storage capabilities and results in further delays of necessary items. The flood of donated inappropriate materiel in response to a disaster is often referred to as the second disaster.
The economics of disaster relief is challenged on the supply side, as organizations need to secure donations and ensure their own financial sustainability. On the demand side, victims’ needs must be fulfilled in a timely manner while avoiding wasteful duplication and logistical congestion.
Game theory in disasters
Game theory is a powerful tool for the modeling and analysis of complex behaviors of competing decision-makers. It received a tremendous boost from the contributions of the Nobel laureate John Nash.
Game theory has been used in numerous disciplines, from economics, operations research and management science to political science.
In the context of disaster relief, however, there has been little work done in harnessing the scope of game theory. It is, nevertheless, clear that disaster relief organizations compete for financial funds and donors respond to the visibility of the organizations in the delivery of relief supplies to victims through media coverage of disasters.
We modeled the costs incurred in delivering relief supplies, including congestion; the gain from delivering goods (since these NGOs are nonprofits and also wish to do good); and the financial donations – which the NGOs compete for – that they stand to acquire from media exposure at disaster sites.
These comprised each NGO’s “utility” function, which each sought to individually maximize. The NGOs also faced constraints in the volume of relief supplies that they had pre-positioned and could distribute to victims of the disaster.
We examined two scenarios:
When the NGOs were free from satisfying common minimum and maximum amounts of the relief item demands at points of need (a Nash Equilibrium model);
When the NGOs had to make sure they delivered the minimum needed supplies at each demand point for the victims but did not exceed the maximum amounts set by a higher-level organization.
Such constraints guarantee that the victims would be served appropriately while, at the same time, minimizing materiel convergence and congestion associated with unnecessary supplies (a Generalized Nash Equilibrium model because of the common/shared constraints). Such bounds would correspond to policies imposed by a higher-level humanitarian or governmental organization.
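The flavor of the two scenarios can be conveyed with a toy best-response computation. This is an illustrative stand-in with made-up parameters – not the authors’ actual formulation – in which two NGOs each choose how much to ship to a single demand point, benefits shrink as total shipments congest the site, and a shared cap plays the role of the imposed upper bound:

```python
def best_response(other_qty: float, supply: float, cap: float = None) -> float:
    """Maximize u(q) = (a - c*(q + other_qty)) * q over a grid of feasible q.
    a and c are hypothetical benefit and congestion parameters."""
    a, c = 10.0, 1.0
    best_q, best_u = 0.0, float("-inf")
    steps = 1000
    for i in range(steps + 1):
        q = supply * i / steps
        if cap is not None and q + other_qty > cap:
            break  # shared (Generalized Nash) constraint on total deliveries
        u = (a - c * (q + other_qty)) * q
        if u > best_u:
            best_q, best_u = q, u
    return best_q

def equilibrium(supplies=(8.0, 8.0), cap: float = None, iters: int = 200):
    """Iterate best responses until the two NGOs' choices settle down."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = best_response(q2, supplies[0], cap)
        q2 = best_response(q1, supplies[1], cap)
    return q1, q2

# Unconstrained Nash equilibrium: each NGO ships about 10/3 units,
# so the demand point receives roughly 6.7 units in total.
print(equilibrium())

# With a shared cap of 5 units, total flow is held to the bound --
# one of the many possible Generalized Nash outcomes.
print(equilibrium(cap=5.0))
```

Two features of the toy mirror the paper’s findings: without the bound, independent maximization over-delivers to the single visible demand point, and with a shared constraint the equilibrium is no longer unique – which is exactly why a coordinating authority is needed to pick a fair allocation among the many feasible ones.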
Policies and implications
We used a case study of Hurricane Katrina, because of its historic catastrophic nature.
We built the models using publicly available data, with the NGOs corresponding to the Red Cross, the Salvation Army and “other” NGOs collectively. Since Louisiana suffered the brunt of the damages, we selected, as demand points, 10 parishes in Louisiana.
Applying computer-based algorithms, we computed the relief item flows and the utilities of the NGOs in the noncooperative games without imposed policies in the form of bounds (Nash Equilibrium) and with (Generalized Nash Equilibrium).
An actionable framework for NGO decision-makers
A comparison of the outcomes under the Nash and Generalized Nash Equilibria quantifiably showed that coordination is critical to achieving better outcomes in humanitarian relief operations.
The Generalized Nash solution is not only capable of eliminating the possibility of under- or oversupply, but also guarantees – through competition – the efficient allocation of resources once the minimum requirements are met.
Without such imposed bounds, relief organizations may choose an “easy,” less costly route for delivering supplies, rather than the route to the destination where need is greatest.
Therefore, the game theory framework has significant benefits both for the disaster victims and for the NGOs. In addition, we also demonstrated that, under certain circumstances, the Generalized Nash solution is capable of attracting more donations than the unrestricted, competitive solution.
Our study has numerous implications to guide coordinating authorities. It provides a strong argument for the importance of these coordinating bodies in successful humanitarian relief efforts.
Specifically, our research demonstrates that, if authorities can impose the constraints on upper and lower demand levels for relief supplies, they can provide an effective mechanism to improve the disaster response. Response teams need a certain amount of supplies to save lives but not so much that it results in congestion and waste.
Governmental agencies or NGOs need to come together to set these values.
The Generalized Nash Equilibrium Game Theory model provides managers of NGOs with a strategic framework to analyze their interactions with other NGOs, while also providing insights into their own operations. Moreover, as our study reveals, the framework answers fundamental questions that every NGO must address: (1) How and where should we provide aid? and (2) How can we finance those operations? A computer-based model that can answer these questions provides an actionable framework for NGO decision-makers.
Our study further suggests that, despite the competition among NGOs for fundraising, there are strong reasons for them to collaborate, thereby strengthening their disaster response and achieving better results for those in need. In fact, our game theory analysis quantifiably shows that cooperation among NGOs may increase financial donations to all NGOs.
This is an updated version of an article that ran in The Conversation on March 9, 2017.
Anna Nagurney does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The following is a roundup of previously published articles.
The U.S. electricity grid, the sprawling network that delivers power to our homes and businesses, is changing rapidly – a point few experts will debate. But how policies should guide the future of the grid – and specifically which fuel sources should be used – is a highly contentious question.
Department of Energy Secretary Rick Perry in April asked DOE staff to prepare a study, which was released on August 24, to assess electricity markets and grid reliability. News of the review caused great trepidation among solar and wind advocates because Perry had singled out the importance of nuclear and coal – a favorite of President Trump – in maintaining grid reliability.
In the end, the study said the sharp decline in natural gas prices over the past decade is the primary reason coal has become less economic, rather than the spread of wind and solar. The report also found that wind and solar, which provide power intermittently, have not caused any insurmountable problems in the grid’s functioning – yet.
Lessons from Texas
Reactions to the report’s release have been mixed, and it’s not clear what policies might follow from it. But academics have written extensively about the dramatic changes happening behind the scenes on the grid. Most notably, four energy experts from the University of Texas looked at what happened when wind energy surged on the Texas grid, known as ERCOT – much of it during Perry’s time as governor.
Wind power did not crash the Texas grid because the state reformed how it operates its wholesale energy markets, they said. Yes, grid operators need to rely more on natural gas plants to compensate for varying wind and solar, but the wholesale price for energy has gone down. They wrote:
“Research at UT Austin shows that while installing significant amounts of solar power would increase annual grid management costs by $10 million in ERCOT, it would reduce annual wholesale electricity costs by $900 million. The result of all this is that renewables compete with conventional sources of power, but they do not displace nearly as much coal as cheap natural gas. In fact, cheap gas displaces, on average, more than twice as much coal than renewables have in ERCOT.”
Nuclear power plant operators cheered the DOE’s report because it noted the crucial role of nuclear in the current grid and recommended faster reviews for new plant construction. But should the federal government provide subsidies, as New York has done in one case, to keep nuclear power plants in operation?
Nuclear engineering professor Arthur Motta argued that policies should recognize the fact that nuclear power is reliable and produces no emissions during operation.
“Subsidizing carbon-free sources is justifiable to provide for the future greater good of the country because they provide climate change and clean air benefits. Perversely, however, the U.S. Environmental Protection Agency and most states have declined to consider rewarding the same benefits from existing nuclear power plants.”
On the other hand, Peter Bradford from the Vermont Law School and a former Nuclear Regulatory Commission member said that nuclear has always struggled to be economic, and policies to favor nuclear will cost consumers. He wrote,
“Nuclear power producers want government-mandated long-term contracts or other mechanisms that require customers to buy power from their troubled units at prices far higher than they would pay otherwise. Providing such open-ended support will negate several major energy trends that currently benefit customers and the environment.”
The stated rationale behind the study was that the U.S. grid needs to ensure it has “baseload” power sources that can operate around the clock, as nuclear, coal and natural gas plants can. And the DOE study does note that it’s worth studying what happens with a deeper penetration of solar and wind because they could cause reliability issues in the future.
California is on the vanguard of this change. As the state moves to shut down its last nuclear plant, UCLA researchers Eric Daniel Fournier and Alex Ricklefs explained that it will need a number of techniques, including energy storage, to meet its aggressive renewable energy targets. They wrote,
“Careful planning is needed to ensure that energy storage systems are installed to take over the baseline load duties currently held by natural gas and nuclear power, as renewables and energy efficiency may not be able to carry the burden.”
Meanwhile, the future of coal still does not look particularly bright – at least in the U.S., wrote Lucas Davis of the University of California, Berkeley:
“This dramatic change has meant tens of thousands of lost coal jobs, raising many difficult social and policy questions for coal communities. But it’s an unequivocal benefit for the local and global environment. The question now is whether the trend will continue in the U.S. and, more importantly, in fast-growing economies around the world.”
Editor’s note: The following is a roundup of archival stories.
On June 19, the U.S. Supreme Court announced that it would hear Gill v. Whitford, a case on partisan gerrymandering in Wisconsin.
This controversial practice – where states are carved up into oddly shaped electoral districts favoring one political party over another – has already ignited debates in a number of states, including North Carolina, Pennsylvania and Maryland.
The Supreme Court’s decision may provide some long-awaited guidance on whether gerrymandering is constitutional. To better understand what this news means, we turned to stories in our archive.
What is gerrymandering?
Gerrymandering is far from a new problem, explains Michel Balinski of École Polytechnique – Université Paris-Saclay, and this will not be the first time that the Supreme Court has considered it:
Practiced as a political art form for some two centuries, gerrymandering is now an exact science. Computer programs using vast data banks describing sociological, ethnic, economic, religious and political characteristics of the electorate determine districts – often of incredibly weird contours – that favor the party that drew the maps.
For an example of those weird contours, take a look at Ohio’s ninth district, nicknamed “the snake on the lake” for the way it stretches from Toledo to Cleveland.
“The representation of communities is made a mockery by maps that either splinter cities and counties or overwhelm them with voters ‘tacked’ into the district from distant rural areas,” writes Richard Gunther at The Ohio State University.
Americans often seem proud of their democracy, notes Pippa Norris at Harvard University, but experts rank U.S. elections among the worst in all Western democracies. According to one analysis, the U.S. scores only 62 on a 100-point assessment of election integrity.
There are many issues with our electoral process – including problems with campaign finance and voter registration – but gerrymandering stands out as the worst, writes Norris:
[A] large part of the blame can be laid at the door of the degree of decentralization and partisanship in American electoral administration. Key decisions about the rules of the game are left to local and state officials with a major stake in the outcome. For example, gerrymandering arises from leaving the processes of redistricting in the hands of state politicians, rather than more impartial judicial bodies.
Thanks to gerrymandering, Democrats likely won’t win back the House in 2018 or 2020, predict experts at Strathclyde University, University of Richmond, University of California, Irvine and California Polytechnic State University. They argue that it’s difficult for today’s politicians to claim that gerrymandered districts occurred by accident:
If a state government could have drawn unbiased districts, but chose to draw biased districts instead, then it has engaged in deliberate gerrymandering. It cannot claim that it did not realize what it was doing – modern districting software has allowed enough people to see the partisan consequences.
In search of solutions
Federal law dictates that congressional districts “distribute population evenly, be connected and be ‘compact,’” explains Kevin Knudson at the University of Florida.
Scholars have proposed a handful of ideas of how to redraw congressional districts more fairly. States might consider changing how votes are tabulated or appointing an independent commission to redraw the lines. Or, they could turn to new mathematical techniques and run simulations in search of the best map.
Some voters might wonder why all the bother, says Knudson:
One approach is to do nothing and leave the system as it is, accepting the current situation as part of the natural ebb and flow of the political process. But when one political party receives a majority of votes nationally yet does not have control of the House of Representatives – as occurred in the 2012 election – one begins to wonder if the system needs some tweaks.
Not just politics
Gerrymandering is often discussed in the realm of politics. But Derek W. Black at the University of South Carolina explores a case in Alabama where school districts have been redrawn to create racially segregated schools. He notes that this seems to be an unfortunate pattern across the country:
In many areas, this racial isolation has occurred gradually over time, and is often written off as the result of demographic shifts and private preferences that are beyond a school district’s control.
To commemorate African-American Music Appreciation Month this June, California Senator Kamala Harris released a Spotify playlist with songs spanning genres and generations, from TLC’s “Waterfalls” to Marvin Gaye’s “What’s Going On.”
In a nod to the integral role African-American musicians play in the country’s rich musical legacy, we’ve decided to highlight our own “playlist” of articles, pieces that feature icons like Michael Jackson and Tupac Shakur, along with forgotten – but no less important – voices, from Elizabeth Taylor Greenfield to the Rev. T.T. Rose.
The first black pop star is born
Before Aretha Franklin, before Ella Fitzgerald, there was Elizabeth Taylor Greenfield. A self-taught opera singer born in 1820, Greenfield had to overcome the belief that blacks couldn’t actually sing.
Penn State music instructor Adam Gustafson tells the story of Greenfield’s rise, which made audiences reconcile their racism with their ears:
“Greenfield was met with laughter when she took to the stage. Several critics blamed the uncouth crowd in attendance; others wrote it off as lighthearted amusement. Despite the inauspicious beginning, critics agreed that her range and power were astonishing.”
By the early 20th century, Americans were clamoring for the albums of black artists. The music industry was eager to oblige, but cordoned them off into a distinct genre: “race music.”
One of the most prominent early race labels was Paramount Records. Between 1917 and 1932, Paramount recorded a breathtaking range of seminal African-American artists. Unfortunately, as Penn State’s Jerry Zolten explains, black artists like the Rev. T.T. Rose and the Pullman Porters Quartet were ruthlessly exploited – and eventually forgotten.
“Bottom line: if record companies could get away with it, there was no bottom line. No negotiated contract to sign. No publishing. No royalties. Anonymity was also implicit in the deal, so many black artists were forgotten, their only legacy the era’s brittle shellac disks that were able to withstand the wear of time.”
University of Maryland, Baltimore County’s Clifford Murphy describes how these same industry forces tried to pigeonhole an ex-con named Huddie “Lead Belly” Ledbetter as a black blues artist.
But Lead Belly loved country stars like Gene Autry, and while he sang blues and spirituals, he also created songs influenced by the string band traditions of the white working class. Promoters, however, were interested in only a certain type of song:
“Though he had an immense repertoire, he was urged to record and perform songs like ‘Pick A Bale of Cotton,’ while songs considered ‘white,’ like ‘Silver Haired Daddy of Mine,’ were either downplayed or cast aside… Lead Belly was constrained by a commercial and cultural industry that wanted to present a certain archetype of African-American music.”
Michael Jackson breaks the mold
Only later would black artists be able to move freely across musical genres. Perhaps no artist stitched together a more diverse range of styles and influences than Michael Jackson, the King of Pop.
But Jackson was simultaneously derided as “Wacko Jacko,” a hopelessly deluded freak. McMaster University’s Susan Fast sees it differently. To Fast, the way Jackson lived his life was an extension of the risks he took in his music. Both were united by a central tenet: to collapse boundaries considered irrevocable.
“Michael Jackson – gender ambiguous; adored and reviled; human, werewolf, panther; black, white, brown; child, adolescent, adult – shattered the assumptions of a society that craves neat categories and compartmentalization. Order and normality are illusions, he said through his life and art.”
The triumph and tragedy of Tupac
In the 1980s, hip-hop – then a budding musical genre – found itself gravitating toward black nationalist messages. It was during this time that Tupac Shakur, the son of a Black Panther, came of age.
While R&B, soul and jazz musicians were largely silent about the challenges poor black communities faced, Tupac, in his music, directly confronted the hostile forces that threatened him and his peers: mass incarceration, poverty, illegal drugs and police brutality. But in Tupac’s meteoric rise and swift fall, UConn’s Jeffrey O.G. Ogbar sees the tragedies of an entire generation of black youth:
“Tupac’s life isn’t just an embodiment of the struggles, contradictions, creativity and promise of a generation. It also serves as a cautionary tale. His life’s abrupt end was a consequence of the allure of success, much like the pull of the streets.”