Category Archives: Transparency-Secrecy

Two Plus Two Equals Five – A second look at disaster death tolls

Two Plus Two Equals Five – By Philip Walker | Foreign Policy, August 17, 2011.

The death toll and level of destruction immediately following a disaster are always difficult to determine, but over time a consensus usually emerges between governments and aid organizations. But, as David Rieff points out, “Sadly, over the course of the past few decades, exaggeration seems to have become the rule in the world of humanitarian relief.… These days, only the most extreme, most apocalyptic situations are likely to move donors in the rich world.” And with donor fatigue an ever-present possibility, it is no surprise then that later studies that contradict the original, inflated estimates are criticized — or worse, ignored — for seemingly undermining the humanitarian cause.

Arriving at these estimates is no easy endeavor, as government agencies and relief organizations are rarely able to survey entire populations. Instead, emergency management experts rely on sound statistical and epidemiological techniques. But debating and questioning the numbers behind man-made and natural disasters is not just an academic exercise: the implications are huge. For example, relief agencies were restricted from operating in Darfur, partly because of Sudan’s anger that the U.S.-based Save Darfur Coalition had estimated that 400,000 people were killed in the region. Moreover, the U.N. Security Council used the International Rescue Committee’s death toll of 5.4 million in the Congo to put together its largest peacekeeping operation ever. Similarly, government aid pledges increase or decrease depending upon the extent of the disaster. Numbers do matter, and much depends upon their validity and credibility. What follows is a look at some recent disasters where the numbers just don’t match up.

Above, a view of some of the destruction in Banda Aceh, Indonesia, a week after the devastating earthquake and tsunami struck on Dec. 26, 2004. According to the U.S. Geological Survey, 227,898 people died and about 1.7 million people were displaced in 14 countries in Southeast Asia, South Asia, and East Africa. Indonesia, the country hardest hit by the disaster, initially claimed that 220,000 people had died or gone missing but ended up revising that number down to around 170,000.

THE DEADLIEST WAR IN THE WORLD

Discrepancy: 5.4 million vs. 900,000 dead in the Democratic Republic of the Congo between 1998 and 2008

The Democratic Republic of the Congo (DRC) has seen more than its fair share of conflict over the past 15 years. The war in the DRC officially broke out in 1998 and although the conflict technically ended in 2003 when the transitional government took over, fighting has continued in many of the country’s provinces. The conflict has been dubbed “Africa’s World War,” both due to the magnitude of the devastation and the number of African countries that have, at different times, been involved in the conflict. According to a widely cited 2008 report by the New York-based International Rescue Committee (IRC), “an estimated 5.4 million people have died as a consequence of the war and its lingering effects since 1998,” making it the world’s deadliest crisis since World War II. The organization is one of the largest providers of humanitarian aid in the Congo and is therefore deemed one of the few reliable sources on the conflict.

However, Andrew Mack, director of the Human Security Report Project at Simon Fraser University in Canada, said the IRC study did not employ appropriate scientific methodologies and that in reality far fewer people have died in the Congo. “When we used an alternative measure of the pre-war mortality rate for the IRC’s final three surveys, the figure dropped from 2.83 million to under 900,000,” Mack argued. (He also argued that international relief agencies — such as the International Rescue Committee — are facing a potential conflict of interest because they depend on donations that, in turn, are stimulated by their studies of death tolls. Those studies should be done by independent experts, not by relief agencies that depend on donations, he says.)
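To make the stakes of that methodological dispute concrete, here is a minimal Python sketch of how survey-based excess-death estimates are usually built: the gap between the observed crude mortality rate and an assumed pre-conflict baseline rate is multiplied by the population covered and the length of the period. Every number below is invented purely for illustration; none of them are the IRC’s or the Human Security Report Project’s actual inputs.

# Illustrative arithmetic only: how sensitive an excess-death estimate is to the
# assumed baseline (pre-conflict) mortality rate. All figures are hypothetical.

def excess_deaths(observed_cmr, baseline_cmr, population, months):
    """Excess deaths, with crude mortality rates given per 1,000 people per month."""
    return (observed_cmr - baseline_cmr) / 1000.0 * population * months

population = 40_000_000   # people covered by the surveys (hypothetical)
months = 60               # survey period in months (hypothetical)
observed = 2.6            # observed deaths per 1,000 per month (hypothetical)

for baseline in (1.5, 2.0, 2.3):
    estimate = excess_deaths(observed, baseline, population, months)
    print(f"baseline {baseline} per 1,000/month -> about {estimate:,.0f} excess deaths")

# Holding the surveys fixed and changing only the assumed baseline moves the
# estimate by millions, which is exactly the disagreement described above.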

Above, the body of a young man lying on the central market avenue of Ninzi, about 25 miles north of Bunia, where on June 20, 2003, Lendu militias launched an attack, killing and mutilating at least 22 civilians.

Discrepancy: 400,000 vs. 15,000 women raped in the Democratic Republic of the Congo between 2006 and 2007

A June 2011 study in the American Journal of Public Health found that 400,000 women aged 15-49 were raped in the DRC over a 12-month period in 2006 and 2007. The shockingly high number is equivalent to four women being raped every five minutes. Perhaps even more alarming, the new number is 26 times higher than the 15,000 rapes that the United Nations reported during the same period.

Maria Eriksson Baaz, a Swedish academic from the University of Gothenburg, has called the study into question by arguing that it is based on out-of-date and questionable figures. As a long-time researcher on women’s rights in the DRC, Baaz claims that extrapolations made from these figures cannot be backed up scientifically. In a recent interview with the BBC, she said it was difficult to collect reliable data in the Congo and that women sometimes claim to be victims in order to get free health care. “Women who have been raped can receive free medical care while women who have other conflict-related injuries or other problems related to childbirth have to pay,” she said. “In a country like the DRC, with [its] extreme poverty where most people can simply not afford health care, it’s very natural this happens.”

Above, Suzanne Yalaka breastfeeds her baby Barunsan on Dec. 11, 2003, in Kalundja, South Kivu province. Her son was born after she was raped by ten rebels from neighboring Burundi. She was abandoned by her husband and her husband’s family.

NORTH KOREAN FAMINE

Discrepancy: 2.4 million vs. 220,000 dead in North Korea between 1995 and 1998

Due to the regime’s secretive nature, reliable statistics on the 1990s famine in North Korea are hard to come by. Yet, surprisingly, on May 15, 2001, at a UNICEF conference in Beijing, Choe Su-hon, one of Pyongyang’s nine deputy foreign ministers at the time, stated that between 1995 and 1998, 220,000 North Koreans died in the famine. Compared with outside estimates, these figures were on the low end — presumably because it was in the regime’s interest to minimize the death toll.

A 1998 report by U.S. congressional staffers, who had visited the country, found that from 1995 to 1998 between 900,000 and 2.4 million people had died as a result of food shortages. It noted that other estimates by exile groups were substantially higher but that these numbers were problematic because they were often based on interactions with refugees from the northeastern province of North Hamgyong, which was disproportionately affected by the famine.

Above, North Koreans rebuilding a dike in Mundok county, South Pyongan province, in September 1997, following an August tidal wave after typhoon Winnie. The rebuilding effort was part of an emergency food-for-work project organized by the World Food Program. According to a former North Korean government official, during the famine — from 1993 to 1999 — life expectancy fell from 73.2 years to 66.8 years and infant mortality almost doubled, from 27 to 48 deaths per 1,000 live births.

GENOCIDE IN DARFUR

Discrepancy: 400,000 vs. 60,000 dead in Darfur between 2003 and 2005

In 2006, three years after the conflict in Darfur began, Sudanese President Omar al-Bashir publicly criticized the United Nations for exaggerating the extent of the fighting in Darfur. “The figure of 200,000 dead is false and the number of dead is not even 9,000,” he proclaimed. At the same time, outside groups like the Save Darfur Coalition and various governments, including the United States, were having a difficult time producing concrete numbers as well. Their only consensus was that the real death toll was many times higher than the numbers provided by Bashir.

In 2005, a year after U.S. Secretary of State Colin Powell told a U.S. congressional committee that the ethnic violence in Darfur amounted to “genocide,” Deputy Secretary of State Robert Zoellick estimated the death toll at between 60,000 and 160,000. Zoellick was widely criticized for understating the numbers. The World Health Organization estimated that 70,000 people had died over a seven-month period alone. At the same time, researchers for the Coalition for International Justice contended that 396,563 people had died in Darfur. Today, the Sudanese authorities claim that since the conflict began in 2003, 10,000 people have died, while the U.N. estimates that over 300,000 have been killed and another 2.7 million have been displaced.

Above, an armed Sudanese rebel arrives on Sept. 7, 2004, at the abandoned village of Chero Kasi less than an hour after Janjaweed militiamen set it ablaze in the violence-plagued Darfur region.

CYCLONE NARGIS 

Discrepancy: 138,000 vs. unknown death toll in Burma in 2008

Tropical cyclone Nargis made landfall in southern Burma on May 2, 2008, leaving a trail of death and destruction before petering out the next day. It devastated much of the fertile Irrawaddy delta and Yangon, the nation’s main city. Nargis brought about the worst natural disaster in the country’s history — with a death toll that may have exceeded 138,000, according to a study by the Georgia Institute of Technology. But, with a vast number of people still unaccounted for three years later, the death toll might even be higher. The Burmese authorities allegedly stopped counting for fear of political fallout.

It’s more common for countries hit by a devastating disaster to share their plight with the world and plead for a robust relief effort, but in the aftermath of cyclone Nargis the Burmese military regime sought to maintain control over news of the disaster — restricting access to journalists and censoring the release of information and images. Moreover, the United Nations and other relief agencies were initially banned from setting up operations. At the time, with over 700,000 homes blown away, the U.N. and the Red Cross estimated that over 2.5 million people were in desperate need of aid.

Above, school teacher Hlaing Thein stands on the wreckage of a school destroyed by cyclone Nargis in Mawin village in the Irrawaddy delta region on June 9, 2008.

 


EARTHQUAKE IN HAITI

Discrepancy: 318,000 vs. 46,000-85,000 dead in Haiti in 2010

The devastating earthquake of Jan. 12, 2010, killed over 318,000 people and left over 1.5 million people homeless, according to the Haitian government. International relief organizations generally estimate anywhere between 200,000 and 300,000 deaths.

However, a recently leaked report compiled for USAID by a private consulting firm claims that the death toll is likely between 46,000 and 85,000, and that roughly 900,000 people were displaced by the earthquake. The report has not yet been published, but its alleged findings have already been disputed by both Haitian authorities and the United Nations. Even the U.S. State Department, for now, is reluctant to endorse it, saying “internal inconsistencies” in some of the statistical analysis are currently being investigated prior to publication.

PAKISTAN FLOODS

Discrepancy: Large numbers affected vs. small death toll in Pakistan in 2010

Above, a young girl washes the mud from her toy at a water pump amid collapsed buildings at a refugee camp near Nowshera in northwest Pakistan on Sept. 23, 2010.

Figures provided by the United Nations and Pakistan’s government estimate that 20 million people were affected by the 2010 summer floods — the worst in the country’s history. Almost 2,000 people died, 3,000 were injured, 2 million homes were damaged or destroyed, and over 12 million people were left in need of emergency food aid, according to Pakistan’s National and Provincial Disaster Management Authority. Flood waters wiped out entire villages and vast stretches of farmland, affecting an area roughly the size of England. After surveying 15 key sectors across the country, the World Bank and Asian Development Bank announced in October 2010 an estimated $9.7 billion in damage — an amount more than twice that of Pakistan’s 2005 earthquake, which killed approximately 86,000 people. U.N. Secretary-General Ban Ki-moon characterized the destruction as more dire than that caused by the 2004 Indian Ocean tsunami and the Pakistani earthquake combined. “In the past I have visited the scenes of many natural disasters around the world, but nothing like this,” he stated.

David Rieff warns that “By continually upping the rhetorical ante, relief agencies, whatever their intentions, are sowing the seeds of future cynicism, raising the bar of compassion to the point where any disaster in which the death toll cannot be counted in the hundreds of thousands, that cannot be described as the worst since World War II or as being of biblical proportions, is almost certainly condemned to seem not all that bad by comparison.” This was the case in Pakistan, where the number affected by the flooding was gigantic but the death toll was relatively low — especially compared to the Haiti earthquake a few months earlier. As a result, the United Nations and other aid organizations were unable to raise large sums for the relief effort compared to previous disasters. “Right now, our level of needs in terms of funding is huge compared to what we’ve been receiving, even though this is the largest, by far, humanitarian crisis we’ve seen in decades,” said Louis-George Arsenault, director of emergency operations for UNICEF, in an interview with the BBC in August 2010.

As David Meltzer, senior vice president of international services for the American Red Cross, discerningly put it, “Fortunately, the death toll [in Pakistan] is low compared to the tsunami and the quake in Haiti. … The irony is, our assistance is focused on the living — and the number of those in need is far greater than in Haiti.”

 

Global cyber-espionage operation uncovered

Global cyber-espionage operation uncovered | InSecurity Complex – CNET News.


Shady RAT intrusions were rampant in 2008, the year of the Beijing Olympics. (Image credit: McAfee)

A widespread cyber-espionage campaign stole government secrets, sensitive corporate documents, and other intellectual property for five years from more than 70 public and private organizations in 14 countries, according to the McAfee researcher who uncovered the effort.

The campaign, dubbed “Operation Shady RAT” (RAT stands for “remote access tool”), was discovered by Dmitri Alperovitch, vice president of threat research at the cyber-security firm McAfee. Vanity Fair‘s Michael Joseph Gross was the first to write about the findings. The targets cut across industries, including government, defense, energy, electronics, media, real estate, agriculture, and construction. The governments hit include the U.S., Canada, South Korea, Vietnam, Taiwan, and India.

While most of the targets have removed the malware, the operation continues, according to McAfee. The company learned of the campaign in March while investigating a command-and-control operation it had discovered in 2009, but traced the activity back to 2006, Alperovitch said in a conference call. McAfee was able to gain control of the command-and-control server and monitor the activity.

Alperovitch said he had briefed senior White House officials, government agencies in the U.S. and other countries, and U.S. congressional staff. He also has notified the victims and is working with U.S. law enforcement agencies on the investigation, including shutting down the command-and-control server.

“We actually know of hundreds if not thousands of these servers also used by this actor,” he said in the conference call. “The entire economy is impacted by these intrusions. Every sector of the economy is effectively owned persistently and intellectual property is going out the door…It will have an impact on our jobs, the competitiveness of our industries, and on our overall economy.”

 


Typically, a target would get compromised when an employee with necessary access to information received a targeted spear-phishing e-mail containing an exploit that would trigger a download of the implant malware when opened on an unpatched system. The malware would execute and initiate a backdoor communication channel to the command-and-control server, Alperovitch wrote in the report, which was posted to the McAfee blog.

“This would be followed by live intruders jumping on to the infected machine and proceeding to quickly escalate privileges and move laterally within the organization to establish new persistent footholds via additional compromised machines running implant malware, as well as targeting for quick exfiltration the key data they came for,” Alperovitch wrote.

“Having investigated intrusions such as Operation Aurora [which targeted Google and others] and Night Dragon (systemic long-term compromise of Western oil and gas industry), as well as numerous others that have not been disclosed publicly, I am convinced that every company in every conceivable industry with significant size and valuable intellectual property and trade secrets has been compromised (or will be shortly), with the great majority of the victims rarely discovering the intrusion or its impact,” Alperovitch wrote. “In fact, I divide the entire set of Fortune Global 2000 firms into two categories: those that know they’ve been compromised and those that don’t yet know.”

Unlike recent denial-of-service attacks and data breaches from groups like Anonymous and LulzSec (see chart of recent attacks), these espionage cases are more persistent, insidious, and threatening, and they cause much more harm, revealing important research and development information that can help countries better compete in markets, according to Alperovitch.

 


“What we have witnessed over the past five to six years has been nothing short of a historically unprecedented transfer of wealth — closely guarded national secrets (including from classified government networks), source code, bug databases, email archives, negotiation plans and exploration details for new oil and gas field auctions, document stores, legal contracts, SCADA configurations, design schematics and much more has ‘fallen off the truck’ of numerous, mostly Western companies and disappeared in the ever-growing electronic archives of dogged adversaries,” he wrote.

“What is happening to all this data — by now reaching petabytes as a whole — is still largely an open question,” he continued. “However, if even a fraction of it is used to build better competing products or beat a competitor at a key negotiation (due to having stolen the other team’s playbook), the loss represents a massive economic threat not just to individual companies and industries but to entire countries that face the prospect of decreased economic growth in a suddenly more competitive landscape and the loss of jobs in industries that lose out to unscrupulous competitors in another part of the world, not to mention the national security impact of the loss of sensitive intelligence or defense information.”

It’s unclear exactly who is behind the operation, but Alperovitch believes it is state-sponsored, although he declined to speculate which country might be responsible.

An educated guess might be China, given the targets. They include organizations in the U.S. and most countries in Southeast Asia, but none in China, as well as many defense contractors. Also attacked were the United Nations, the World Anti-Doping Agency, and the International Olympic Committee and Olympic committees in three countries, which were targeted right before and after the 2008 Olympic Games in Beijing, according to the report. China has disputed allegations that it has engaged in cyber espionage or attacks in the past.

“The presence of political non-profits, such as the private western organization focused on promotion of democracy around the globe or U.S. national security think tank is also quite illuminating,” Alperovitch wrote.

The report has a chart that lists all 72 targets; most are not named but are listed by type and country or location, along with country of origin, start date of the initial compromise, and duration of the intrusions. There is also a fascinating timeline that shows each intrusion and its duration by year.

Espionage goes on all the time, but it’s not often that details surface publicly. Several weeks ago, security firm Invincea disclosed information about a spear-phishing campaign that was targeting the U.S. defense industry. In that case the e-mail purported to come from the U.S. Intelligence Advanced Research Projects Activity (IARPA) and used an Excel spreadsheet with defense contacts as bait, Invincea Chief Executive Anup Ghosh said in an interview today. More details are on the Invincea blog.

Researchers have to be careful in disclosing information about foreign cyber-espionage campaigns so they don’t compromise surveillance and investigations the U.S. government might be conducting related to those operations, Ghosh said.

“We couldn’t tie the operation to a nation-state, like McAfee did,” he said.

Updated August 3 at 6:30 a.m. PT with details from the McAfee report, at 9:58 a.m. PT with details from a conference call, and at 11:56 a.m. PT to clarify timing of McAfee investigation and include Invincea disclosing espionage campaign.

climate sceptics take note: raw data you wanted now available

OK, climate sceptics: here’s the raw data you wanted – environment – 28 July 2011 – New Scientist.

Anyone can now view for themselves the raw data that was at the centre of last year’s “climategate” scandal.

Temperature records going back 150 years from 5113 weather stations around the world were yesterday released to the public by the Climatic Research Unit (CRU) at the University of East Anglia in Norwich, UK. The only records missing are from 19 stations in Poland, which refused to allow them to be made public.

“We released [the dataset] to dispel the myths that the data have been inappropriately manipulated, and that we are being secretive,” says Trevor Davies, the university’s pro-vice-chancellor for research. “Some sceptics argue we must have something to hide, and we’ve released the data to pull the rug out from those who say there isn’t evidence that the global temperature is increasing.”

Hand it over

The university was ordered to release the data by the UK Information Commissioner’s Office, following a freedom-of-information request for the raw data from researchers Jonathan Jones of the University of Oxford and Don Keiller of Anglia Ruskin University in Cambridge, UK.

Davies says that the university initially refused on the grounds that the data is not owned by the CRU but by the national meteorological organisations that collect the data and share it with the CRU.

When the CRU’s refusal was overruled by the information commissioner, the UK Met Office was recruited to act as a go-between and obtain permission to release all the data.

Poland refused, and the information commissioner overruled Trinidad and Tobago’s wish for the data it supplied on latitudes between 30 degrees north and 40 degrees south to be withheld, as it had been specifically requested by Jones and Keiller in their FOI request and previously shared with other academics.

The price

The end result is that all the records are there, except for Poland’s. Davies’s only worry is that the decision to release the Trinidad and Tobago data against its wishes may discourage the open sharing of data in the future. Other research organisations may from now on be reluctant to pool data they wish to be kept private.

Thomas Peterson, chief scientist at the National Climatic Data Center of the US National Oceanographic and Atmospheric Administration (NOAA) and president of the Commission for Climatology at the World Meteorological Organization, agrees there might be a cost to releasing the data.

“I have historic temperature data from automatic weather stations on the Greenland ice sheet that I was able to obtain from Denmark only because I agreed not to release them,” he says. “If countries come to expect that sharing of any data with anyone will eventually lead to strong pressure for them to fully release those data, will they be less willing to collaborate in the future?”

Davies is confident that genuine and proper analysis of the raw data will reproduce the same incontrovertible conclusion – that global temperatures are rising. “The conclusion is very robust,” he says, explaining that the CRU’s dataset of land temperatures tallies with those from other independent research groups around the world, including the datasets generated by NOAA and NASA.

“Should people undertake analyses and come up with different conclusions, the way to present them is through publication in peer-reviewed journals, so we know it’s been through scientific quality control,” says Davies.

No convincing some people

Other mainstream researchers and defenders of the consensus are not so confident that the release will silence the sceptics. “One can hope this might put an end to the interminable discussion of the CRU temperatures, but the experience of GISTEMP – another database that’s been available for years – is that the criticisms will continue because there are some people who are never going to be satisfied,” says Gavin Schmidt of Columbia University in New York.

“Sadly, I think this will just lead to a new round of attacks on CRU and the Met Office,” says Bob Ward, communications director of the Grantham Research Institute on Climate Change and the Environment at the London School of Economics. “Sceptics will pore through the data looking for ways to criticise the processing methodology in an attempt to persuade the public that there’s doubt the world has warmed significantly.”

The CRU and its leading scientist, Phil Jones, were at the centre of the so-called “climategate” storm in 2009 when the unit was accused of withholding and manipulating data. It was later cleared of the charge.

9 Out of 10 Climate-Change-Denying Scientists Have Ties to Exxon

Nine Out of Ten Climate Denying Scientists Have Ties to Exxon Mobil Money – Environment – GOOD.

If you spend any time at all browsing comments on articles about climate change (and bless you if you’ve managed to avoid it), you’ve likely read the same handful of long-debunked arguments against the reality of anthropogenic global warming (or “man-made” global warming). Recently, you’ve also almost definitely seen links to this website—”900+ Peer-Reviewed Papers Supporting Skepticism of “Man-Made” Global Warming (AGW) Alarm”—created by the Global Warming Policy Foundation.

The problem is, of the top ten contributors of articles to that list, nine are financially linked to Exxon Mobil. Carbon Brief, which examined the list in detail, explains:

Once you crunch the numbers, however, you find a good proportion of this new list is made up of a small network of individuals who co-author papers and share funding ties to the oil industry. There are numerous other names on the list with links to oil-industry funded climate sceptic think-tanks, including more from the International Policy Network (IPN) and the Marshall Institute.

Compiling these lists is dramatically different to the process of producing IPCC reports, which reference thousands of scientific papers. The reports are thoroughly reviewed to make sure that the scientific work included is relevant and diverse.

It’s well worth reading the rest of the Carbon Brief analysis. According to the GWPF, the purpose of the post is to “provide a resource for peer-reviewed papers that support skepticism of AGW or AGW Alarm and to prove that these papers exist contrary to widely held beliefs.” It’s true that supporters of real climate science too often trot out the “peer-review” argument. While an essential cornerstone of science, peer-review “is not foolproof,” as the founders of Real Climate explained a long while back.

Unfortunately, exposés like this one don’t seem to matter much. Nearly four years ago, Newsweek ran a bombshell of a feature that broke down exactly how fossil fuel companies—and specifically Exxon Mobil—were funding the climate denial machine. A couple of years ago, Climate Cover-Up gave a much deeper, book-length look at exactly that. Last year, Naomi Oreskes and Erik Conway released Merchants of Doubt, which looks at how the very same tactics (and in some cases, the very same scientists) are being used in the anti-climate-science field now as were used by those who denied the health risks of cigarettes half a century ago.

Anyone paying close attention knows that Exxon Mobil and others who profit from selling fossil fuels are underwriting “science” that calls the reality of climate change into question. But the money shapes the messaging that pollutes the minds of those who aren’t paying quite as close attention.

Why Is The Federal Government Running Ads Secretly Created & Owned By NBC Universal?

Why Is The Federal Government Running Ads Secretly Created & Owned By NBC Universal? | Techdirt.

from the so-that’s-how-it-works… dept

We certainly suspected this when New York City first announced that it was running a series of silly and misleading videos as part of a media campaign to “Stop Piracy in NYC,” but now it’s been confirmed that these videos were not, in fact, New York City’s, but are purely NBC Universal’s. At the time, NYC had “thanked” NBC Universal (among others), but had not admitted that NBC Universal “owned” and had created the videos themselves. However, in response to one of the Freedom of Information requests that I filed with New York City, the city noted that the videos are property of NBC Universal. I had asked for any licensing info between NYC and Homeland Security/ICE because ICE was using the same videos. Since NYC had clearly suggested that those videos were the creation of the NYC government, I assumed that ICE must have licensed the videos from NYC. However, NYC responded to my request by saying that there was no such info to hand over, because it did not license the videos to Homeland Security. And the reason was that NYC did not own the videos:

The Mayor’s Office of Media and Entertainment has no records responsive to your request. Please note that NBC Universal owns the material, not the City of New York.

That’s fascinating information. Of course, I had also filed a separate FOI request for any info on the licensing agreement between NYC and NBC Universal. As of this writing there has been no response from NYC, in violation of New York State’s Freedom of Information Law, which requires a response within 5 business days (we’re way beyond that).

Still, at least give NYC credit for making it clear that NBC Universal had a hand in the creation of the videos, even if it left out the rather pertinent information that it created and owned the videos. While I find it immensely troubling that a municipal government would run PSAs created by corporate interests (without making that clear), I’m extremely troubled by the news that the federal government would run those same videos with absolutely no mention of the fact that the videos were created and owned by a private corporation with a tremendous stake in the issue.

Could you imagine how the press would react if, say, the FDA ran PSAs that were created and owned by McDonald’s without making that clear to the public? How about if the Treasury Department ran a PSA created and owned by Goldman Sachs? So, shouldn’t we be asking serious questions about why Homeland Security and ICE are running a one-sided, misleading corporate propaganda video, created and owned by a private company, without mentioning the rather pertinent information of who made it?

Does Homeland Security work for the US public… or for NBC Universal?

Earthquake? Terrorist bomb? Call in the AI

Earthquake? Terrorist bomb? Call in the AI – tech – 23 May 2011 – New Scientist.

In the chaos of large-scale emergencies, artificially intelligent software could help direct first responders

9.47 am, Tavistock Square, London, 7 July 2005. Almost an hour has passed since the suicide bombs on board three underground trains exploded. Thirty-nine commuters are now dead or dying, and many more are badly injured.

Hassib Hussain, aged 18, now detonates his own device on the number 30 bus – murdering a further 13 and leaving behind one of the most striking images of the day: a bus ripped open like a tin of sardines.

In the aftermath of the bus bomb, questions were raised about how emergency services had reacted to the blast. Citizens and police called emergency services within 5 minutes, but ambulance teams did not arrive on the scene for nearly an hour.

As the events of that day show, the anatomy of a disaster – whether a terrorist attack or an earthquake – can change in a flash, and lives often depend on how police, paramedics and firefighters respond to the changing conditions. To help train for and navigate such chaos, new research is employing computer-simulation techniques to help first responders adapt to emergencies as they unfold.

Most emergency services prepare for the worst with a limited number of incident plans – sometimes fewer than 10 – that tell them how to react in specific scenarios, says Graham Coates of Durham University, UK. It is not enough, he says. “They need something that is flexible, that actually presents them with a dynamic, tailor-made response.”

A government inquest, concluded last month, found that no additional lives were lost because of the delay in responding to the Tavistock Square bomb, but that “communication difficulties” on the day were worrying.

So Coates and colleagues are developing a training simulation that will help emergency services adapt more readily. The “Rescue” system comprises up to 4000 individual software agents that represent the public and members of emergency services. Each is equipped with a rudimentary level of programmed behaviours, such as “help an injured person”.

In the simulation, agents are given a set of orders that adhere to standard operating procedure for emergency services – such as “resuscitate injured victims before moving them”. When the situation changes – a fire in a building threatens the victims, for example – agents can deviate from their orders if it helps them achieve a better outcome.

Meanwhile, a decision-support system takes a big-picture view of the unfolding situation. By analysing information fed back by the agents on the ground, it can issue updated orders to help make sure resources like paramedics, ambulances and firefighters are distributed optimally.

Humans who train with the system can accept, reject or modify its recommendations, and unfolding event scenarios are recorded and replayed to see how different approaches yield different results. Coates presented his team’s work at the International Conference on Information Systems for Crisis Response and Management in Lisbon, Portugal, last week.
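As a rough illustration of how those two layers fit together, here is a small Python sketch: agents follow a standing order but may deviate when local conditions change, while a decision-support routine reallocates idle units according to what agents report. The class, rules, and numbers are invented for this example and are not taken from Coates’s Rescue system.

import random

class ResponderAgent:
    """A responder that follows standard operating procedure, with one allowed deviation."""
    def __init__(self, name, order):
        self.name, self.order = name, order

    def act(self, local_hazard):
        # Deviate from the standing order if following it would endanger the victim.
        if local_hazard == "fire" and self.order == "resuscitate before moving":
            return "move the victim clear of the fire first"
        return self.order

def decision_support(casualty_reports, idle_units):
    """Big-picture layer: send idle units to the site reporting the most casualties."""
    worst_site = max(casualty_reports, key=casualty_reports.get)
    return {unit: worst_site for unit in idle_units}

paramedics = [ResponderAgent(f"paramedic-{i}", "resuscitate before moving") for i in range(3)]
for agent in paramedics:
    print(agent.name, "->", agent.act(random.choice(["none", "fire"])))

# Reallocate spare resources based on what agents on the ground have reported.
print(decision_support({"site A": 4, "site B": 11}, ["ambulance-7", "fire-engine-2"]))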

That still leaves the problem of predicting how a panicked public might react to a crisis – will fleeing crowds hamper a rescue effort, or will bystanders comply with any instructions they receive?

To explore this, researchers at the University of Notre Dame in South Bend, Indiana, have built a detailed simulation of how crowds respond to disaster. The Dynamic Adaptive Disaster Simulation (DADS) also uses basic software agents representing humans, only here they are programmed to simply flee from danger and move towards safety.

When used in a real emergency situation, DADS will utilise location data from thousands of cellphones, triangulated and streamed from masts in the region of the emergency. It can make predictions of how crowds will move by advancing the simulation faster than real-time events. This would give emergency services a valuable head start, says Greg Madey, who is overseeing the project.
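A bare-bones Python sketch of that prediction loop follows, under heavy assumptions: a single safe point, straight-line movement, made-up coordinates and walking speed, and a fixed time multiplier, none of which come from the DADS project itself.

TIME_MULTIPLIER = 10        # simulate 10 seconds of movement per second of real time
SAFE_POINT = (0.0, 0.0)     # assumed evacuation point, in metres
SPEED = 1.3                 # assumed walking speed, metres per second

def step(positions, dt):
    """Move every simulated person straight toward the safe point for dt seconds."""
    moved = []
    for x, y in positions:
        dx, dy = SAFE_POINT[0] - x, SAFE_POINT[1] - y
        dist = max((dx ** 2 + dy ** 2) ** 0.5, 1e-9)
        travel = min(SPEED * dt, dist)
        moved.append((x + dx / dist * travel, y + dy / dist * travel))
    return moved

# Starting positions seeded from triangulated phone locations (made-up numbers here).
crowd = [(120.0, 40.0), (-80.0, 65.0), (30.0, -200.0)]
for _ in range(3):                              # three seconds of real time...
    crowd = step(crowd, 1.0 * TIME_MULTIPLIER)  # ...predicts 30 seconds of movement
print(crowd)                                    # where the crowd is expected to be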

A similar study led by Mehdi Moussaïd of Paul Sabatier University in Toulouse, France, sought to address what happens when such crowds are packed into tight spaces.

In his simulation, he presumed that pedestrians choose the most direct route to their destination if there is nothing in their way, and always try to keep their distance from those around them. Running a simulation based on these two rules, Moussaïd and his colleagues found that as they increased the crowd’s density, the model produced crushes and waves of people just like those seen in real-life events such as stampedes or crushes at football stadiums (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1016507108). The team hope to use their model to help plan emergency evacuations.
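Those two rules are simple enough to capture in a few lines. The toy one-dimensional Python sketch below uses invented parameters rather than the calibrated model from the paper; it only shows the qualitative effect that, with the same rules and the same amount of time, a dense crowd gets a noticeably smaller share of its people out of a corridor than a sparse one.

def simulate(n_people, corridor_len=50.0, steps=200, desired_gap=1.0, speed=0.5, body=0.2):
    """Toy 1-D crowd: everyone heads for the exit but avoids closing on the person ahead."""
    # Spread the crowd evenly along a corridor whose exit is at x = corridor_len.
    xs = [i * corridor_len / n_people for i in range(n_people)]
    for _ in range(steps):
        new_xs = []
        for x in xs:
            ahead = min((other for other in xs if other > x), default=None)
            dx = speed                                         # rule 1: head straight for the exit
            if ahead is not None and ahead - x < desired_gap:
                dx = min(speed, max(0.0, (ahead - x) - body))  # rule 2: keep your distance
            new_xs.append(x + dx)
        xs = new_xs
    return sum(x >= corridor_len for x in xs) / n_people       # share of the crowd that got out

for n in (20, 200):
    print(f"{n:>3} people: {simulate(n):.0%} reached the exit in the same time")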

Jenny Cole, head of emergency services at London-based independent think tank The Royal United Services Institute, wrote a report on how the different emergency services worked together in the wake of the London bombings. She remains “sceptical” about these kinds of simulations. “No matter how practical or useful they would be, there’s usually no money left in the end to implement them,” she says.

For his part, Coates says he plans to release his system to local authorities for free as soon as it is ready.

A cacophony of tweets

In the chaotic moments after disaster strikes, people often turn to Twitter for information. But making sense of a flurry of Twitter posts can be difficult.

Now Jacob Rogstadius at the University of Madeira in Portugal and his team have developed a system that sorts updates from Twitter by keyword – for example “Japan” or “earthquake” – and places them into an event timeline, without the need for hashtags.

In the next phase of development, people will look at tweets clustered in this way to judge the pertinence and reliability of different sources of information, or request more – pictures of the area, for example – to create a virtual “incident room” as the crisis unfolds.
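As a rough sketch of that first, automated step, the Python below filters a handful of tweets by keyword and groups them into time buckets to form a crude event timeline. The tweet dictionaries, field names, and keywords are hypothetical placeholders; this is not Rogstadius’s system or the Twitter API.

from collections import defaultdict
from datetime import datetime

KEYWORDS = {"japan", "earthquake", "tsunami"}   # hypothetical watch list

tweets = [
    {"time": "2011-03-11 05:48", "text": "Huge earthquake just hit Japan"},
    {"time": "2011-03-11 06:02", "text": "Tsunami warning issued for the coast"},
    {"time": "2011-03-11 06:15", "text": "My cat is adorable"},   # unrelated, filtered out
]

def build_timeline(tweets, bucket_minutes=15):
    """Keep tweets matching any keyword (no hashtags needed) and group them
    into fixed-width time buckets to form a rough event timeline."""
    timeline = defaultdict(list)
    for tweet in tweets:
        words = {w.strip(".,!?").lower() for w in tweet["text"].split()}
        if words & KEYWORDS:
            ts = datetime.strptime(tweet["time"], "%Y-%m-%d %H:%M")
            bucket = ts.replace(minute=ts.minute - ts.minute % bucket_minutes)
            timeline[bucket].append(tweet["text"])
    return dict(sorted(timeline.items()))

for bucket, texts in build_timeline(tweets).items():
    print(bucket.strftime("%H:%M"), texts)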

Effects of climate change in Arctic more extensive than expected; a clever proposal for climate change legislation

Effects of climate change in Arctic more extensive than expected, report finds.

ScienceDaily (May 4, 2011) — A much reduced covering of snow, shorter winter season and thawing tundra: The effects of climate change in the Arctic are already here. And the changes are taking place significantly faster than previously thought. This is what emerges from a new research report on the Arctic, presented in Copenhagen this week. Margareta Johansson, from Lund University, is one of the researchers behind the report.

Together with Terry Callaghan, a researcher at the Royal Swedish Academy of Sciences, Margareta is the editor of the two chapters on snow and permafrost.

“The changes we see are dramatic. And they are not coincidental. The trends are unequivocal and deviate from the norm when compared with a longer term perspective,” she says.

The Arctic is one of the parts of the globe that is warming up fastest today. Measurements of air temperature show that the most recent five-year period has been the warmest since 1880, when monitoring began. Other data, from tree rings among other things, show that the summer temperatures over the last decades have been the highest in 2000 years. As a consequence, the snow cover in May and June has decreased by close to 20 per cent. The winter season has also become almost two weeks shorter — in just a few decades. In addition, the temperature in the permafrost has increased by between half a degree and two degrees.

“There is no indication that the permafrost will not continue to thaw,” says Margareta Johansson.

Large quantities of carbon are stored in the permafrost.

“Our data shows that there is significantly more than previously thought. There is approximately double the amount of carbon in the permafrost as there is in the atmosphere today,” says Margareta Johansson.

The carbon comes from organic material which was “deep frozen” in the ground during the last ice age. As long as the ground is frozen, the carbon remains stable. But as the permafrost thaws there is a risk that carbon dioxide and methane, a greenhouse gas more than 20 times more powerful than carbon dioxide, will be released, which could increase global warming.

“But it is also possible that the vegetation which will be able to grow when the ground thaws will absorb the carbon dioxide. We still know very little about this. With the knowledge we have today we cannot say for sure whether the thawing tundra will absorb or produce more greenhouse gases in the future,” says Margareta Johansson.

Effects of this type, so-called feedback effects, are of major significance for how extensive global warming will be in the future. Margareta Johansson and her colleagues present nine different feedback effects in their report. One of the most important right now is the reduction of the Arctic’s albedo. The decrease in the snow- and ice-covered surfaces means that less solar radiation is reflected back out into the atmosphere. It is absorbed instead, with temperatures rising as a result. Thus the Arctic has entered a stage where it is itself reinforcing climate change.

The future does not look brighter. Climate models show that temperatures will rise by a further 3 to 7 degrees. In Canada, the uppermost metres of permafrost will thaw on approximately one fifth of the surface currently covered by permafrost. The equivalent figure for Alaska is 57 per cent. The length of the winter season and the snow coverage in the Arctic will continue to decrease and the glaciers in the area will probably lose between 10 and 30 per cent of their total mass. All this within this century and with grave consequences for the ecosystems, existing infrastructure and human living conditions.

New estimates also show that by 2100, the sea level will have risen by between 0.9 and 1.6 metres, which is approximately twice the increase predicted by the UN’s panel on climate change, IPCC, in its 2007 report. This is largely due to the rapid melting of the Arctic icecap. Between 2003 and 2008, the melting of the Arctic icecap accounted for 40 per cent of the global rise in sea level.

“It is clear that great changes are at hand. It is all happening in the Arctic right now. And what is happening there affects us all,” says Margareta Johansson.

The report “Impacts of climate change on snow, water, ice and permafrost in the Arctic” has been compiled by close to 200 polar researchers. It is the most comprehensive synthesis of knowledge about the Arctic that has been presented in the last six years. The work was organised by the Arctic Council’s working group for environmental monitoring (the Arctic Monitoring and Assessment Programme) and will serve as the basis for the IPCC’s fifth report, which is expected to be ready by 2014.

Besides Margareta Johansson, Torben Christensen from Lund University also took part in the work.

More information on the report and on “The Arctic as a messenger for global processes – climate change and pollution” conference in Copenhagen can be found at:

http://amap.no/Conferences/Conf2011/

**************************

2 °C or not 2 °C? That is the climate question

Targets to limit the global temperature rise won’t prevent climate disruption. Tim Lenton says that policy-makers should focus on regional impacts.

As a scientist who works on climate change, I am not comfortable with recommending policy. Colleagues frown on it, and peer review of scientific papers slams anything that could be construed as policy prescription. Yet climate science is under scrutiny in multiple arenas, and climate scientists have been encouraged to engage more openly in societal debate.

I don’t want to write policies, but I do want to ensure that global efforts to tackle the climate problem are consistent with the latest science, and that all useful policy avenues remain open. Ongoing negotiations for a new climate treaty aim to establish a target to limit the global temperature rise to 2 °C above the average temperature before the industrial revolution. But that is not enough.

The target is linked to the United Nations Framework Convention on Climate Change (UNFCCC), which aims to “prevent dangerous anthropogenic interference with the climate system”. But that noble objective is nearly 20 years old and is framed too narrowly, in terms of the “stabilization of greenhouse gas concentrations in the atmosphere”. Long-term goals to limit temperature or concentrations have so far failed to produce effective short-term action, because they do not have the urgency to compel governments to put aside their own short-term interests.


Global average warming is not the only kind of climate change that is dangerous, and long-lived greenhouse gases are not the only cause of dangerous climate change. Target setters need to take into account all the factors that threaten to tip elements of Earth’s climate system into a different state, causing events such as irreversible loss of major ice sheets, reorganizations of oceanic or atmospheric circulation patterns and abrupt shifts in critical ecosystems.

Such ‘large-scale discontinuities’ are arguably the biggest cause for climate concern. And studies show that some could occur before global warming reaches 2 °C, whereas others cannot be meaningfully linked to global temperature.

Disruption of the south- or east-Asian monsoons would constitute dangerous climate change, as would a repeat of historic droughts in the Sahel region of Africa or a widespread dieback of the Amazon rainforest. These phenomena are not directly dependent on global average temperature, but on localized warming that alters temperature gradients between regions. In turn, these gradients are influenced by uneven distribution of anthropogenic aerosols in the atmosphere.

Equally, an abrupt shift in the regions in which dense masses of water form in the North Atlantic could dangerously amplify sea-level rises along the northeastern seaboard of the United States. But the point at which that will occur depends on the speed of climate change more than its magnitude.

Even when a threshold can be directly related to temperature, as with the melting of ice sheets, it is actually the net energy input that is important. The rapid warming of the Arctic in recent years is attributable less to increasing carbon dioxide levels than to reductions in emissions of sulphate aerosols (which have a cooling effect), and to increases in levels of warming agents, including black-carbon aerosols and the shorter-lived greenhouse gases methane and tropospheric ozone.

Ultimately, crucial climate events are driven by changes in energy fluxes. However, the one metric that unites them, radiative forcing, is missing from most discussions of dangerous climate change. Radiative forcing measures the change in the net imbalance of energy that enters and leaves the lower atmosphere; it is a better guide to danger than greenhouse-gas concentrations or global warming. It takes into account almost all anthropogenic activities that affect our climate, including emissions of methane, ozone-producing gases and hydrofluorocarbons, and changes in land use and aerosol levels.

I suggest that the UNFCCC be extended. The climate problem, and the political targets presented as a solution, should be aimed at restricting anthropogenic radiative forcing to limit the rate and gradients of climate change, before limiting its eventual magnitude.


How would this help? A given level of radiative forcing is reached long before the resulting global temperature change is fully realized, which brings urgency to the policy process. The 2 °C target would translate into a radiative forcing of about 2.5 watts per square metre (W m⁻²), but to protect major ice sheets, we might need a tougher global target of 1.5 W m⁻². We will still need a binding target to limit long-term global warming. And because CO2 levels remain the most severe threat in the long term, a separate target could tackle cumulative carbon emissions. But while we wait for governments to reach an agreement on CO2, we can get to work on shorter-lived radiative-forcing agents.
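For readers wondering how a temperature target maps onto a forcing number like 2.5 W m⁻², the usual back-of-the-envelope link runs through the climate sensitivity parameter λ, which relates equilibrium warming to forcing. Taking λ ≈ 0.8 K per W m⁻², a commonly quoted mid-range value used here purely as an illustrative assumption, the arithmetic is roughly:

\[
\Delta T_{\mathrm{eq}} \approx \lambda \, \Delta F
\quad\Longrightarrow\quad
\Delta F \approx \frac{\Delta T_{\mathrm{eq}}}{\lambda}
\approx \frac{2\ \mathrm{K}}{0.8\ \mathrm{K\,(W\,m^{-2})^{-1}}}
\approx 2.5\ \mathrm{W\,m^{-2}}.
\]

On the same assumption, the tougher 1.5 W m⁻² target would correspond to holding eventual warming to roughly 1.2 °C.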

The beauty of this approach is that it opens separate policy avenues for different radiative-forcing agents, and regional treaties to control those with regional effects. For example, hydrofluorocarbon emissions could be tackled under a modification of the 1987 Montreal Protocol, which aimed to halt ozone depletion. And emissions of black-carbon aerosols and ozone-producing gases could be regulated under national policies to limit air pollution. This would both break the political impasse on CO2 and help to protect vulnerable elements of the Earth system.

Tim Lenton is professor of Earth system science in the College of Life and Environmental Sciences, University of Exeter, UK. e-mail: t.m.lenton@exeter.ac.uk

 

WikiLeaks wars: Digital conflict spills into real life

WikiLeaks wars: Digital conflict spills into real life – tech – 15 December 2010 – New Scientist.

Editorial: Democracy 2.0: The world after WikiLeaks

WHILE it is not, as some have called it, the “first great cyberwar“, the digital conflict over information sparked by WikiLeaks amounts to the greatest incursion of the online world into the real one yet seen.

In response to the taking down of the WikiLeaks website after it released details of secret diplomatic cables, a leaderless army of activists has gone on the offensive. It might not have started a war, but the conflict is surely a sign of future battles.

No one is quite sure what the ultimate political effect of the leaks will be. What the episode has done, though, is show what happens when the authorities attempt to silence what many people perceive as a force for freedom of information. It has also shone a light on the evolving world of cyber-weapons (see “The cyber-weapon du jour”).

WikiLeaks was subjected to a distributed denial of service (DDoS) attack, which floods the target website with massive amounts of traffic in an effort to force it offline. The perpetrator of the attack is unknown, though an individual calling himself the Jester has claimed responsibility.

WikiLeaks took defensive action by moving to Amazon’s EC2 web hosting service, but the respite was short-lived as Amazon soon dumped the site, saying that WikiLeaks violated its terms of service. WikiLeaks responded via Twitter: “If Amazon are so uncomfortable with the first amendment, they should get out of the business of selling books”.

With WikiLeaks wounded and its founder Julian Assange in custody, a certain section of the internet decided to fight back. Armed with freely available software, activists using the name “Anonymous” launched Operation Avenge Assange, targeting DDoS attacks of their own at the online services that had dropped WikiLeaks.


These efforts have so far had limited success, in part due to the nature of Anonymous. It is not a typical protest group with leaders or an organisational structure, but more of a label that activists apply to themselves. Anonymous has strong ties to 4chan.org, a notorious and anarchic message board responsible for many of the internet’s most popular memes, such as Rickrolling and LOLcats. The posts of unidentified 4chan users are listed as from “Anonymous”, leading to the idea of a collective anonymous campaigning force.

This loose group has previously taken action both on and offline against a number of targets, including Scientologists and the Recording Industry Association of America, but the defence of WikiLeaks is their most high-profile action yet. Kristinn Hrafnsson, a spokesman for WikiLeaks, said of the attacks: “We believe they are a reflection of public opinion on the actions of the targets.”

The “public” have certainly played a key role. The kind of DDoS attacks perpetrated by Anonymous are usually performed by botnets – networks of “zombie” computers hijacked by malicious software and put to use without their owner’s knowledge. Although Anonymous activists have employed traditional botnets in their attacks, the focus now seems to be on individuals volunteering their computers to the cause.

“I think there are two groups of people involved,” says Tim Stevens of the Centre for Science and Security Studies at King’s College London. The first group are the core of Anonymous, who have the technological know-how to bring down websites. The second group are ordinary people angry at the treatment of WikiLeaks and wanting to offer support. “Anonymous are providing the tools for these armchair activists to get involved,” says Stevens.

The human element of Anonymous is both a strength and a weakness. Though the group’s freely available LOIC software makes it easy for anyone to sign up to the cause, a successful DDoS requires coordinated attacks. This is often done through chat channels, where conversations range from the technical – “I have Loic set to 91.121.92.84 and channel set to #loic, is that correct” – to the inane – “please send me some nutella ice cream”.

There are continual disagreements about who and when to attack, though new tactics also emerge from the chat, such as Leakspin, an effort to highlight some of the less-publicised leaks, and Leakflood, a kind of analogue DDoS that attempts to block corporate fax machines with copies of the cables.

These chat channels are also occasionally knocked offline by DDoS attacks. Some blame “the feds”, but could governments – US or otherwise – actually be involved? (see “Are states unleashing the dogs of cyberwar?”)

The US Department of Defense’s recently launched Cyber Command has a dual remit: to defend US interests online and conduct offensive operations. Cyber Command is meant to defend .mil and .gov web domains, but do commercial websites qualify too? “Is PayPal really that important to national security that the US military would have a role in defending it?” asks Stevens, who also teaches in the Department of War Studies at King’s College London. “The US doesn’t have an answer to that particular conundrum, and they’re not alone – nobody does”.


The difficulty comes in assessing whether DDoS attacks are an act of cyberwar, a cybercrime or more akin to online civil disobedience.

Individual LOIC users may not even be breaking the law. “All that DDoS does is send the normal kind of traffic that a website receives,” says Lilian Edwards, professor of internet law at the University of Strathclyde in Glasgow, UK. “That has always been the legal problem with regulating DDoS – each individual act is in fact authorised by the site, but receiving 10 million of them isn’t.”

It’s hard to say what will happen next. Anonymous might continue its attempt to cause as much disruption as possible, but it could just as easily become fragmented and give up. With no leaders or central structure, it is unlikely to be stopped by a few arrests or server takedowns but may equally find it difficult to coordinate well enough to have an impact.

More worrying is the prospect that more organised groups may follow Anonymous’s example. If that happens, who will be responsible for stopping them – and will they be able to?


ForeclosureGate Could Force Bank Nationalization

t r u t h o u t | ForeclosureGate Could Force Bank Nationalization.

by: Ellen Brown, t r u t h o u t | News Analysis

photo
(Photo: Joey Parsons / Flickr)

For two years, politicians have danced around the nationalization issue, but ForeclosureGate may be the last straw. The megabanks are too big to fail, but they aren’t too big to reorganize as federal institutions serving the public interest.

In January 2009, only a week into Obama’s presidency, David Sanger reported in The New York Times that nationalizing the banks was being discussed. Privately, the Obama economic team was conceding that more taxpayer money was going to be needed to shore up the banks. When asked whether nationalization was a good idea, House Speaker Nancy Pelosi replied:

“Well, whatever you want to call it…. If we are strengthening them, then the American people should get some of the upside of that strengthening. Some people call that nationalization.

“I’m not talking about total ownership,” she quickly cautioned – stopping herself by posing a question: “Would we have ever thought we would see the day when we’d be using that terminology? ‘Nationalization of the banks?'”

Noted Matthew Rothschild in a March 2009 editorial:

[T]hat’s the problem today. The word “nationalization” shuts off the debate. Never mind that Britain, facing the same crisis we are, just nationalized the Bank of Scotland. Never mind that Ronald Reagan himself considered such an option during a global banking crisis in the early 1980s.

Although nationalization sounds like socialism, it is actually what is supposed to happen under our capitalist system when a major bank goes bankrupt. The bank is put into receivership under the FDIC, which takes it over.

What fits the socialist label more, in fact, is the TARP bank bailout, sometimes called “welfare for the rich.” The banks’ losses and risks have been socialized, but the profits have not. The bankers have been feasting on our dime without sharing the spread.

And that was before ForeclosureGate – the uncovering of massive fraud in the foreclosure process. Investors are now suing to put defective loans back on bank balance sheets. If they win, the banks will be hopelessly under water.

“The unraveling of the ‘foreclosure-gate‘ could mean banking crisis 2.0,” warned economist Dian Chu on October 21, 2010.

Banking Crisis 2.0 Means TARP II

The significance of ForeclosureGate is being downplayed in the media, but independent analysts warn that it could be the tsunami that takes the big players down.

John Lekas, senior portfolio manager of the Leader Short Term Bond Fund, said on “The Street” on November 2, 2010, that the banks will prevail in the lawsuits brought by investors. The paperwork issues, he said, are just “technical mumbo jumbo”; there is no way to unwind years of complex paperwork and securitizations.

But Yves Smith, writing in The New York Times on October 30, says it’s not that easy:

“The banks and other players in the securitization industry now seem to be looking to Congress to snap its fingers to make the whole problem go away, preferably with a law that relieves them of liability for their bad behavior. But any such legislative fiat would bulldoze regions of state laws on real estate and trusts, not to mention the Uniform Commercial Code. A challenge on constitutional grounds would be inevitable.

“Asking for Congress’s help would also require the banks to tacitly admit that they routinely broke their own contracts and made misrepresentations to investors in their Securities and Exchange Commission filings. Would Congress dare shield them from well-deserved litigation when the banks themselves use every minor customer deviation from incomprehensible contracts as an excuse to charge a fee?”

Chris Whalen of Institutional Risk Analytics told Fox Business News on October 1 that the government needs to restructure the largest banks. “Restructuring” in this context means bankruptcy receivership. “You can’t prevent it,” said Whalen. “We’ve wasted two years, and haven’t restructured the top banks, but for Citi. Bank of America will need to be restructured; this isn’t about the documentation problem, this is because [of the high] cost of servicing the property.”

Profs. William Black and Randall Wray are calling for receivership for another reason – the industry has engaged in flagrant, widespread fraud. “There was fraud at every step in the home finance food chain,” they wrote in The Huffington Post on October 25:

“[T]he appraisers were paid to overvalue real estate; mortgage brokers were paid to induce borrowers to accept loan terms they could not possibly afford; loan applications overstated the borrowers’ incomes; speculators lied when they claimed that six different homes were their principal dwelling; mortgage securitizers made false reps and warranties about the quality of the packaged loans; credit ratings agencies were overpaid to overrate the securities sold on to investors; and investment banks stuffed collateralized debt obligations with toxic securities that were handpicked by hedge fund managers to ensure they would self destruct.”

Players all down the line were able to game the system, suggesting there is something radically wrong not just with the players, but with the system itself. Would it be sufficient just to throw the culprits in jail? And which culprits? One reason there have been so few arrests to date is that “everyone was doing it.” Virtually the whole securitized mortgage industry might have to be put behind bars.

The Need for Permanent Reform

The Kanjorski amendment to the Banking Reform Bill passed in July allows federal regulators to preemptively break up large financial institutions that pose a threat to US financial or economic stability. In the financial crises of the 1930s and 1980s, the banks were purged of their toxic miscreations and delivered back to private owners, who proceeded to engage in the same sorts of chicanery all over again. It could be time to take the next logical step and nationalize not just the losses, but the banks themselves, and not just temporarily, but permanently.

The logic of that sort of reform was addressed by Willem Buiter, chief economist of Citigroup and formerly a member of the Bank of England’s Monetary Policy Committee, in The Financial Times following the bailout of AIG in September 2008. He wrote:

If financial behemoths like AIG are too large and/or too interconnected to fail but not too smart to get themselves into situations where they need to be bailed out, then what is the case for letting private firms engage in such kinds of activities in the first place?

Is the reality of the modern, transactions-oriented model of financial capitalism indeed that large private firms make enormous private profits when the going is good and get bailed out and taken into temporary public ownership when the going gets bad, with the tax payer taking the risk and the losses?

If so, then why not keep these activities in permanent public ownership? There is a long-standing argument that there is no real case for private ownership of deposit-taking banking institutions, because these cannot exist safely without a deposit guarantee and/or lender of last resort facilities, that are ultimately underwritten by the taxpayer.

Even where private deposit insurance exists, this is only sufficient to handle bank runs on a subset of the banks in the system. Private banks collectively cannot self-insure against a generalised run on the banks. Once the state underwrites the deposits or makes alternative funding available as lender of last resort, deposit-based banking is a license to print money. [Emphasis added.]

All money today except coins originates as a debt to a bank, and debts are just legal agreements to pay in the future. Legal agreements are properly overseen by the judiciary, a branch of government. Perhaps it is time to make banking a fourth branch of government.

That probably won’t happen any time soon, but in the meantime we can try a few experiments in public banking, beginning with the Bank of America, predicted to be the first of the behemoths to be put into receivership.

Leo Panitch, Canada Research Chair in comparative political economy at York University, wrote in The Globe and Mail in December 2009 that “there has long been a strong case for turning the banks into a public utility, given that they can’t exist in complex modern society without states guaranteeing their deposits and central banks constantly acting as lenders of last resort.”

Nationalization Is Looking Better

David Sanger wrote in The New York Times in January 2009:

Mr. Obama’s advisers say they are acutely aware that if the government is perceived as running the banks, the administration would come under enormous political pressure to halt foreclosures or lend money to ailing projects in cities or states with powerful constituencies, which could imperil the effort to steer the banks away from the cliff. “The nightmare scenarios are endless,” one of the administration’s senior officials said.

Today, that scenario is looking less like a nightmare and more like relief. Calls have been made for a national moratorium on foreclosures. If the banks were nationalized, the government could move to restructure the mortgages, perhaps at subsidized rates.

Lending money to ailing projects in cities and states is also sounding rather promising. Despite massive bailouts by the taxpayers and the Fed, the banks are still not lending to local governments, local businesses or consumers. Matthew Rothschild, writing in March 2009, quoted Robert Pollin, professor of economics at the University of Massachusetts at Amherst:

“Relative to a year ago, lending in the US economy is down an astonishing 90 percent. The government needs to take over the banks now, and force them to start lending.”

When the private sector fails, the public sector needs to step in. Under public ownership, wrote Nobel Prize winner Joseph Stiglitz in January 2009, “the incentives of the banks can be aligned better with those of the country. And it is in the national interest that prudent lending be restarted.”

For a model, Congress can look to the nation’s only state-owned bank, the Bank of North Dakota (BND). The 91-year-old BND has served its community well. As of March 2010, North Dakota was the only state boasting a budget surplus; it had the lowest default rate in the country; it had the lowest unemployment rate in the country; and it had received a 2009 dividend from the BND of $58.1 million, quite a large sum for a sparsely populated state.

For our newly-elected Congress, the only alternative may be to start budgeting for TARP II.

Fracking for Natural Gas: EPA Hearings Bring Protests

t r u t h o u t | Fracking for Natural Gas: EPA Hearings Bring Protests.

by: Mark Clayton  |  The Christian Science Monitor | Report

Fracking, or hydraulic fracturing, is a controversial process for extracting natural gas from shale. Critics of fracking question the environmental and health effects of pumping millions of gallons of water and tons of chemicals underground.

Public hearings over hydraulic fracturing or “fracking” brought hundreds of protesters to Binghamton, N.Y., Monday, carrying signs and shouting slogans either opposing or favoring expansion of the controversial process for extracting natural gas from shale.

The Environmental Protection Agency’s public hearings are part of a broad investigation, begun in March, into the human health and environmental effects of fracking – focusing on air pollution and water pollution. The chemical effects that fracking fluids may have on water supplies after being injected into the ground to extract gas are a special focus.

But a new study conducted for the American Public Power Association (APPA) suggests that if wider use of natural gas in electric power production comes to pass nationwide – as many analysts now expect – such controversies may be just beginning.

“Even if fracturing continues, serving a much larger market will require even more drilling that is already at record levels,” the APPA study found.

In Pennsylvania, for instance, at least 1,600 fracking wells have been drilled with about 4,000 permits granted, the Associated Press reported Monday. But the new study suggests that as the flood of gas drives prices down, electric power generators will increasingly see it as a good alternative to burning coal. That, in turn, would mean vastly expanded fracking.


Lying beneath New York, Pennsylvania, and other parts of the Northeast, the rich Marcellus shale beds could supply the region with trillions of cubic feet of natural gas for decades, according to some estimates. But opponents say the process that involves pumping tons of toxic chemicals into the ground under pressure can pollute groundwater and greatly increase air pollution.

Thanks to expanded use of fracking, however, US natural-gas reserves have soared. In each of the last 15 or so years, additions to proven reserves have more than covered annual production, the APPA report says. Proven reserves now total 245 trillion cubic feet – enough to meet 2009-level demand for more than 10 years, it says.
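
As a rough sanity check on that “more than 10 years” figure, assume 2009-level US consumption of roughly 23 trillion cubic feet a year – an approximate number not given in the report itself:

```python
# Back-of-envelope check of the "more than 10 years" claim.
proven_reserves_tcf = 245   # trillion cubic feet, per the APPA report
annual_demand_tcf = 23      # assumed 2009-level US consumption (approximate)

years_of_supply = proven_reserves_tcf / annual_demand_tcf
print(f"Reserves cover about {years_of_supply:.1f} years at 2009-level demand")
# -> roughly 10.7 years, consistent with the report's wording
```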

The APPA study also recounts environmental impacts found by other groups. It said a recent study by the New York City Department of Environmental Protection, for instance, found that fracturing a single well could involve “pumping three to eight million gallons of water and 80 to 300 tons of chemicals” into it at high pressure over several days.

“Half or so of the injected solution returns back up the well,” the New York City study said. “The water that flows back up the well also tends to contain hydrocarbons and dissolved solids such that it must be disposed of via underground injection or industrial treatment.” Conventional wastewater treatment was “not feasible,” it said.

With injection water typically trucked in, the NYC study estimated “1,000 or more truck trips per well to haul in water and equipment and then haul out wastewater.” But that’s not the end of it, since as production falls off, the fracturing process is repeated on a well. Some shale gas wells need fracking every five years over a period of 20 to 40 years. The New York study calls fracturing “an ongoing process rather than something that occurs only when the wells are originally drilled.”
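
A back-of-envelope calculation suggests why the trip count is so high. Assuming a typical water tanker carries on the order of 5,000 gallons – an assumed capacity, not a figure from the study – hauling in the injection water alone accounts for most of those trips:

```python
# Rough check of the "1,000 or more truck trips per well" estimate.
water_per_well_gal = (3_000_000, 8_000_000)  # range cited by the NYC DEP study
tanker_capacity_gal = 5_000                  # assumed per-truck capacity

low = water_per_well_gal[0] / tanker_capacity_gal
high = water_per_well_gal[1] / tanker_capacity_gal
print(f"Water hauling alone: roughly {low:,.0f} to {high:,.0f} truck trips")
# -> about 600 to 1,600 trips, before equipment deliveries and wastewater removal
```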

The EPA hearings are likely to increase debate as more information about the chemistry of the fracking process emerges, environmentalists and energy analysts say.

“They have never done a hydraulic fracking study as comprehensive as the one now beginning,” says Scott Anderson, a senior policy adviser for the Environmental Defense Fund. “The results of this study will inform future congressional decisions on whether to continue to exempt hydraulic fracturing from the federal Safe Drinking Water Act.”

Little is known about the chemical composition of fracking fluids – and the state of New York has held up permitting until more information emerges. While the natural-gas industry says many of the chemicals in such fluids can be found under a kitchen sink, the industry has long resisted identifying those chemicals. That could be changing soon, too.

That’s because the EPA hearings could cause Congress to require that fracking fluid chemicals be identified, and could remove fracking’s exemption from regulation under the Safe Drinking Water Act, according to Kevin Book, an energy analyst with energy market research firm ClearView Energy Partners.

“On August 31, EPA quietly released interim results of its ongoing review of possible drinking water contamination at several sites near Pavillion, Wyoming,” he writes in a new analysis. “Although EPA’s latest data did not conclusively link contamination to fracking, EPA’s guidance that residents should avoid drinking their water may offer Congressional fracking opponents a valuable sound bite to use when calling for mandatory disclosure rules.”

While the Energy Policy Act of 2005 prevents the EPA from explicitly regulating fracking wells under the Safe Drinking Water Act, “the Agency already possesses considerable regulatory authority under other existing laws,” writes Mr. Book. As a result, he contends, even without Congressional action, the EPA could, under other federal laws, “investigate other reports of fracking-linked contamination.”
