Category Archives: Communication

Download Full Movie Power Rangers (2017) English Subtitle

Power Rangers (2017) Full Movie Online Watch Free, English Subtitles Full HD, Free Movies Streaming, Free Latest Films.


Quality: HD
Title: Power Rangers
Director: Dean Israelite
Release: March 23, 2017
Language: English
Runtime: 124 min
Genre: Action, Adventure, Science Fiction

Synopsis :
‘Power Rangers’ is an action, adventure, and science-fiction film released on March 23, 2017. It was directed by Dean Israelite and stars Dacre Montgomery. The movie tells the story of a group of high-school kids who are infused with unique superpowers and harness their abilities to save the world.

Watch Full Movie Power Rangers (2017)

So do not miss your chance to watch Power Rangers online for free with your family. In only two steps you can watch or download this movie in high-quality video. Come and join us: there are many more movies you can watch in free streaming.


Space Junk Collision Could Set Off Catastrophic Chain Reaction, Disable Earth Communications

Pentagon: A Space Junk Collision Could Set Off Catastrophic Chain Reaction, Disable Earth Communications | Popular Science.

 

Above, a NASA-generated chart in which the dots represent known pieces of large orbital debris. (Image: NASA)

Every now and again someone raises a stern warning about the amount of space junk orbiting Earth. Those warnings are usually met with general indifference, as very few of us own satellites or travel regularly to low Earth orbit. But the DoD’s assessment of the space junk problem finds that perhaps we should be paying attention: space junk has reached a critical tipping point that could result in a cataclysmic chain reaction that brings everyday life on Earth to a grinding halt.

Our reliance on satellites goes beyond the obvious. We depend on them for television signals, the evening weather report, and to find our houses on Google Earth when we’re bored at work. But behind the scenes, they also inform our warfighting capabilities, keep track of the global shipping networks that keep our economies humming, and help us get to the places we need to get to via GPS.

According to the DoD’s interim Space Posture Review, that could all come crashing down. Literally. Our satellites are sorely outnumbered by space debris, to the tune of 370,000 pieces of junk up there versus 1,100 satellites. That junk ranges from nuts and bolts lost during spacewalks to pieces of older satellites to whole satellites that no longer function, and it’s all whipping around the Earth at a rate of about 4.8 miles per second.

The fear is that with so much junk already up there, a collision is numerically probable at some point. Two large pieces of junk colliding could theoretically send thousands more potential satellite killers into orbit, and those could in turn collide with other pieces of junk or with satellites, unleashing another swarm of debris. You get the idea.

To give an idea of how quickly a chain reaction could get out of hand, consider this: in February of last year, a defunct Russian satellite collided with a communications satellite, turning two orbiting craft into 1,500 pieces of junk. The Chinese missile test that obliterated a satellite in 2007 spawned 100 times more than that, scattering 150,000 pieces of debris.
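
To see how the arithmetic of a cascade works, here is a deliberately crude toy model in Python. The fragment count loosely echoes the 1,500-piece collision above, but the collision rate and every other parameter are invented for illustration; this is a sketch of the dynamic, not the DoD's model.

# Toy debris-cascade model: every collision turns two objects into many
# fragments, and more objects mean more chances of further collisions.
# All parameters are illustrative assumptions, not DoD or NASA figures.

FRAGMENTS_PER_COLLISION = 1500   # loosely echoes the 2009 satellite collision
PAIR_COLLISION_RATE = 1e-10      # assumed annual collision chance per object pair

def simulate(objects, years):
    """Return the expected object count per year as collisions breed fragments."""
    history = [objects]
    for _ in range(years):
        pairs = objects * (objects - 1) / 2
        collisions = pairs * PAIR_COLLISION_RATE
        objects += collisions * (FRAGMENTS_PER_COLLISION - 2)  # 2 lost, ~1,500 gained
        history.append(objects)
    return history

for year, count in enumerate(simulate(370_000.0, 30)):
    if year % 5 == 0:
        print(f"year {year:2d}: ~{count:,.0f} objects")

Because the number of colliding pairs grows with the square of the object count, the curve bends upward: each collision makes the next one more likely, which is the tipping-point dynamic the Pentagon is describing.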

If a chain reaction got out of control up there, it could very quickly sever our communications and our GPS system (upon which the U.S. military heavily relies), cripple the global economy (not to mention destroy the $250 billion space services industry), and render whole orbits unusable, potentially turning some places on Earth into technological dead zones.

'Disturbing' levels of cyber-raids

Top GCHQ spook warns of ‘disturbing’ levels of cyber-raids • The Register.

With a crunch conference on government cyber-security starting tomorrow, the director of government spook den GCHQ, Iain Lobban, said Britain had faced a “disturbing” number of digital attacks in recent months.

Attackers had targeted citizens’ data, credit card numbers and industry secrets, Lobban said.

“I can attest to attempts to steal British ideas and designs – in the IT, technology, defence, engineering and energy sectors as well as other industries – to gain commercial advantage or to profit from secret knowledge of contractual arrangements,” the eavesdropping boss added in his article for The Times.

According to Foreign Secretary William Hague there were more than 600 “malicious” attacks on government systems every day, while criminals could snap up Brits’ stolen card details online for just 70 pence a throw.

The statement was paired with the announcement of a £650m investment in cyber-security over the next four years, with both Hague and Lobban arguing that industry and government need to work together to pull off a safe, resilient system.

Countries that cannot protect their banking systems and intellectual property will be at a serious disadvantage in the future, Hague told The Times.

The government could have its work cut out, though: security software maker Symantec suggested today that businesses are cutting back on cyber-security and are less aware of, and engaged with, the big threats than they were last year. Symantec looked specifically at industries integral to national security.

It found that only 82 percent of them participated in government protection programmes, down 18 points since last year.

Symantec reckoned that reduced manpower meant companies had less time to focus on big structural threats.

“The findings of this survey are somewhat alarming, given recent attacks like Nitro and Duqu that have targeted critical infrastructure providers,” said Dean Turner, a director at Symantec.

“Having said that, limitations on manpower and resources as mentioned by respondents help explain why critical infrastructure providers have had to prioritise and focus their efforts on more day-to-day cyber threats.” ®

Quake-prone Japanese Area Runs Disaster System on Force.com

Quake-prone Japanese Area Runs Disaster System on Force.com | PCWorld.

A coastal region of Japan due for a major earthquake and possible tsunamis has implemented a cloud-based disaster management system run by Salesforce.com.

Shizuoka Prefecture, on Japan’s eastern coast in the central region of the country, lies curled around an undersea trough formed by the junction of two tectonic plates. It has been rocked by repeated large temblors in past centuries, collectively called “Tokai earthquakes,” and the central government has warned that with underground stresses high another is imminent.

The local prefectural government began to build a new disaster management system last year, the initial version of which went live in July. It is based on Salesforce.com’s platform-as-a-service offering, Force.com, which hosts hundreds of thousands of applications.

“It would have cost a lot more to run our own servers and network, and if a disaster happened managing something like that would be very difficult, especially if the prefecture office was damaged,” said Keisuke Uchiyama, a Shizuoka official who works with the system.

Japanese prefectures are the rough equivalent of states.

The system is currently hosted on Salesforce.com’s servers in the U.S. and goes live when an official disaster warning is issued by the government. It links up information about key infrastructure such as roads, heliports and evacuation centers.

Salesforce.com says it combines GIS (geographic information system) data with XML sent from Japan’s Meteorological Agency. Users can also send email updates from the field using their mobile phones, with GPS coordinates and pictures attached.
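
That description implies a fairly simple data pipeline: parse the agency's alert XML, then fold in geotagged reports from the field. The sketch below is hypothetical; the XML tags and report format are invented for illustration, and JMA's real feeds use their own, much richer schemas.

import xml.etree.ElementTree as ET

# Invented alert snippet standing in for a real JMA alert document.
ALERT_XML = """
<alert>
  <kind>Tsunami Warning</kind>
  <area>Shizuoka</area>
  <issued>2011-07-15T09:30:00+09:00</issued>
</alert>
"""

def parse_alert(xml_text):
    """Extract the fields a disaster dashboard would map against GIS layers."""
    root = ET.fromstring(xml_text)
    return {
        "kind": root.findtext("kind"),
        "area": root.findtext("area"),
        "issued": root.findtext("issued"),
        "reports": [],
    }

def attach_field_report(alert, lat, lon, note):
    """Merge a geotagged mobile update (GPS coordinates plus text) into the record."""
    alert["reports"].append({"lat": lat, "lon": lon, "note": note})

record = parse_alert(ALERT_XML)
attach_field_report(record, 34.98, 138.38, "Evacuation center at capacity")
print(record)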

Uchiyama said the original plan was to allow open access, but budget cuts forced that to be postponed and it is now available only to government workers and disaster-related groups. The system was implemented with a budget of about 200 million yen (US$2.6 million) over its first two years, down from an original allotment of about 500 million yen over three years.

He said it was used to keep track of the situation last week when a powerful typhoon swept through central Japan.

The obvious downside to a hosted system is that key infrastructure is often destroyed during natural disasters. After the powerful earthquake and tsunami that hit Japan’s northeastern coast in March, some seaside towns were completely devastated and went weeks without basics like power or mobile phone service. Local communities turned to word-of-mouth and public bulletin boards to spread information and search for survivors.

“If the network gets cut, it’s over,” said Uchiyama.

Two Plus Two Equals Five – A second look at disaster death tolls

Two Plus Two Equals Five – By Philip Walker | Foreign Policy, August 17, 2011.

The death toll and level of destruction immediately following a disaster are always difficult to determine, but over time a consensus usually emerges between governments and aid organizations. But, as David Rieff points out, “Sadly, over the course of the past few decades, exaggeration seems to have become the rule in the world of humanitarian relief.… These days, only the most extreme, most apocalyptic situations are likely to move donors in the rich world.” And with donor fatigue an ever-present possibility, it is no surprise then that later studies that contradict the original, inflated estimates are criticized — or worse, ignored — for seemingly undermining the humanitarian cause.

Arriving at these estimates is no easy endeavor, as government agencies and relief organizations are rarely able to survey entire populations. Instead, emergency management experts rely on sound statistical and epidemiological techniques. But debating and questioning the numbers behind man-made and natural disasters is not just an academic exercise: the implications are huge. For example, relief agencies were restricted from operating in Darfur, partly because of Sudan’s anger that the U.S.-based Save Darfur Coalition had estimated that 400,000 people were killed in the region. Moreover, the U.N. Security Council used the International Rescue Committee’s death toll of 5.4 million in the Congo to put together its largest peacekeeping operation ever. Similarly, government aid pledges increase or decrease depending upon the extent of the disaster. Numbers do matter, and much depends upon their validity and credibility. What follows is a look at some recent disasters where the numbers just don’t match up.

Above, a view of some of the destruction in Banda Aceh, Indonesia, a week after the devastating earthquake and tsunami struck on Dec. 26, 2004. According to the U.S. Geological Survey, 227,898 people died and about 1.7 million people were displaced in 14 countries in Southeast Asia, South Asia, and East Africa. Indonesia, the country hit hardest by the disaster, initially claimed that 220,000 people had died or gone missing but ended up revising that number down to around 170,000.

THE DEADLIEST WAR IN THE WORLD

Discrepancy: 5.4 million vs. 900,000 dead in the Democratic Republic of the Congo between 1998 and 2008

The Democratic Republic of the Congo (DRC) has seen more than its fair share of conflict over the past 15 years. The war in the DRC officially broke out in 1998 and although the conflict technically ended in 2003 when the transitional government took over, fighting has continued in many of the country’s provinces. The conflict has been dubbed “Africa’s World War,” both due to the magnitude of the devastation and the number of African countries that have, at different times, been involved in the conflict. According to a widely cited 2008 report by the New York-based International Rescue Committee (IRC), “an estimated 5.4 million people have died as a consequence of the war and its lingering effects since 1998,” making it the world’s deadliest crisis since World War II. The organization is one of the largest providers of humanitarian aid in the Congo and is therefore deemed one of the few reliable sources on the conflict.

However, Andrew Mack, director of the Human Security Report Project at Simon Fraser University in Canada, said the IRC study did not employ appropriate scientific methodologies and that in reality far fewer people have died in the Congo. “When we used an alternative measure of the pre-war mortality rate, we found that the estimate from the IRC’s final three surveys dropped from 2.83 million to under 900,000,” Mack argued. (He also argued that international relief agencies — such as the International Rescue Committee — face a potential conflict of interest because they depend on donations that, in turn, are stimulated by their studies of death tolls. Those studies should be done by independent experts, not by relief agencies that depend on donations, he says.)
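
The arithmetic at the heart of the dispute is simple, which is exactly why the baseline assumption dominates the result: excess deaths are estimated as the observed death rate minus an assumed baseline rate, multiplied by population and time. The rates, population, and period below are round, invented numbers chosen only to show the sensitivity; they are not the IRC's actual survey figures.

def excess_deaths(observed_cdr, baseline_cdr, population, months):
    """Crude death rates are in deaths per 1,000 people per month."""
    return (observed_cdr - baseline_cdr) / 1000.0 * population * months

POPULATION = 60e6   # rough DRC population, illustrative
MONTHS = 80         # roughly the survey period, illustrative
OBSERVED = 2.1      # assumed observed deaths per 1,000 per month

# A low regional-average baseline vs. a higher country-specific one.
for baseline in (1.5, 2.0):
    total = excess_deaths(OBSERVED, baseline, POPULATION, MONTHS)
    print(f"baseline {baseline}: ~{total / 1e6:.2f} million excess deaths")

A shift of half a death per 1,000 per month in the assumed pre-war baseline moves the total by millions, which is the crux of the disagreement between Mack and the IRC.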

Above, the body of a young man lying on the central market avenue of Ninzi, about 25 miles north of Bunia, where on June 20, 2003, Lendu militias launched an attack, killing and mutilating at least 22 civilians.

Discrepancy: 400,000 vs. 15,000 women raped in the Democratic Republic of the Congo between 2006 and 2007

A June 2011 study in the American Journal of Public Health found that 400,000 women aged 15-49 were raped in the DRC over a 12-month period in 2006 and 2007. The shockingly high number is equivalent to four women being raped every five minutes. Perhaps even more alarming, the new number is 26 times higher than the 15,000 rapes that the United Nations reported during the same period.

Maria Eriksson Baaz, a Swedish academic from the University of Gothenburg, has called the study into question by arguing that it is based on out-of-date and questionable figures. As a long-time researcher on women’s rights in the DRC, Baaz claims that extrapolations made from these figures cannot be backed up scientifically. In a recent interview with the BBC, she said it was difficult to collect reliable data in the Congo and that women sometimes claim to be victims in order to get free health care. “Women who have been raped can receive free medical care while women who have other conflict-related injuries or other problems related to childbirth have to pay,” she said. “In a country like the DRC, with [its] extreme poverty where most people can simply not afford health care, it’s very natural this happens.”

Above, Suzanne Yalaka breastfeeds her baby Barunsan on Dec. 11, 2003, in Kalundja, South Kivu province. Her son was born after she was raped by ten rebels from neighboring Burundi; her husband and his family subsequently abandoned her.

NORTH KOREAN FAMINE

Discrepancy: 2.4 million vs. 220,000 dead in North Korea between 1995 and 1998

Due to the regime’s secretive nature, reliable statistics on the 1990s famine in North Korea are hard to come by. Yet, surprisingly, on May 15, 2001, at a UNICEF conference in Beijing, Choe Su-hon, one of Pyongyang’s nine deputy foreign ministers at the time, stated that between 1995 and 1998, 220,000 North Koreans died in the famine. Compared with outside estimates, these figures were on the low end — presumably because it was in the regime’s interest to minimize the death toll.

A 1998 report by U.S. congressional staffers, who had visited the country, found that from 1995 to 1998 between 900,000 and 2.4 million people had died as a result of food shortages. It noted that other estimates by exile groups were substantially higher but that these numbers were problematic because they were often based on interactions with refugees from the northeastern province of North Hamgyong, which was disproportionately affected by the famine.

Above, North Koreans rebuilding a dike in Mundok county, South Pyongan province, in September 1997, following an August tidal wave caused by Typhoon Winnie. The rebuilding effort was part of an emergency food-for-work project organized by the World Food Program. According to a former North Korean government official, during the famine — from 1993 to 1999 — life expectancy fell from 73.2 to 66.8 years and infant mortality almost doubled, from 27 to 48 deaths per 1,000 live births.

GENOCIDE IN DARFUR

Discrepancy: 400,000 vs. 60,000 dead in Darfur between 2003 and 2005

In 2006, three years after the conflict in Darfur began, Sudanese President Omar al-Bashir publicly criticized the United Nations for exaggerating the extent of the fighting in Darfur. “The figure of 200,000 dead is false and the number of dead is not even 9,000,” he proclaimed. At the same time, outside groups like the Save Darfur Coalition and various governments, including the United States, were having a difficult time producing concrete numbers as well. Their only consensus was that the real death toll was far higher than the numbers provided by Bashir.

In 2005, a year after U.S. Secretary of State Colin Powell told a U.S. congressional committee that the ethnic violence in Darfur amounted to “genocide,” Deputy Secretary of State Robert Zoellick estimated the death toll at between 60,000 and 160,000. Zoellick was widely criticized for understating the numbers. The World Health Organization estimated that 70,000 people had died over a seven-month period alone. At the same time, researchers for the Coalition for International Justice contended that 396,563 people had died in Darfur. Today, the Sudanese authorities claim that since the conflict began in 2003, 10,000 people have died, while the U.N. estimates that over 300,000 have been killed and another 2.7 million have been displaced.

Above, an armed Sudanese rebel arrives on Sept. 7, 2004, at the abandoned village of Chero Kasi less than an hour after Janjaweed militiamen set it ablaze in the violence-plagued Darfur region.

CYCLONE NARGIS 

Discrepancy: 138,000 vs. unknown death toll in Burma in 2008

Tropical cyclone Nargis made landfall in southern Burma on May 2, 2008, leaving a trail of death and destruction before petering out the next day. It devastated much of the fertile Irrawaddy delta and Yangon, the nation’s main city. Nargis brought about the worst natural disaster in the country’s history — with a death toll that may have exceeded 138,000, according to a study by the Georgia Institute of Technology. But, with a vast number of people still unaccounted for three years later, the death toll might even be higher. The Burmese authorities allegedly stopped counting for fear of political fallout.

It’s more common for countries hit by a devastating disaster to share their plight with the world and plead for a robust relief effort, but in the aftermath of cyclone Nargis the Burmese military regime sought to maintain control over news of the disaster — restricting access to journalists and censoring the release of information and images. Moreover, the United Nations and other relief agencies were initially banned from setting up operations. At the time, with over 700,000 homes blown away, the U.N. and the Red Cross estimated that over 2.5 million people were in desperate need of aid.

Above, school teacher Hlaing Thein stands on the wreckage of a school destroyed by cyclone Nargis in Mawin village in the Irrawaddy delta region on June 9, 2008.

 


EARTHQUAKE IN HAITI

Discrepancy: 318,000 vs. 46,000-85,000 dead in Haiti in 2010

The devastating earthquake of Jan. 12, 2010, killed over 318,000 people and left over 1.5 million people homeless, according to the Haitian government. International relief organizations generally estimate anywhere between 200,000 and 300,000 casualties.

However, a recently leaked report compiled for USAID by a private consulting firm claims that the death toll is likely between 46,000 and 85,000, and that roughly 900,000 people were displaced by the earthquake. The report has not yet been published, but its alleged findings have already been disputed by both Haitian authorities and the United Nations. Even the U.S. State Department, for now, is reluctant to endorse it, saying “internal inconsistencies” in some of the statistical analysis are currently being investigated prior to publication.

PAKISTAN FLOODS

Discrepancy: Large numbers affected vs. small death toll in Pakistan in 2010

Above, a young girl washes the mud from her toy at a water pump in the middle of collapsed buildings at a refugee camp near Nowshera in northwest Pakistan on Sept. 23, 2010.

Figures provided by the United Nations and Pakistan’s government estimate that 20 million people were affected by the 2010 summer floods — the worst in the country’s history. Almost 2,000 people died, 3,000 were injured, 2 million homes were damaged or destroyed, and over 12 million people were left in need of emergency food aid, according to Pakistan’s National and Provincial Disaster Management Authority. Flood waters wiped out entire villages and vast stretches of farmland, affecting an area roughly the size of England. After surveying 15 key sectors across the country, the World Bank and Asian Development Bank announced in October 2010 an estimated $9.7 billion in damage — more than twice the toll of Pakistan’s 2005 earthquake, which killed approximately 86,000 people. U.N. Secretary-General Ban Ki-moon characterized the destruction as more dire than that caused by the 2004 Indian Ocean tsunami and the Pakistani earthquake combined. “In the past I have visited the scenes of many natural disasters around the world, but nothing like this,” he stated.

David Rieff warns that, “By continually upping the rhetorical ante, relief agencies, whatever their intentions, are sowing the seeds of future cynicism, raising the bar of compassion to the point where any disaster in which the death toll cannot be counted in the hundreds of thousands, that cannot be described as the worst since World War II or as being of biblical proportions, is almost certainly condemned to seem not all that bad by comparison.” This was the case in Pakistan where the number affected by the flooding was gigantic but the death toll was relatively low — especially compared to the Haiti earthquake a few months earlier. As a result, the United Nations and other aid organizations were unable to raise large sums for the relief effort compared to previous disasters. “Right now, our level of needs in terms of funding is huge compared to what we’ve been receiving, even though this is the largest, by far, humanitarian crisis we’ve seen in decades,” said Louis-George Arsenault, director of emergency operations for UNICEF, in an interview with the BBC in August 2010.

As David Meltzer, senior vice president of international services for the American Red Cross, discerningly put it, “Fortunately, the death toll [in Pakistan] is low compared to the tsunami and the quake in Haiti. … The irony is, our assistance is focused on the living — and the number of those in need is far greater than in Haiti.”

 

U.S. Defense Lawyers Are Crippling the Nation's Ability to Wage Cyberwar

Cyberwar, Lawyers, and the U.S.: Denial of Service – By Stewart Baker | Foreign Policy.

Lawyers don’t win wars. But can they lose one?

We’re likely to find out, and soon. Lawyers across the U.S. government have raised so many show-stopping legal questions about cyberwar that they’ve left the military unable to fight or even plan for a war in cyberspace. But the only thing they’re likely to accomplish is to make Americans less safe.

No one seriously denies that cyberwar is coming. Russia pioneered cyberattacks in its conflicts with Georgia and Estonia, and cyberweapons went mainstream when the developers of Stuxnet sabotaged Iran’s Natanz uranium-enrichment plant, setting back the Islamic Republic’s nuclear weapons program more effectively than a 500-pound bomb ever could. In war, weapons that work get used again.

Unfortunately, it turns out that cyberweapons may work best against civilians. The necessities of modern life — pipelines, power grids, refineries, sewer and water lines — all run on the same industrial control systems that Stuxnet subverted so successfully. These systems may be even easier to sabotage than the notoriously porous computer networks that support our financial and telecommunications infrastructure.

And the consequences of successful sabotage would be devastating. The body charged with ensuring the resilience of power supplies in North America admitted last year that a coordinated cyberattack on the continent’s power system “could result in long-term (irreparable) damage to key system components” and could “cause large population centers to lose power for extended periods.” Translated from that gray prose, this means that foreign militaries could reduce many U.S. cities to the state of post-Katrina New Orleans — and leave them that way for months.

Can the United States keep foreign militaries out of its networks? Not today. Even America’s premier national security agencies have struggled to respond to this new threat. Very sophisticated network defenders with vital secrets to protect have failed to keep attackers out. RSA is a security company that makes online credentials used widely by the Defense Department and defense contractors. Hackers from China so badly compromised RSA’s system that the company was forced to offer all its customers a new set of credentials. Imagine the impact on Ford’s reputation if it had to recall and replace every Ford that was still on the road; that’s what RSA is experiencing now.

HBGary, another well-respected security firm, suffered an attack on its system that put thousands of corporate emails in the public domain, some so embarrassing that the CEO lost his job. And Russian intelligence was able to extract large amounts of information from classified U.S. networks — which are not supposed to touch the Internet — simply by infecting the thumb drives that soldiers were using to move data from one system to the next. Joel Brenner, former head of counterintelligence for the Office of the Director of National Intelligence, estimates in his new book, America the Vulnerable, that billions of dollars in research and design work have been stolen electronically from the Defense Department and its contractors.

In short, even the best security experts in and out of government cannot protect their own most precious secrets from network attacks. But the attackers need not stop at stealing secrets. Once they’re in, they can just as easily sabotage the network to cause the “irreparable” damage that electric-grid guardians fear.

No agency has developed good defenses against such attacks. Unless the United States produces new technologies and new strategies to counter these threats, the hackers will get through. So far, though, what the United States has mostly produced is an outpouring of new law-review articles, new legal opinions, and, remarkably, new legal restrictions.

Across the federal government, lawyers are tying themselves in knots of legalese. Military lawyers are trying to articulate when a cyberattack can be classed as an armed attack that permits the use of force in response. State Department and National Security Council lawyers are implementing an international cyberwar strategy that relies on international law “norms” to restrict cyberwar. CIA lawyers are invoking the strict laws that govern covert action to prevent the Pentagon from launching cyberattacks.

Justice Department lawyers are apparently questioning whether the military violates the law of war if it does what every cybercriminal has learned to do — cover its tracks by routing attacks through computers located in other countries. And the Air Force recently surrendered to its own lawyers, allowing them to order that all cyberweapons be reviewed for “legality under [the law of armed conflict], domestic law and international law” before cyberwar capabilities are even acquired.

The result is predictable, and depressing. Top Defense Department officials recently adopted a cyberwar strategy that simply omitted any plan for conducting offensive operations, even as Marine Gen. James Cartwright, then vice chairman of the Joint Chiefs of Staff, complained publicly that a strategy dominated by defense would fail: “If it’s OK to attack me and I’m not going to do anything other than improve my defenses every time you attack me, it’s very difficult to come up with a deterrent strategy.”

Today, just a few months later, Cartwright is gone, but the lawyers endure. And apparently the other half of the U.S. cyberwar strategy will just have to wait until the lawyers can agree on what kind of offensive operations the military is allowed to mount.

* * *

We’ve been in this spot before. In the first half of the 20th century, the new technology of air power transformed war at least as dramatically as information technology has in the last quarter-century. Then, as now, our leaders tried to use the laws of war to stave off the worst civilian harms that this new form of war made possible.

Tried and failed.

By the 1930s, everyone saw that aerial bombing would have the capacity to reduce cities to rubble in the next war. Just a few years earlier, the hellish slaughter in the trenches of World War I had destroyed the Victorian world; now air power promised to bring the same carnage to soldiers’ homes, wives, and children.

In Britain, some leaders expressed hardheaded realism about this grim possibility. Former Prime Minister Stanley Baldwin, summing up his country’s strategic position in 1932, showed a candor no recent American leader has dared to match. “There is no power on Earth that can protect [British citizens] from being bombed,” he said. “The bomber will always get through…. The only defense is in offense, which means that you have got to kill more women and children more quickly than the enemy if you want to save yourselves.”

The Americans, however, still hoped to head off the nightmare. Their tool of choice was international law. (Some things never change.) When war broke out in Europe on Sept. 1, 1939, President Franklin D. Roosevelt sent a cable to all the combatants seeking express limits on the use of air power. Citing the potential horrors of aerial bombardment, he called on all combatants to publicly affirm that their armed forces “shall in no event, and under no circumstances, undertake the bombardment from the air of civilian populations or of unfortified cities.”

Roosevelt had a pretty good legal case. The 1899 Hague conventions on the laws of war, adopted as the Wright brothers were tinkering their way toward Kitty Hawk, declared that in bombardments, “all necessary steps should be taken to spare as far as possible edifices devoted to religion, art, science, and charity, hospitals, and places where the sick and wounded are collected, provided they are not used at the same time for military purposes.” The League of Nations had also declared that in air war, “the intentional bombing of civilian populations is illegal.”

But FDR didn’t rely just on law. He asked for a public pledge that would bind all sides in the new war — and, remarkably, he got it. The horror at aerial bombardment of civilians ran so deep in that era that Britain, France, Germany, and Poland all agreed to FDR’s bargain, before nightfall on Sept. 1, 1939.

Nearly a year later, with the Battle of Britain raging in the air, the Luftwaffe was still threatening to discipline any pilot who bombed civilian targets. The deal had held. FDR’s accomplishment began to look like a great victory for the international law of war — exactly what the lawyers and diplomats now dealing with cyberwar hope to achieve.

But that’s not how this story ends.

On the night of Aug. 24, 1940, a Luftwaffe air group made a fateful navigational error. Aiming for oil terminals along the Thames River, they miscalculated, instead dropping their bombs in the civilian heart of London.

It was a mistake. But that’s not how British Prime Minister Winston Churchill saw it. He insisted on immediate retaliation. The next night, British bombers hit (arguably military) targets in Berlin for the first time. The military effect was negligible, but the political impact was profound. German Luftwaffe commander Hermann Göring had promised that the Luftwaffe would never allow a successful attack on Berlin. The Nazi regime was humiliated, the German people enraged. Ten days later, Adolf Hitler told a wildly cheering crowd that he had ordered the bombing of London: “Since they attack our cities, we will extirpate theirs.”

The Blitz was on.

In the end, London survived. But the extirpation of enemy cities became a permanent part of both sides’ strategy. No longer an illegal horror to be avoided at all costs, the destruction of enemy cities became deliberate policy. Later in the war, British strategists would launch aerial attacks with the avowed aim of causing “the destruction of German cities, the killing of German workers, and the disruption of civilized life throughout Germany.” So much for the Hague conventions, the League of Nations resolution, and even the explicit pledges given to Roosevelt. All these “norms” for the use of air power were swept away by the logic of the technology and the predictable psychology of war.

* * *

American lawyers’ attempts to limit the scope of cyberwar are just as certain to fail as FDR’s limits on air war — and perhaps more so.

It’s true that half a century of limited war has taught U.S. soldiers to operate under strict restraints, in part because winning hearts and minds has been a higher priority than destroying the enemy’s infrastructure. But it’s unwise to put too much faith in the notion that this change is permanent. Those wars were limited because the stakes were limited, at least for the United States. Observing limits had a cost, but one the country could afford. In a way, that was true for the Luftwaffe, too, at least at the start. They were on offense, and winning, after all. But when the British struck Berlin, the cost was suddenly too high. Germans didn’t want law and diplomatic restraint; they wanted retribution — an eye for an eye. When cyberwar comes to America and citizens start to die for lack of power, gas, and money, it’s likely that they’ll want the same.

More likely, really, because Roosevelt’s bargain was far stronger than any legal restraints we’re likely to see on cyberwar. Roosevelt could count on a shared European horror at the aerial destruction of cities. The modern world has no such understanding — indeed, no such shared horror — regarding cyberwar. Quite the contrary. For some of America’s potential adversaries, the idea that both sides in a conflict could lose their networked infrastructure holds no horror. For some, a conflict that reduces both countries to eating grass sounds like a contest they might be able to win.

What’s more, cheating is easy and strategically profitable. America’s compliance will be enforced by all those lawyers. Its adversaries’ compliance will be enforced by, well, by no one. It will be difficult, if not impossible, to find a return address on their cyberattacks. They can ignore the rules and say — hell, they are saying — “We’re not carrying out cyberattacks. We’re victims too. Maybe you’re the attacker. Or maybe it’s Anonymous. Where’s your proof?”

Even if all sides were genuinely committed to limiting cyberwar, as they were in 1939, history shows that it only takes a single error to break the legal limits forever. And error is inevitable. Bombs dropped by desperate pilots under fire go astray — and so do cyberweapons. Stuxnet infected thousands of networks as it searched blindly for Iran’s uranium-enrichment centrifuges. The infections lasted far longer than intended. Should we expect fewer errors from code drafted in the heat of battle and flung at hazard toward the enemy?

Of course not. But the lesson of all this for the lawyers and the diplomats is stark: Their effort to impose limits on cyberwar is almost certainly doomed.

No one can welcome this conclusion, at least not in the United States. The country has advantages in traditional war that it lacks in cyberwar. Americans are not used to the idea that launching even small wars on distant continents may cause death and suffering at home. That is what drives the lawyers — they hope to maintain the old world. But they’re being driven down a dead end.

If America wants to defend against the horrors of cyberwar, it needs first to face them, with the candor of a Stanley Baldwin. Then the country needs to charge its military strategists, not its lawyers, with constructing a cyberwar strategy for the world we live in, not the world we’d like to live in.

That strategy needs both an offense and a defense. The offense must be powerful enough to deter every adversary with something to lose in cyberspace, so it must include a way to identify attackers with certainty. The defense, too, must be realistic, making successful cyberattacks more difficult and less effective because resilience and redundancy have been built into U.S. infrastructure.

Once the United States has a strategy for winning a cyberwar, it can ask the lawyers for their thoughts. But it can’t be done the other way around.

In 1941, the British sent their most modern battleship, the Prince of Wales, to Southeast Asia to deter a Japanese attack on Singapore. For 150 years, having the largest and most modern navy was all that was needed to project British power around the globe. Like the American lawyers who now oversee defense and intelligence, British admirals preferred to believe that the world had not changed. It took Japanese bombers 10 minutes to put an end to their fantasy, to the Prince of Wales, and to hundreds of brave sailors’ lives.

We should not wait for our own Prince of Wales moment in cyberspace.

Post-9/11 U.S. intelligence reforms take root but problems remain

Post-9/11 U.S. intelligence reforms take root, problems remain | Reuters.

(Reuters) – U.S. intelligence agencies will forever be scarred by their failure to connect the dots and detect the September 11 plot, but a decade later efforts to break down barriers to information-sharing are taking root.

Changing a culture of “need-to-know” to “need-to-share” does not come easily in spy circles. Some officials say they worry, a decade later, about a future attack in which it turns out that U.S. spy agencies had clues in their vast vaults of data but did not put them together, or even know they existed.

Yet significant changes, both big and small, have broken down barriers between agencies, smoothed information-sharing and improved coordination, U.S. intelligence experts say.

From issuing a blue badge to everyone working in the sprawling intelligence community to symbolize a common identity, to larger moves of mixing employees from different agencies, the goal is singular — to prevent another attack.

“We’re much further ahead,” David Shedd, Defense Intelligence Agency deputy director, said of the ability to connect the dots compared with 10 years ago. Still, signs of a plot to attack the United States could be missed again.

“My worst fear, and I suspect probably one that would come true, is that in any future would-be or actual attack, God forbid, we will be able to find the dots again somewhere because of simply how much data is collected,” Shedd said.

The political response to the failure to stop the attack was the 2002 creation of the Department of Homeland Security, pulling together 22 agencies to form the third largest U.S. Cabinet department behind the Pentagon and Veterans Affairs.

That was followed by the creation in late 2004 of the Director of National Intelligence to oversee all the spy agencies, as recommended by the bipartisan 9/11 commission.

Previously, the CIA director held a dual role of also overseeing the multitude of intelligence agencies. But in the aftermath of the 2001 attacks, policymakers decided that was too big a job for one person to do effectively.

‘THERE ARE PROBLEMS’

Critics argued then and now that the reforms were the government’s usual response to crises — create more bureaucracy. But others see much-needed change.

“It has been a tremendous improvement,” said Lee Hamilton, who was the 9/11 commission vice chair. “It’s not seamless, there are problems, and we’ve still got a ways to go.”

The 2001 attacks involving airliners hijacked by al Qaeda operatives killed nearly 3,000 people in New York, Pennsylvania and the Pentagon. Various U.S. intelligence and law enforcement agencies had come across bits of information suggesting an impending attack but failed to put the pieces together.

The CIA had information about three of the 19 hijackers at least 20 months before the attacks; the National Security Agency had information linking one of the hijackers with al Qaeda leader Osama bin Laden’s network; the CIA knew one hijacker had entered the United States but did not tell the FBI; and an FBI agent warned of suspicious Middle Eastern men taking flying lessons.

Have the reforms made America safer? Officials say yes, and point to the U.S. operation that killed bin Laden in Pakistan in May that demanded coordination among intelligence agencies and the military. But there is an inevitable caveat: no one can guarantee there will never be another attack on U.S. soil.

On Christmas Day 2009, a Nigerian man linked to an al Qaeda off-shoot tried unsuccessfully to light explosives sewn into his underwear on a flight to Detroit from Amsterdam. It turned out U.S. authorities had pockets of information about him.

President Barack Obama used a familiar September 11 phrase to describe the 2009 incident as “a failure to connect the dots of intelligence that existed across our intelligence community.”

Roger Cressey, a former White House National Security Council counterterrorism official, resurrected another September 11 phrase: “It was a failure of imagination.”

The intelligence community had not seen al Qaeda in the Arabian Peninsula, a Yemen-based al Qaeda off-shoot, as capable of striking the U.S. homeland. If the “underwear bomber” threat had originated in Pakistan “they would have gone to battle stations immediately,” Cressey said.

Some proposed changes in how authorities would respond to another successful attack still are pending. For example, creation of a common communication system for police, firefighters and other emergency personnel remains tangled up in political wrangling in Congress over how to implement it.

“This is a no-brainer,” Hamilton said. “The first responders at the scene of a disaster ought to be able to talk with one another. They cannot do it today in most jurisdictions.”

Former leaders of the 9/11 commission issued a report card saying nine of its 41 recommendations remain unfinished.

WHERE’S THE POWER?

The Office of the Director of National Intelligence has experienced growing pains as overseer of the 17 spy agencies, churning through four chiefs in six years.

Tensions over turf, confusion about the DNI’s role, and problems herding agencies with very powerful chiefs of their own all came to a head when retired Admiral Dennis Blair, the third DNI, tried to assert authority over CIA station chiefs, who represent the agency in different countries.

“The position of chief of station is one of the crown jewels of the CIA, and they don’t want anyone playing with their crown jewels,” said Mark Lowenthal, a former senior U.S. intelligence official.

After a dust-up with CIA Director Leon Panetta, who now is defense secretary, it was Blair who was sent packing.

“I think the mistake that some have made is to have viewed the DNI and the Director of CIA as an either/or proposition rather than the power of the two working together,” the DIA’s Shedd said in an interview in his office.

“There is a history of where that hasn’t worked so well, I believe it is working much better today,” said Shedd, who has worked at the DNI, CIA and National Security Council.

Intelligence experts say in the current administration, Obama’s top homeland security and counterterrorism adviser John Brennan arguably has more power than any of them because he has the president’s ear. It’s a reminder that, bureaucratic reform or no, personalities count in making national security policy.

The improved sharing of secret data has led to yet another set of problems. The deluge of bits and bytes has subjected intelligence analysts to information overload as they try to sift through it all for relevant pieces.

“Our analysts still are spending way too much time on finding the information rather than on the analysis of the information,” Shedd said. “There is just too much data to go find it all.”

The intelligence community wants a system developed that would automatically process information from multiple agencies and then make the connections for the analysts.
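
In data terms, "making the connections" is a record-linkage problem: records held by different agencies that share an identifier, such as a phone number or a name, should surface as one cluster for an analyst. A minimal sketch of the idea follows, with invented records and agency names; real systems would add entity resolution, fuzzy matching, and access controls.

from collections import defaultdict
from itertools import combinations

# Invented records from notional agencies; shared attribute values link them.
records = [
    {"id": "AGENCY-A-1", "name": "K. Smith", "phone": "555-0101"},
    {"id": "AGENCY-B-7", "alias": "KS", "phone": "555-0101"},
    {"id": "AGENCY-C-3", "name": "K. Smith", "note": "flight lessons"},
]

# Index records by (attribute, value) so shared values become graph edges.
by_value = defaultdict(list)
for rec in records:
    for key, value in rec.items():
        if key != "id":
            by_value[(key, value)].append(rec["id"])

adjacency = defaultdict(set)
for ids in by_value.values():
    for a, b in combinations(ids, 2):
        adjacency[a].add(b)
        adjacency[b].add(a)

# Depth-first search gathers every record reachable through shared attributes.
seen = set()
for rec in records:
    if rec["id"] in seen:
        continue
    stack, cluster = [rec["id"]], []
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            cluster.append(node)
            stack.extend(adjacency[node])
    print("linked cluster:", sorted(cluster))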

But greater inroads into sharing data across agencies does not guarantee that another attack will be averted.

The threat has evolved and officials now are increasingly concerned about a “lone wolf” plot by an individual, not tied to any militant group, that may be more difficult to uncover.

“Those threats will not come to our attention because of an intelligence community intercept,” said John Cohen, a senior Department of Homeland Security counterterrorism official.

“They will come to our attention because of an alert police officer, an alert deputy sheriff, an alert store owner, an alert member of the public sees something that is suspicious and reports it,” Cohen said.

One measure of the success of post-9/11 reforms is that a decade later the United States has not had a similar attack.

“Now that could be luck, that could be skill, we don’t really know,” Hamilton said. “But in all likelihood what we have done, including the establishment of the Department of Homeland Security and the transformation in intelligence and FBI, has certainly been helpful.”

(Editing by Warren Strobel and Will Dunham)

Can we count on cell networks in disasters?

Can we count on cell networks in disasters? | Signal Strength – CNET News.

Andrea Mancuso was working just north of the World Trade Center on September 11, 2001, when two planes struck the towers. Soon after, she was the only person around who seemed to have cell phone service.

“I walked from downtown to Lincoln Center (about 4.5 miles) before I was able to hail a cab with four strangers,” she said. “Everyone was upset, and no one had a cell phone signal except me. I passed my phone around like a hot potato all the way to Harlem. Everyone including the cab driver graciously and tearfully called their families.”

Her story, of course, is not unique. For hours, family members and co-workers frantically tried to contact people they knew in Lower Manhattan.

The network failure could partially be pinned on infrastructure damage. Cell towers were destroyed in the attacks, along with switching equipment used for landline phones. But another cause of the problem was the huge surge in traffic from people trying to find loved ones or letting others know they were OK.

Since 9/11, wireless networks have been tested time and again, and their performance has been shaky. A major blackout in the Northeast in 2003, Hurricane Katrina in 2005, and the Minneapolis bridge collapse in 2007 put strains on local networks. Cellular service in New York City even ground to a halt last month because of a minor earthquake centered several hundred miles away.

Undoubtedly, with each crisis, operators have learned more about what they can do to keep service up and running. But there’s a flip side to that growing expertise: we’re more dependent than ever on cell phones.

In September 2001, there were between 118 million and 128 million wireless subscribers, according to data compiled by the CTIA Wireless Association. At the end of 2010, there were 302 million, equivalent to more than 96 percent of Americans. To keep up with the surge, wireless operators have spent billions of dollars upgrading their networks, adding more than 125,000 new cell sites since the end of 2001.

 


But is that enough? Despite anecdotal evidence to the contrary, industry experts interviewed by CNET believe that carriers have made huge strides on network reliability by doing simple things like expanding capacity and adding backup generators, and complex things like forming emergency response units. They believe we’re in far better shape than we were 10 years ago… to a point.

“There’s no question the networks are in a much better position today than they were in 2001 to handle a significant crisis, even in the face of a staggering increase in users,” said Charles Golvin, an analyst at Forrester Research. “But it’s also important for people to make contingency plans for communications, in case the network isn’t working.”

On the plus side
There’s reason for that guarded optimism. Carriers have added spectrum and high-capacity connections from their cell towers to the wired networks that transport voice and data traffic. And they’ve hardened their networks with equipment that can withstand heavy wind and rain, as well as ensure that the equipment remains functional when commercial power is lost.

In fact, all the major wireless carriers have increased the number of cell sites with backup power supplies. They’ve also increased the number of cell sites on wheels that they can roll into locations that have had infrastructure damage.

“A good proportion of the cellular base stations across all the major carriers now have some kind of battery or generator for backup power,” said Gerard Hallaren, an equities analyst at JRPG Research. “That wasn’t the case back in 2001.”

Verizon Wireless, for example, has for years been installing backup generators and batteries to many of its cell sites. During the 2003 blackout that kept much of the Northeast in the dark for hours, Verizon’s customers could still communicate when customers from other carriers could not.

There’s also that human element, in terms of specialized units that can quickly move into an area.

“In the event of a natural or man-made disaster, like 9/11, we have multiple groups within AT&T who are equipped to respond quickly to repair and restore network capabilities,” AT&T spokesman Mark Siegel said. “Our network disaster recovery team is a great example of that. They were deployed after 9/11 to assist in restorations, and we’ve invested $600 million in our NDR team since its formation.”

There are also new services. Enhanced 911, which allows 911 operators to locate callers on a cell phone, is a prime example. A decade ago, cell phones in the U.S. didn’t yet support the technology. Today, every phone sold in the U.S. is capable of providing location information to emergency 911 operators.

While text messaging was gaining popularity in Europe and elsewhere in 2001, SMS services in the U.S. weren’t used much by wireless subscribers because they worked only within carrier networks. This meant that on September 11, 2001, someone who wanted to send a text message to a family member or loved one could reach only people who subscribed to the same carrier.

A few months later, in November 2001, carriers began to connect their networks for text messaging, allowing subscribers on different networks to exchange texts. Today, more than 187 billion text messages cross U.S. wireless networks each month.

Text messaging has become a critical form of communication during a crisis. In the lead-up to Hurricane Irene on the East Coast last month, wireless operators and public-safety officials were asking consumers to use text messaging during the storm instead of making voice calls to help alleviate network congestion.

Text messaging is a better way of communicating in a crisis for several reasons. First, the messages are small and consume only a small amount of network resources. Second, messages are sent on a cell phone’s signaling channel. This means that they travel in a separate “lane” from voice and data traffic, so they may have a clear path when the voice network is congested. And if the network is too congested even to send a text, the message can be stored; when service resumes, the message is sent.
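
That store-and-forward behavior can be sketched in a few lines: a message that cannot be delivered waits in a queue and is retried as congestion eases. This illustrates the concept only; it is not how any carrier's message center is actually implemented, and the failure probabilities are invented.

import random
from collections import deque

def try_deliver(message, congestion):
    """Stand-in for the radio link: delivery fails with probability `congestion`."""
    return random.random() > congestion

def drain(queue, congestion):
    """Attempt every queued message once; failures go back in the queue."""
    for _ in range(len(queue)):
        message = queue.popleft()
        if try_deliver(message, congestion):
            print("delivered:", message)
        else:
            queue.append(message)  # store now, forward later

queue = deque(["I'm OK", "Meet at the school", "Call Mom"])
congestion = 0.9                             # during the spike, most attempts fail
while queue:
    drain(queue, congestion)
    congestion = max(0.0, congestion - 0.3)  # network recovers over time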

The technology has become such a ubiquitous and reliable form of communication during an emergency that the Federal Communications Commission is working on rules to allow 911 call centers to accept SMS text messages, as well as photos, videos, and data communications, to provide more information to first responders for assessing and responding to emergencies.

Congestion problem
The biggest problem wireless networks face today in a crisis is a rapid increase in usage: the networks don’t have enough capacity to handle the surge in call volume. Cellular networks are designed to handle a certain number of calls in each cell site or region, with wireless operators carefully calculating how much capacity is needed to serve average usage volume while leaving just enough headroom to handle spikes in demand.

The problem occurs when a disaster hits, and thousands of people all at once pick up their phones to call someone, send a text message, update Twitter, and so on. There simply isn’t enough capacity in the network to allow everyone in a cell site to make a phone call at the same time.
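
Capacity planners put numbers on this with queueing math. The classic Erlang B formula gives the fraction of call attempts blocked at a cell site for a given offered load (in erlangs) and number of channels. The channel count and loads below are illustrative assumptions, not any carrier's real figures.

def erlang_b(offered_load, channels):
    """Blocking probability computed with the standard Erlang B recurrence."""
    blocking = 1.0
    for k in range(1, channels + 1):
        blocking = offered_load * blocking / (k + offered_load * blocking)
    return blocking

CHANNELS = 40                # illustrative voice channels at one cell site
for load in (25.0, 100.0):   # a normal busy hour vs. a ~4x disaster surge
    print(f"{load:5.1f} erlangs -> {erlang_b(load, CHANNELS):.1%} of calls blocked")

A site engineered to block well under 1 percent of calls on a normal day can end up refusing more than half of all attempts when offered traffic quadruples, which is exactly the fast-busy signal callers hear after a disaster.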

Steve Largent, the president of the CTIA Wireless Association, argues that more wireless spectrum is needed to ensure that more “lanes” for data can be opened up for wireless operators to direct traffic to during a crisis.

“Crisis situations are a perfect example of why it’s so important that the government makes more wireless spectrum available,” he said.

While more spectrum could help, it’s unclear if it would ever be cost-effective for wireless operators to configure their networks to withstand the highest demand for network resources. Analyst Gerard Hallaren said most networks are designed to handle only about 20 percent to 40 percent of maximum traffic, with 40 percent being on the conservative side.

“It’s just economic insanity for any carrier to try to solve the congestion problem,” he said. “It’s cost-prohibitive to build a network that could serve 330 million at the same time. A service like that would cost hundreds of dollars a month, and people are not willing to pay that much for cell phone service.”

That said, the carriers say they’ve made improvements to their networks and are trying to alleviate the issue.

“We continue to build redundancies into our network and increase capacity so that it is not overwhelmed by ‘sudden calling events,'” AT&T’s Siegel said.

New generations of cellular technology have also helped make wireless more available during a crisis. The move from 2G to 3G, and now to 4G, will offer carriers more efficiencies in how they use their spectrum, which could also be a benefit during an emergency to alleviate network congestion.

The other major difference between September 2001 and now is that the mobile Internet as we know it today did not exist. Third-generation, or so called 3G, wireless networks were not deployed, and most people did not have access to the Internet from their cell phones. Facebook, Twitter, and other social-networking apps that people access easily from their cell phones today to share pictures, updates, and other information weren’t even invented back then. While this traffic also increases the load on networks, sometimes it’s easier for users to get through to these sites than to make voice connections via their cell phones.

“People have so many more ways of communicating with each other now to tell someone where they are or that they are all right,” Forrester’s Golvin said. “Having these communication alternatives is a huge improvement over where we were a decade ago.”

Public-safety officials had their own communications challenges responding to the terrorist attacks on September 11, 2001. Tomorrow CNET will explain the problems first responders had that day and why the public-safety community is still waiting for their own wireless network.

Mitsubishi victim of suspected Chinese cyber attack

BBC News – Japan defence firm Mitsubishi Heavy in cyber attack.

Japan’s top weapons maker has confirmed it was the victim of a cyber attack reportedly targeting data on missiles, submarines and nuclear power plants.

Mitsubishi Heavy Industries (MHI) said viruses were found on more than 80 of its servers and computers last month.

The government said it was not aware of any leak of sensitive information.

But the defence ministry has demanded MHI carry out a full investigation. Officials were angered after learning of the breach from local media reports.

Speaking at a news conference on Tuesday, Japan’s defence minister Yasuo Ichikawa said the cyber attackers had not succeeded in accessing any important information but MHI would be instructed “to undertake a review of their information control systems”.

“The ministry will continue to monitor the problem and conduct investigations if necessary,” Mr Ichikawa added.

All government contractors are obliged to inform ministers promptly of any breach of sensitive or classified information.

Analysis

The Ministry of Defence has said the delay in Mitsubishi Heavy Industries informing it of the cyber attack is “regrettable” – a bland term regularly deployed by Japanese bureaucrats to describe everything from near indifference to utter outrage.

But it is clear there is concern in Japan about security at the country’s biggest defence contractor.

Mitsubishi Heavy makes everything from warships to missiles. The giant company says it discovered the breach in mid-August, and informed the Japanese police at the end of the month.

But the defence ministry was not told until Monday afternoon, after reports had appeared in local media.

The key issue is just how serious the attack was – and whether any of Japan’s defence secrets have leaked.

Mitsubishi Heavy says the virus was confined to just 45 servers and 38 computer terminals – out of the many thousands it operates.

An ongoing internal investigation has found that only network information, such as IP addresses, has been compromised.

“It’s up to the defence ministry to decide whether or not the information is important. That is not for Mitsubishi Heavy to decide. A report should have been made,” a defence ministry spokesman was earlier quoted by Reuters as saying.

Better protection

The online attacks – which are believed to be the first of their kind against Japan’s defence industry – originated outside the company’s computer network, MHI said.

They have been described as spear phishing attacks – when hackers send highly customised and specifically targeted messages aimed at tricking people into visiting a fake webpage and giving away login details.
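
The defining trick in such attacks is the lookalike link: a URL whose host is almost, but not quite, a domain the victim trusts. A minimal defensive sketch follows; the domain names are invented for illustration and real mail filters are far more sophisticated.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-defence.co.jp"}  # invented allowlist

def is_suspicious(url):
    """Flag links whose host is off the allowlist, catching lookalikes that
    merely embed a trusted name, e.g. 'example-defence.co.jp.evil.test'."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

for link in ("https://example-defence.co.jp/login",
             "http://example-defence.co.jp.evil.test/login"):
    print(link, "->", "SUSPICIOUS" if is_suspicious(link) else "ok")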

Neither the Japanese government nor MHI has said who may be responsible. A report in one Japanese newspaper said Chinese-language script was detected in the attack against MHI.

But China rebuffed suggestions it could be behind the attacks.

“China is one of the main victims of hacking… Criticising China as being the source of hacking attacks not only is baseless, it is also not beneficial for promoting international co-operation for internet security,” foreign ministry spokesman Hong Lei said.

China has in the past been accused of carrying out online attacks on foreign government agencies and firms.

Beijing routinely denies that it is behind this kind of hacking but, says the BBC’s Defence Correspondent Jonathan Marcus, the US military is more and more concerned about China’s abilities in this field.

Fear of the “cyber-dragon” is driving forward a fundamental re-think of US policy which is coming more and more to regard computer hacking as a potential act of war, our correspondent adds.

MHI confirmed that 45 of its servers and 38 computers were infected by at least eight viruses.

The viruses targeted a shipyard in Nagasaki, where destroyers are built, and a facility in Kobe that manufactures submarines and parts for nuclear power stations, public broadcaster NHK reported.

A plant in Nagoya, where the company designs and builds guidance and propulsion systems for rockets and missiles, was also reportedly compromised.

MHI said it had consulted the Tokyo police department and was carrying out an investigation alongside security experts, which should be concluded by the end of the month.

Lockheed case

A second defence contractor, IHI, which supplies engine parts for military aircraft, said it had also been targeted.

IHI said it had been receiving emails containing viruses for months, but its security systems had prevented infection.

There are also reports that Japanese government websites, including the cabinet office and a video distribution service, have been hit by distributed denial-of-service attacks.

A typical DDoS attack involves hundreds or thousands of computers, under the control of hackers, bombarding an organisation’s website with so many hits that it collapses.

Last month, a Japanese defence white paper urged better protection against cyber attacks after US defence contractors were hit by a spate of assaults.

One of the most high-profile cases involved Lockheed Martin – the world’s biggest aerospace company, which makes F-16, F-22 and F-35 fighter jets as well as warships.

Although the firm said none of its programmes were compromised in the attack in May, it prompted other defence contractors to assess their own security measures.
