Category Archives: AI-ROBOTICS

U.S. Defense Lawyers Are Crippling Nation's Ability to Wage Cyberwar

Cyberwar, Lawyers, and the U.S.: Denial of Service – By Stewart Baker | Foreign Policy.

Lawyers don’t win wars. But can they lose one?

We’re likely to find out, and soon. Lawyers across the U.S. government have raised so many show-stopping legal questions about cyberwar that they’ve left the military unable to fight or even plan for a war in cyberspace. But the only thing they’re likely to accomplish is to make Americans less safe.

No one seriously denies that cyberwar is coming. Russia pioneered cyberattacks in its conflicts with Georgia and Estonia, and cyberweapons went mainstream when the developers of Stuxnet sabotaged Iran’s Natanz uranium-enrichment plant, setting back the Islamic Republic’s nuclear weapons program more effectively than a 500-pound bomb ever could. In war, weapons that work get used again.

Unfortunately, it turns out that cyberweapons may work best against civilians. The necessities of modern life — pipelines, power grids, refineries, sewer and water lines — all run on the same industrial control systems that Stuxnet subverted so successfully. These systems may be even easier to sabotage than the notoriously porous computer networks that support our financial and telecommunications infrastructure.

And the consequences of successful sabotage would be devastating. The body charged with ensuring the resilience of power supplies in North America admitted last year that a coordinated cyberattack on the continent’s power system “could result in long-term (irreparable) damage to key system components” and could “cause large population centers to lose power for extended periods.” Translated from that gray prose, this means that foreign militaries could reduce many U.S. cities to the state of post-Katrina New Orleans — and leave them that way for months.

Can the United States keep foreign militaries out of its networks? Not today. Even America’s premier national security agencies have struggled to respond to this new threat. Very sophisticated network defenders with vital secrets to protect have failed to keep attackers out. RSA is a security company that makes online credentials used widely by the Defense Department and defense contractors. Hackers from China so badly compromised RSA’s system that the company was forced to offer all its customers a new set of credentials. Imagine the impact on Ford’s reputation if it had to recall and replace every Ford that was still on the road; that’s what RSA is experiencing now.

HBGary, another well-respected security firm, suffered an attack on its system that put thousands of corporate emails in the public domain, some so embarrassing that the CEO lost his job. And Russian intelligence was able to extract large amounts of information from classified U.S. networks — which are not supposed to touch the Internet — simply by infecting the thumb drives that soldiers were using to move data from one system to the next. Joel Brenner, former head of counterintelligence for the Office of the Director of National Intelligence, estimates in his new book, America the Vulnerable, that billions of dollars in research and design work have been stolen electronically from the Defense Department and its contractors.

In short, even the best security experts in and out of government cannot protect their own most precious secrets from network attacks. But the attackers need not stop at stealing secrets. Once they’re in, they can just as easily sabotage the network to cause the “irreparable” damage that electric-grid guardians fear.

No agency has developed good defenses against such attacks. Unless the United States produces new technologies and new strategies to counter these threats, the hackers will get through. So far, though, what the United States has mostly produced is an outpouring of new law-review articles, new legal opinions, and, remarkably, new legal restrictions.

Across the federal government, lawyers are tying themselves in knots of legalese. Military lawyers are trying to articulate when a cyberattack can be classed as an armed attack that permits the use of force in response. State Department and National Security Council lawyers are implementing an international cyberwar strategy that relies on international law “norms” to restrict cyberwar. CIA lawyers are invoking the strict laws that govern covert action to prevent the Pentagon from launching cyberattacks.

Justice Department lawyers are apparently questioning whether the military violates the law of war if it does what every cybercriminal has learned to do — cover its tracks by routing attacks through computers located in other countries. And the Air Force recently surrendered to its own lawyers, allowing them to order that all cyberweapons be reviewed for “legality under [the law of armed conflict], domestic law and international law” before cyberwar capabilities are even acquired.

The result is predictable, and depressing. Top Defense Department officials recently adopted a cyberwar strategy that simply omitted any plan for conducting offensive operations, even as Marine Gen. James Cartwright, then vice chairman of the Joint Chiefs of Staff, complained publicly that a strategy dominated by defense would fail: “If it’s OK to attack me and I’m not going to do anything other than improve my defenses every time you attack me, it’s very difficult to come up with a deterrent strategy.”

Today, just a few months later, Cartwright is gone, but the lawyers endure. And apparently the other half of the U.S. cyberwar strategy will just have to wait until the lawyers can agree on what kind of offensive operations the military is allowed to mount.

***We’ve been in this spot before. In the first half of the 20th century, the new technology of air power transformed war at least as dramatically as information technology has in the last quarter-century. Then, as now, our leaders tried to use the laws of war to stave off the worst civilian harms that this new form of war made possible.

Tried and failed.

By the 1930s, everyone saw that aerial bombing would have the capacity to reduce cities to rubble in the next war. Just a few years earlier, the hellish slaughter in the trenches of World War I had destroyed the Victorian world; now air power promised to bring the same carnage to soldiers’ homes, wives, and children.

In Britain, some leaders expressed hardheaded realism about this grim possibility. Former Prime Minister Stanley Baldwin, summing up his country’s strategic position in 1932, showed a candor no recent American leader has dared to match. “There is no power on Earth that can protect [British citizens] from being bombed,” he said. “The bomber will always get through…. The only defense is in offense, which means that you have got to kill more women and children more quickly than the enemy if you want to save yourselves.”

The Americans, however, still hoped to head off the nightmare. Their tool of choice was international law. (Some things never change.) When war broke out in Europe on Sept. 1, 1939, President Franklin D. Roosevelt sent a cable to all the combatants seeking express limits on the use of air power. Citing the potential horrors of aerial bombardment, he called on all combatants to publicly affirm that their armed forces “shall in no event, and under no circumstances, undertake the bombardment from the air of civilian populations or of unfortified cities.”

Roosevelt had a pretty good legal case. The 1899 Hague conventions on the laws of war, adopted as the Wright brothers were tinkering their way toward Kitty Hawk, declared that in bombardments, “all necessary steps should be taken to spare as far as possible edifices devoted to religion, art, science, and charity, hospitals, and places where the sick and wounded are collected, provided they are not used at the same time for military purposes.” The League of Nations had also declared that in air war, “the intentional bombing of civilian populations is illegal.”

But FDR didn’t rely just on law. He asked for a public pledge that would bind all sides in the new war — and, remarkably, he got it. The horror at aerial bombardment of civilians ran so deep in that era that Britain, France, Germany, and Poland all agreed to FDR’s bargain, before nightfall on Sept. 1, 1939.

Nearly a year later, with the Battle of Britain raging in the air, the Luftwaffe was still threatening to discipline any pilot who bombed civilian targets. The deal had held. FDR’s accomplishment began to look like a great victory for the international law of war — exactly what the lawyers and diplomats now dealing with cyberwar hope to achieve.

But that’s not how this story ends.

On the night of Aug. 24, 1940, a Luftwaffe air group made a fateful navigational error. Aiming for oil terminals along the Thames River, they miscalculated, instead dropping their bombs in the civilian heart of London.

It was a mistake. But that’s not how British Prime Minister Winston Churchill saw it. He insisted on immediate retaliation. The next night, British bombers hit (arguably military) targets in Berlin for the first time. The military effect was negligible, but the political impact was profound. German Luftwaffe commander Hermann Göring had promised that the Luftwaffe would never allow a successful attack on Berlin. The Nazi regime was humiliated, the German people enraged. Ten days later, Adolf Hitler told a wildly cheering crowd that he had ordered the bombing of London: “Since they attack our cities, we will extirpate theirs.”

The Blitz was on.

In the end, London survived. But the extirpation of enemy cities became a permanent part of both sides’ strategy. No longer an illegal horror to be avoided at all costs, the destruction of enemy cities became deliberate policy. Later in the war, British strategists would launch aerial attacks with the avowed aim of causing “the destruction of German cities, the killing of German workers, and the disruption of civilized life throughout Germany.” So much for the Hague conventions, the League of Nations resolution, and even the explicit pledges given to Roosevelt. All these “norms” for the use of air power were swept away by the logic of the technology and the predictable psychology of war.

***American lawyers’ attempts to limit the scope of cyberwar are just as certain to fail as FDR’s limits on air war — and perhaps more so.

It’s true that half a century of limited war has taught U.S. soldiers to operate under strict restraints, in part because winning hearts and minds has been a higher priority than destroying the enemy’s infrastructure. But it’s unwise to put too much faith in the notion that this change is permanent. Those wars were limited because the stakes were limited, at least for the United States. Observing limits had a cost, but one the country could afford. In a way, that was true for the Luftwaffe, too, at least at the start. They were on offense, and winning, after all. But when the British struck Berlin, the cost was suddenly too high. Germans didn’t want law and diplomatic restraint; they wanted retribution — an eye for an eye. When cyberwar comes to America and citizens start to die for lack of power, gas, and money, it’s likely that they’ll want the same.

More likely, really, because Roosevelt’s bargain was far stronger than any legal restraints we’re likely to see on cyberwar. Roosevelt could count on a shared European horror at the aerial destruction of cities. The modern world has no such understanding — indeed, no such shared horror — regarding cyberwar. Quite the contrary. For some of America’s potential adversaries, the idea that both sides in a conflict could lose their networked infrastructure holds no horror. For some, a conflict that reduces both countries to eating grass sounds like a contest they might be able to win.

What’s more, cheating is easy and strategically profitable. America’s compliance will be enforced by all those lawyers. Its adversaries’ compliance will be enforced by, well, by no one. It will be difficult, if not impossible, to find a return address on their cyberattacks. They can ignore the rules and say — hell, they are saying — “We’re not carrying out cyberattacks. We’re victims too. Maybe you’re the attacker. Or maybe it’s Anonymous. Where’s your proof?”

Even if all sides were genuinely committed to limiting cyberwar, as they were in 1939, history shows that it only takes a single error to break the legal limits forever. And error is inevitable. Bombs dropped by desperate pilots under fire go astray — and so do cyberweapons. Stuxnet infected thousands of networks as it searched blindly for Iran’s uranium-enrichment centrifuges. The infections lasted far longer than intended. Should we expect fewer errors from code drafted in the heat of battle and flung at hazard toward the enemy?

Of course not. But the lesson of all this for the lawyers and the diplomats is stark: Their effort to impose limits on cyberwar is almost certainly doomed.

No one can welcome this conclusion, at least not in the United States. The country has advantages in traditional war that it lacks in cyberwar. Americans are not used to the idea that launching even small wars on distant continents may cause death and suffering at home. That is what drives the lawyers — they hope to maintain the old world. But they’re being driven down a dead end.

If America wants to defend against the horrors of cyberwar, it needs first to face them, with the candor of a Stanley Baldwin. Then the country needs to charge its military strategists, not its lawyers, with constructing a cyberwar strategy for the world we live in, not the world we’d like to live in.

That strategy needs both an offense and a defense. The offense must be powerful enough to deter every adversary with something to lose in cyberspace, so it must include a way to identify attackers with certainty. The defense, too, must be realistic, making successful cyberattacks more difficult and less effective because resilience and redundancy have been built into U.S. infrastructure.

Once the United States has a strategy for winning a cyberwar, it can ask the lawyers for their thoughts. But it can’t be done the other way around.

In 1941, the British sent their most modern battleship, the Prince of Wales, to Southeast Asia to deter a Japanese attack on Singapore. For 150 years, having the largest and most modern navy was all that was needed to project British power around the globe. Like the American lawyers who now oversee defense and intelligence, British admirals preferred to believe that the world had not changed. It took Japanese bombers 10 minutes to put an end to their fantasy, to the Prince of Wales, and to hundreds of brave sailors’ lives.

We should not wait for our own Prince of Wales moment in cyberspace.

Flying sphere for disaster recon

Flying sphere the size of a basketball that can travel at 37mph | Mail Online.

Its Japanese developers call it the ‘Futuristic Circular Flying Object’ and it’s designed to go where humans can’t.

The radio-controlled sphere, roughly the size of a basketball, was built for search and rescue operations, specifically to fly in and out of buildings weakened by earthquakes or other natural disasters.

The device uses its onboard camera to transmit live images of whatever it sees.


Radio-controlled: The ‘Futuristic Circular Flying Object’ uses an onboard camera to transmit live images of whatever it sees

The sphere was built for search and rescue operations, specifically to fly in and out of buildings weakened by earthquakes or other natural disasters

Flying object: The device zips through the air, glides smoothly around corners, and negotiates staircases with ease, all the while emitting a soft hum

The black, open-work ball looks like a futuristic work of art, but it can hover for up to eight minutes and fly at 37mph — although it does slow down for open windows.

 

Fumiyuki Sato, at the Japanese Defense Ministry’s Technical Research and Development Institute, invented and built the vehicle for roughly 110,000 yen (£865 / $1,390) with parts purchased off the shelf at consumer electronics stores.

He said: ‘Because of its spherical shape, it can land in various positions and tumble to move around the ground.’

It zips through the air, glides smoothly around corners, and negotiates staircases with ease, all the while emitting a soft hum.

Inventor: Fumiyuki Sato, at the Japanese Defense Ministry’s Technical Research and Development Institute, invented and built the vehicle for roughly £865

Slick: The black, open-work ball looks like a futuristic work of art, but it can hover for up to eight minutes and fly at up to 37mph

 

Resourceful: Mr Sato built the sphere with parts purchased off the shelf at consumer electronics stores

Measuring 42cm across, it boasts eight manoeuvrable rudders, 16 spoilers and three gyro sensors to keep it upright. It is made of lightweight carbon fibre and styrene components for a total weight of 340 grams.

If its lithium batteries lose power, it’s been designed simply to roll to a stop to minimise the chance of damage.

‘When fully developed, it can be used at disaster sites, or anti-terrorism operations or urban warfare,’ Mr Sato said.

Meanwhile, he added, there’s the pure fun of testing it.

Earthquake? Terrorist bomb? Call in the AI

Earthquake? Terrorist bomb? Call in the AI – tech – 23 May 2011 – New Scientist.

In the chaos of large-scale emergencies, artificially intelligent software could help direct first responders

9.47 am, Tavistock Square, London, 7 July 2005. Almost an hour has passed since the suicide bombs on board three underground trains exploded. Thirty-nine commuters are now dead or dying, and many more are badly injured.

Hasib Hussain, aged 18, now detonates his own device on the number 30 bus – murdering a further 13 and leaving behind one of the most striking images of the day: a bus ripped open like a tin of sardines.

In the aftermath of the bus bomb, questions were raised about how emergency services had reacted to the blast. Citizens and police called emergency services within 5 minutes, but ambulance teams did not arrive on the scene for nearly an hour.

As the events of that day show, the anatomy of a disaster – whether a terrorist attack or an earthquake – can change in a flash, and lives often depend on how police, paramedics and firefighters respond to the changing conditions. To help train for and navigate such chaos, new research is employing computer-simulation techniques to help first responders adapt to emergencies as they unfold.

Most emergency services prepare for the worst with a limited number of incident plans – sometimes fewer than 10 – that tell them how to react in specific scenarios, says Graham Coates of Durham University, UK. It is not enough, he says. “They need something that is flexible, that actually presents them with a dynamic, tailor-made response.”

A government inquest, concluded last month, found that no additional lives were lost because of the delay in responding to the Tavistock Square bomb, but that “communication difficulties” on the day were worrying.

So Coates and colleagues are developing a training simulation that will help emergency services adapt more readily. The “Rescue” system comprises up to 4000 individual software agents that represent the public and members of emergency services. Each is equipped with a rudimentary level of programmed behaviours, such as “help an injured person”.

In the simulation, agents are given a set of orders that adhere to standard operating procedure for emergency services – such as “resuscitate injured victims before moving them”. When the situation changes – a fire in a building threatens the victims, for example – agents can deviate from their orders if it helps them achieve a better outcome.
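The order-following-with-deviation loop described above can be sketched as a toy rule-based agent. This is purely illustrative — the actual Rescue system's internals have not been published, and all names and scoring values here are invented:

```python
# Toy responder agent: follows standing orders by default, but deviates
# when a change in the situation makes another action more valuable.
# All names and values are illustrative, not from the Rescue system.

def choose_action(orders, situation):
    """Pick the next action for one responder agent.

    orders: list of (action, expected_outcome) per standard procedure,
            highest priority first
    situation: dict of hazards currently observed by the agent
    """
    best_action, best_outcome = orders[0]
    for action, outcome in orders:
        # Deviation rule: a fire threatening victims makes evacuation
        # far more valuable than following procedure.
        if situation.get("fire") and action == "evacuate victims":
            outcome += 10
        if outcome > best_outcome:
            best_action, best_outcome = action, outcome
    return best_action

orders = [("resuscitate victims", 5), ("evacuate victims", 3)]
print(choose_action(orders, {}))              # no hazard: follow procedure
print(choose_action(orders, {"fire": True}))  # fire: deviate and evacuate
```

With no hazard the agent sticks to the standing order; when the fire flag appears, the deviation bonus flips its choice.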

Meanwhile, a decision-support system takes a big-picture view of the unfolding situation. By analysing information fed back by the agents on the ground, it can issue updated orders to help make sure resources like paramedics, ambulances and firefighters are distributed optimally.

Humans that train with the system can accept, reject or modify its recommendations, and unfolding event scenarios are recorded and replayed to see how different approaches yield different results. Coates presented his team’s work at the International Conference on Information Systems for Crisis Response and Management in Lisbon, Portugal, last week.

That still leaves the problem of predicting how a panicked public might react to a crisis – will fleeing crowds hamper a rescue effort, or will bystanders comply with any instructions they receive?

To explore this, researchers at the University of Notre Dame in South Bend, Indiana, have built a detailed simulation of how crowds respond to disaster. The Dynamic Adaptive Disaster Simulation (DADS) also uses basic software agents representing humans, only here they are programmed to simply flee from danger and move towards safety.

When used in a real emergency situation, DADS will utilise location data from thousands of cellphones, triangulated and streamed from masts in the region of the emergency. It can make predictions of how crowds will move by advancing the simulation faster than real-time events. This would give emergency services a valuable head start, says Greg Madey, who is overseeing the project.
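The "head start" idea — run the agent model faster than the real crowd moves — can be shown with a minimal sketch. This is a hypothetical toy, not DADS itself: agents on a one-dimensional street flee a danger point toward an exit, and the whole simulation is advanced many ticks ahead of real time:

```python
# Minimal crowd-flight sketch in the spirit of DADS (illustrative only):
# agents flee a danger point toward a safe exit, and stepping the model
# several ticks ahead predicts where the crowd will be before it arrives.

def predict_positions(positions, danger, exit_pos, steps, speed=1.0):
    """Advance every agent `steps` ticks; each tick, agents move toward
    the exit, faster while the danger is still close behind them."""
    positions = list(positions)
    for _ in range(steps):
        for i, x in enumerate(positions):
            direction = 1.0 if exit_pos > x else -1.0
            boost = 1.0 if abs(x - danger) < 5 else 0.0  # panic near danger
            positions[i] = x + direction * (speed + boost)
    return positions

# Phone-derived starting positions near a danger at x=0, exit at x=50.
start = [1.0, 3.0, 10.0]
print(predict_positions(start, danger=0.0, exit_pos=50.0, steps=10))
# → [13.0, 14.0, 20.0]
```

In a real deployment the starting positions would come from the triangulated cellphone data the article describes; here they are hard-coded.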

A similar study led by Mehdi Moussaïd of Paul Sabatier University in Toulouse, France, sought to address what happens when such crowds are packed into tight spaces.

In his simulation, he presumed that pedestrians choose the most direct route to their destination if there is nothing in their way, and always try to keep their distance from those around them. Running a simulation based on these two rules, Moussaïd and his colleagues found that as they increased the crowd’s density, the model produced crushes and waves of people just like those seen in real-life events such as stampedes or crushes at football stadiums (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1016507108). The team hope to use their model to help plan emergency evacuations.
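The two behavioural rules can be written down almost verbatim: head straight for the goal, but keep a minimum gap to the pedestrian in front. This toy one-dimensional corridor version (a simplification, not the published model) shows how raising the density produces a jam:

```python
# Two-rule pedestrian sketch (illustrative simplification of the model
# described above): rule 1, take the direct route to the goal; rule 2,
# keep a minimum distance from the pedestrian ahead.

def step(positions, goal=100.0, speed=1.0, min_gap=1.0):
    """One tick of a 1-D corridor. Returns new positions ordered
    front-to-back (closest to the goal first)."""
    ordered = sorted(positions, reverse=True)    # closest to goal first
    new = []
    for i, x in enumerate(ordered):
        desired = min(x + speed, goal)           # rule 1: direct route
        if i > 0:                                # rule 2: keep distance
            desired = min(desired, new[i - 1] - min_gap)
        new.append(max(desired, x))              # never pushed backward
    return new

sparse = step([0.0, 10.0, 20.0])
dense = step([0.0, 0.5, 1.0, 1.5, 2.0])
print(sparse)  # everyone advances a full step
print(dense)   # trailing agents are blocked by the gap rule: a jam forms
```

At low density every agent moves at full speed; at high density the gap rule stops the trailing agents, the simple analogue of the crushes the full two-dimensional model reproduces.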

Jenny Cole, head of emergency services at London-based independent think tank The Royal United Services Institute, wrote a report on how the different emergency services worked together in the wake of the London bombings. She remains “sceptical” about these kinds of simulations. “No matter how practical or useful they would be, there’s usually no money left in the end to implement them,” she says.

For his part, Coates says he plans to release his system to local authorities for free as soon as it is ready.

A cacophony of tweets

In the chaotic moments after disaster strikes, people often turn to Twitter for information. But making sense of a flurry of Twitter posts can be difficult.

Now Jacob Rogstadius at the University of Madeira in Portugal and his team have developed a system that sorts updates from Twitter by keyword – for example “Japan” or “earthquake” – and places them into an event timeline, without the need for hashtags.
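The core of such a system — matching tweets against plain keywords rather than hashtags, then ordering the hits by time — can be sketched in a few lines. This is an illustrative mock-up, not Rogstadius's actual code:

```python
# Keyword-based tweet bucketing (illustrative sketch): filter a stream
# of (timestamp, text) posts by plain keywords, no hashtags required,
# and return the matches time-ordered so they read as an event timeline.

def build_timeline(tweets, keywords):
    """tweets: list of (timestamp, text) pairs.
    Returns keyword-matching tweets sorted by timestamp."""
    keywords = [k.lower() for k in keywords]
    matches = [
        (ts, text) for ts, text in tweets
        if any(k in text.lower() for k in keywords)
    ]
    return sorted(matches)  # tuples sort by timestamp first

tweets = [
    (1305500, "Traffic fine downtown"),
    (1305400, "Strong earthquake felt in Sendai"),
    (1305450, "Japan quake: buildings shaking"),
]
for ts, text in build_timeline(tweets, ["earthquake", "Japan"]):
    print(ts, text)
```

The human review stage the article describes would then sit on top of timelines like this one, judging which clustered sources to trust.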

In the next phase of development, people will look at tweets clustered in this way to judge the pertinence and reliability of different sources of information, or request more – pictures of the area, for example – to create a virtual “incident room” as the crisis unfolds.

MIT unveils swimming, oil-cleaning robots

MIT unveils swimming, oil-cleaning robots – CNN.com.

By John D. Sutter, CNN
August 26, 2010 5:05 p.m. EDT | Filed under: Innovation

Prototypes of the MIT Seaswarm robots have been tested in the ocean, but they’re not ready for commercial use.

STORY HIGHLIGHTS

  • MIT develops a seagoing robot that uses a special material to absorb and gather oil
  • The robot, called Seaswarm, collects oil from the surface of water autonomously
  • The machines would work best in swarms of thousands, researchers say
  • The prototype will be unveiled on Saturday; researchers expect a commercial version within a year

(CNN) — Here’s a new way of looking at oil spill clean-up: Forget the big ships, massive work crews and hefty price tags.

Instead, just deploy an army of autonomous, oil-scrubbing robots. They can find the oil on their own. And when they reach the site of an oil spill, they talk to their robot friends to figure out the best way to get the whole thing mopped up.

That’s the vision the Massachusetts Institute of Technology put forward on Wednesday as the school announced the development of a prototype robot called Seaswarm. The $20,000 robots will be unveiled officially to the public on Saturday at an event in Venice, Italy, and will be ready to deal with oil spills in about a year, said Assaf Biderman, who oversaw MIT’s research team on the project.

The Seaswarm robots, which were developed by a team from MIT’s SENSEable City Lab, look like a treadmill conveyor belt that’s been attached to an ice cooler. The conveyor belt piece of the system floats on the surface of the ocean. As it turns, the belt propels the robot forward and lifts oil off the water with the help of a nanomaterial that’s engineered to attract oil and repel water.

“You can imagine it like a carpet rolling on the surface of the water,” said Biderman, who also is associate director of the SENSEable City Lab.

The material on the robot’s conveyor belt, which MIT calls a “paper towel for oil spills,” can absorb up to 20 times its weight in oil.

Once it has absorbed the crude from the surface of the ocean, the robot can either burn off the oil on the spot, using a heater on the “ice cooler” part of its body, or it can bag the oil and leave it on the surface of the water for a later pick-up, Biderman said. That oil could be reused or recycled.

The robots are designed to work in a swarm, he said, meaning thousands could be deployed on the same spill at once. They coordinate with each other by using GPS location data. That lets them plot out the most efficient way to tackle a clean-up project.
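One simple way a swarm could turn shared GPS fixes into a division of labour is greedy nearest-patch assignment: each robot claims the closest unclaimed slick. This is a hypothetical sketch — MIT has not published Seaswarm's coordination algorithm:

```python
# Greedy nearest-patch assignment (hypothetical sketch of swarm
# coordination; not Seaswarm's actual algorithm): each robot, using the
# GPS positions shared by the swarm, claims the nearest unclaimed patch.

import math

def assign_patches(robots, patches):
    """robots, patches: lists of (x, y) positions.
    Returns {robot_index: patch_index} via greedy nearest assignment."""
    remaining = dict(enumerate(patches))
    plan = {}
    for r_idx, (rx, ry) in enumerate(robots):
        if not remaining:
            break
        # Claim the closest patch still unassigned.
        p_idx = min(remaining, key=lambda p: math.dist((rx, ry), remaining[p]))
        plan[r_idx] = p_idx
        del remaining[p_idx]
    return plan

robots = [(0, 0), (10, 10)]
patches = [(9, 9), (1, 1), (20, 20)]
print(assign_patches(robots, patches))  # {0: 1, 1: 0}
```

With thousands of robots a real swarm would need something more decentralised, but greedy assignment shows the basic gain: no two robots waste energy converging on the same patch.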

Biderman said the Seaswarm robots are relatively cheap, quick and effective at cleaning up oil spills.

Had they been deployed on the Deepwater Horizon oil disaster, he said, the Seaswarm robots would have cleaned up the oil in two months at a cost of $100 million to $200 million, far less than the actual clean-up bill.

The Seaswarm robots operate on solar energy and require only 100 watts of power, or about that of a bright light bulb. They could stay at sea for months, Biderman said, and could operate around the clock.

The conveyor belts on the robots also are engineered to hug the water, preventing the machines from flipping over.

“Because it adheres to the surface of the water, it cannot capsize,” he said, “So it can withstand quite severe weather. Imagine this like a leaf that lands on the surface of the water and moves with the waves and the currents and cannot be flipped over.”

Traditional oil skimmers are attached to large boats. They must be operated by people, which increases their cost, and they are hampered by severe weather.

About 800 skimming boats were deployed in response to the Deepwater Horizon oil disaster, which began in April and led to nearly 5 million barrels of oil being released into the ocean, according to government estimates. By comparison, 5,000 to 10,000 of MIT’s autonomous robots would have been needed to respond to the spill, Biderman said.

MIT will continue research on the robots for about a year, he said, at which time the robot technology would be ready for commercial production and possibly a buyer.

Other groups are developing oil-spill cleanup technology, too.

Case Western Reserve University has developed another nanotechnology “sponge” material that could be used in response to such disasters.

And a company called Extreme Spill Technology says on its website that it has developed a traditional, boat-based skimming technology that works much more quickly and in rougher waters than the traditional skimmers.

Biderman said MIT’s oil-sopping robot would be most effective in situations like the recent oil disaster, where oil is spread out.

“Ideally, when spillage happens, the best thing to do is to contain it right where the spillage occurs,” he said. “But quite often the oil goes out of containment, and this is where this technology would be most effective.”

But the robots were actually designed with smaller, localized clean-ups in mind, he said.

“We’re hoping that spillage like what we’ve seen with Deepwater Horizon will not occur again, but oil leakage constantly happens and that’s really what motivated us,” he said. “When you drill offshore, you always have leakage. And you can imagine a team of robots waiting around the corner for a spill.”