Tag Archives: smart city

Is the Flood Map Due for a Big Data Makeover?

One of the staples of many cities’ and regions’ disaster planning and readiness is the flood map showing areas and, if you zoom in, structures at risk from floods of a given magnitude. These are published by FEMA in the U.S. and equivalent government agencies in other countries.

Flood maps are not glamorous or technologically exciting things.  They have done their work for many years and, provided that they are up to date, are an effective way of communicating a generalized level of risk. However, they are far from perfect, and it is possible to identify a number of improvements that could be made with some of the Internet of Things and big data technologies now available. In so doing, the flood map could become a poster child for the idea of smart cities.

See also: The 2 New Realities Because of Big Data

First, flood maps are regularly not up to date, because they are updated on a five- or 10-year cycle (or, in poorer or less capably governed locations, whenever funds become available). In the interim, new understanding of weather patterns, sea level rise and the like can change the definition of the appropriate flood scenarios to apply, and entirely new settlement and urbanization patterns can emerge. Flood maps would clearly be more useful if they were more dynamic – if the timescale for updating them were compressed.

At the same time, because of their scale, flood maps cannot really capture localized variations in risk. The example below shows how these may apply even at the scale of individual homes, in this case in Florida.

(With thanks to Coastal Risk Consulting, an IBM Business Partner)

If this local variation were applicable only to residential properties, that would be one thing (although bad enough for the owners of the higher-risk homes!). But if the variation made the difference between having part of the local phone or internet system working or not, or if it meant that a hospital thought to be safe was actually at risk of its ER wing being under 18 inches of water, that would clearly be something else again, because it could badly derail emergency response. Flood maps clearly need to be more granular – more detailed – as well as more dynamic.

Improvements in dynamism are already being made, as the availability of commercial mapping services from Google, TomTom and others might make one suspect. These are updated rather more frequently than five to 10 years! There are also considerable improvements in granularity now available, as the above example showed – companies like Coastal Risk Consulting will provide LIDAR-based risk assessments at the level of individual properties. Different flood models can be plugged in to allow a city, business or a homeowner (or their insurers) to assess risk arising at individual locations from different scenarios.

See also: Flood Insurance at the Crossroads

But the improvements in dynamism and granularity could, in theory, go much further. The concept of elevation (above sea level or above a river) probably brings to mind something that is a given, fixed and invariable, unless you happen to be looking at geological timescales. But there are factors that can modify the effective value of elevation, and they operate on a much shorter timescale. Consider a building that is 10 feet above sea level but protected by a levee 10 feet high. It may be said to have 20 feet of “virtual elevation,” inasmuch as it would require a flood crest of more than 20 feet above sea level to flood the property. Similarly, take a property 10 feet above sea level but in the area covered by a flood pump or storm drain that can remove 1.5 feet of water from that area. The property may be said to have 11.5 feet of “virtual elevation.” A property may also have a virtual elevation of less than its physical elevation if, for example, building work or a wall or pavement channels additional water toward it.
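The arithmetic above can be made concrete in a short sketch. This is a minimal illustration of the “virtual elevation” idea as described in the text, not a published model; the class and field names are assumptions chosen for clarity.

```python
# Sketch of "virtual elevation": physical elevation adjusted upward by
# protective works (levees, pumps/storm drains) and downward by factors
# that channel extra water toward the site. Illustrative only.

from dataclasses import dataclass

@dataclass
class Parcel:
    physical_elevation_ft: float        # height above sea level
    levee_height_ft: float = 0.0        # protection from a levee, if any
    pump_capacity_ft: float = 0.0       # feet of water a pump/drain can remove
    channeling_penalty_ft: float = 0.0  # extra water channeled toward the site

    def virtual_elevation(self) -> float:
        return (self.physical_elevation_ft
                + self.levee_height_ft
                + self.pump_capacity_ft
                - self.channeling_penalty_ft)

# The two examples from the text:
behind_levee = Parcel(physical_elevation_ft=10.0, levee_height_ft=10.0)
with_pump = Parcel(physical_elevation_ft=10.0, pump_capacity_ft=1.5)

print(behind_levee.virtual_elevation())  # 20.0
print(with_pump.virtual_elevation())     # 11.5
```

The point of structuring it this way is that each contributing factor is a separate, updatable field – which matters once those factors start changing on short timescales.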

The point about virtual elevation is that it may change in any given location by the year as, say, gophers undermine the levee; by the month, as an area is paved; by the day, if the flood pump is being maintained; or even by the minute, if the pump suddenly fails (perhaps when its power supply is compromised by flooding elsewhere)! Virtual elevation is a highly dynamic, highly granular concept that a typical flood map would fail to capture – yet one that may make the difference between a critical asset being operable or not, or an evacuation route being open or not. A city faced with an oncoming storm surge, or a rainfall event upstream of where it is located, might therefore need to ask: “What’s our virtual elevation – our disposition – right now?” The answer might make a significant difference to its standing emergency management plans and require significant adjustments.

All of which tends to imply that the traditional flood map really needs a makeover. At a minimum, while it still provides the baseline, the structures and urban extents that it shows need to be updated, say, annually; making the flood map part of a more interactive tool that allows different weather scenarios to be applied would also be a step forward.

In reality, the flood map would represent one end of a continuum stretching to something much more contemporaneous. Using the same core baseline data, changes to virtual elevation could be assessed as plans are approved or building permits are issued, or as assets are maintained and their records are updated.
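One way to picture this continuum is a baseline map that absorbs a stream of small changes as permits are issued or asset records are updated. The sketch below is a hypothetical illustration of that event-driven idea; the parcel IDs, field names and `apply_update` helper are assumptions, not any agency's actual data model.

```python
# Minimal sketch of incrementally updating a baseline flood map as small
# events arrive (a pump taken offline, an area paved, a permit issued).
# All identifiers are illustrative assumptions.

baseline = {
    "parcel-001": {"virtual_elevation_ft": 20.0},
    "parcel-002": {"virtual_elevation_ft": 11.5},
}

def apply_update(flood_map: dict, parcel_id: str,
                 delta_ft: float, reason: str) -> None:
    """Apply one small change to a parcel's virtual elevation,
    keeping a history of why it changed."""
    record = flood_map[parcel_id]
    record["virtual_elevation_ft"] += delta_ft
    record.setdefault("history", []).append(reason)

# The flood pump serving parcel-002 goes down for maintenance,
# so the 1.5 ft of pump capacity is lost until it returns:
apply_update(baseline, "parcel-002", -1.5, "flood pump offline")

print(baseline["parcel-002"]["virtual_elevation_ft"])  # 10.0
```

Each update is tiny – “small data” – but applied across every parcel, pump and levee in a city, the aggregate is exactly the big-data-scale, continuously current picture discussed below.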

In this way the flood map would illustrate the observation that “big data” should really be labeled “small data” – but at enormous scale. If the extra data flows can be added to improve the flood map’s dynamism to, say, a daily or weekly update, and its granularity to the individual property or asset level, it would be transformed from a reference baseline that may or may not be up to date at any given point in time into a live tool that supports day-to-day decision making.

‘Smart Cities’ Are Wide Open to Hackers

A monster storm is on a collision course with New York City, and an evacuation is underway. The streets are clogged, and then it happens. Every traffic light turns red. Within minutes, the world’s largest polished diamond, the Cullinan I, on loan to the Metropolitan Museum of Art from the collection of the British crown jewels, is whisked away by helicopter.

While this may sound like the elevator pitch for an action film, the possibility of such a scenario is more fact than fiction these days.

Cesar Cerrudo is the chief technology officer at IOActive Labs, a global security firm that assesses hardware, software and wetware (that is, the human factor) for enterprises and municipalities. A year ago, Cerrudo made waves when he demonstrated how 200,000 traffic sensors located in major cities around the U.S. — including New York, Seattle, Washington, D.C., and San Francisco — as well as in the U.K., France and Australia, could be disabled or reprogrammed because the Sensys Networks sensor system that regulated them was not secure. According to ThreatPost, these sensors “accepted software modifications without double-checking the code’s integrity.” Translation: There was a vulnerability that made it possible for hackers to reprogram traffic lights and snarl traffic.

A widely reported discovery, first discussed last year at a “black hat” hacker convention in Amsterdam, highlighted a more alarming scenario than the attack of the zombie traffic lights. Researchers Javier Vazquez Vidal and Alberto Garcia Illera found that it was possible, through a simple reverse engineering approach to smart meters, for a hacker to order a citywide blackout.

The attacks made possible by the introduction of smart systems are many, and with every innovation a city’s attackable surface grows. The boon of smart systems brings with it a corresponding responsibility: It is critical for municipalities to ensure that these systems are secure. Unfortunately, there are signs of a responsibility gap.

According to the New York Times, Cerrudo successfully hacked the same traffic sensors that made news last year, this time in San Francisco, despite reports that the vulnerabilities had been addressed after the initial flurry of coverage when he revealed the problem a year ago. It bears saying the obvious here: Cerrudo’s findings are alarming.

The integration of smart technology into municipal infrastructure is still new. The same Times article notes that the market for smart city technology is expected to reach $1 trillion by 2020. As with all new technology, compromises are not only possible but perhaps even likely in the beginning. The problem here is that we’re talking about large, populous cities: As they become ever more wired, they become more vulnerable.

The issue is not dissimilar from the one facing private-sector leaders. Organizations must constantly defend against a barrage of advanced and persistent attacks from an ever-growing phalanx of highly sophisticated hackers. Some of them work alone. Others are organized into squadrons recruited or sponsored by foreign powers — as we have seen with the North Korean attack on Sony Pictures and the megabreach of Anthem, suspected to be at the hand of Chinese hackers — for a variety of purposes, none of them good.

The vulnerabilities are numerous, ranging from the power grid to the water supply to the ability to transport food and other necessities to where they are needed. As Cerrudo told the Times, “The current attack surface for cities is huge and wide open to attack. This is a real and immediate danger.”

The solution, however, may not be out of reach. As with the geometric expansion of the Internet of Things market, there is a simple problem here: lack of familiarity at the user level — where human error is always a factor — with proper security protocols. Those protocols are no secret: encryption, long and strong password protection and multifactor authentication for users with security clearance.

While the protocols are not a panacea for the problems that face our incipiently smart cities, they will go a long way toward addressing security hazards and pitfalls.

Cerrudo also has advocated the creation of computer emergency response teams (CERTs) “to address security incidents, coordinate responses and share threat information with other cities.” While CERTs are crucial, the creation of a chief information security officer role in municipal government to quarterback security initiatives and direct defense in a coordinated way may be even more important in addressing the problems that arise from our new smart cities. In the pioneering days of the smart city, there are steps that municipalities can take to keep their cities running like clockwork.

It starts with an active approach to security.

This article was written by ThirdCertainty contributor Adam Levin. Levin is chairman and co-founder of Credit.com and Identity Theft 911. His experience as former director of the New Jersey Division of Consumer Affairs gives him unique insight into consumer privacy, legislation and financial advocacy. He is a nationally recognized expert on identity theft and credit.