Data, data everywhere; and not a bit to eat

Data sets

Data sets are packages of information that you can use before, during or after a disaster.  It is important in planning to determine who has what data, whether you can get access to it, and whether it is compatible with your systems.

The following are common data sets that provide baseline data, or information that is available prior to any events occurring.  These are useful for planning purposes and exercises.

  • Baseline data: Topography; Political boundaries; Demography; Land ownership / use; Critical facilities
  • Scientific Data: Hydrography / hydrology; Soils; Geology; Seismology
  • Engineering and Environmental Data: Control structures: locks, dams, levees; Building inventories/codes; Transportation, bridges, tunnels; Utility infrastructure, pipelines, power lines; Water quality; Hazardous sites; Critical facilities
  • Economic Data
  • Census Data

Other data sets are only available during a response.  Some data sets are specific to the incident.  These are usually dynamic and depend heavily on solid damage assessment and data exchange agreements.

  • Critical infrastructure status: Road and bridge closures; Airport status; Utility status – water / electricity / gas / telephone
  • Secondary hazards: Condition of dams and levees; Fires and toxic release potential
  • Resource Information: Personnel deployment; Equipment deployment
  • Medical system condition: Hospital status; Nursing home status
  • Casualty information: Injuries and deaths; Medical evacuations; Location of trapped persons; Evacuation routes; Shelters

Keep in mind that not everyone uses data for the same purpose.  Many organizations perform a damage assessment, and all of that data gets called “damage assessment”, but it is not interchangeable because each organization has different needs and standards.  Some examples include:

  • FEMA looks at major infrastructure and systems, and overall impact.
  • SBA looks at impact to businesses.
  • USACE looks at impact to locks, dams, and their projects.
  • Red Cross looks at impact to individuals and families.
  • Insurance companies look at impact to policy holders.

Most of the data sets include a geographic location, such as an address, road, or other positioning information.  This helps transform the data set from just existing in a database to being analyzed through a GIS tool.  More on that in the GIS section.
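As an illustrative sketch, a geocoded record might look like the following in Python.  The field names here are invented for the example, not taken from any real data standard:

```python
# Hypothetical example: a single damage-assessment record.
# The geographic fields (address, latitude, longitude) are what let a
# GIS tool map this record instead of it just sitting in a database.
record = {
    "incident_id": "2024-0001",
    "category": "road_closure",
    "status": "closed",
    "address": "100 Main St",
    "latitude": 38.8977,
    "longitude": -77.0365,
}

def is_mappable(rec):
    """A record can be plotted if it carries coordinates or an address."""
    return ("latitude" in rec and "longitude" in rec) or "address" in rec

print(is_mappable(record))  # True
```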


Databases

A database is a location that stores data.  Databases can be simple or very complex.  Remember that we’re looking at this from the perspective of an emergency manager.  You should be familiar with the terms and some other basic information, but leave complex database creation to the SMEs.

Here are a few terms that are used in database discussions:

  • Database – an organized collection of data
  • Table – data organized in rows and columns
  • Attribute or field – a variable or item, think of a cell in a spreadsheet
  • Record – a collection of attributes
  • Domain – the range of values an attribute may have
  • Key – unique data used to identify records
  • Index – a structure that organizes and orders records for retrieval
  • Data dictionary & schema – documentation of connections between all the parts

Quick Tip: never buy or accept a database from someone without a properly documented data dictionary and schema.  Having them will save you hassle in the future when you need support or want to make changes.
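A data dictionary doesn’t have to be fancy.  Here is a minimal sketch in Python, with made-up table and field names, showing how documented domains can even be used to validate incoming data:

```python
# A minimal, illustrative data dictionary for one table.
# Table name, field names and domains are all assumptions for the example.
data_dictionary = {
    "table": "shelters",
    "fields": {
        "shelter_id": {"type": "int", "key": True,
                       "description": "Unique identifier for each shelter"},
        "name": {"type": "text", "key": False,
                 "description": "Shelter name"},
        "status": {"type": "text", "key": False,
                   "domain": ["open", "closed", "full"],
                   "description": "Current operating status"},
    },
}

def validate(record, dictionary):
    """Reject values that fall outside a field's documented domain."""
    for field, value in record.items():
        spec = dictionary["fields"].get(field, {})
        if "domain" in spec and value not in spec["domain"]:
            return False
    return True

print(validate({"shelter_id": 1, "status": "open"}, data_dictionary))     # True
print(validate({"shelter_id": 2, "status": "on fire"}, data_dictionary))  # False
```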

A database can either store all the data in a single table or spread it across multiple tables.  Large amounts of data are best handled in a multi-table database, but this complicates sharing the data because the tables must stay linked together.  In a properly designed and documented database, that shouldn’t be a problem.

A single-table database is like keeping data in Microsoft Excel: it is simple to create all the rows and columns on one sheet.  Data with many data points, or with duplicate data points, may be better organized in a multi-table database, where key fields are used to link records across tables.  Tables can have a one-to-one (1:1) relationship.  For example, if one table contained your personal information and another table contained your transcript, that would be a one-to-one relationship.  Tables can also have a one-to-many (1:many) relationship.  A table of course descriptions may link by course name to everyone’s individual grades: one course is taken by many people.  The course description could be updated once without touching the record of every person who took it.  If you want to dive deeper, start at
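The course-and-grades example can be sketched with Python’s built-in sqlite3 module.  The table and column names below are invented for illustration:

```python
import sqlite3

# One course description, many individual grade records,
# linked by a key field (course_id): a 1:many relationship.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE courses (course_id TEXT PRIMARY KEY, description TEXT)")
cur.execute("CREATE TABLE grades (student TEXT, course_id TEXT, grade TEXT)")

cur.execute("INSERT INTO courses VALUES ('EM101', 'Intro to Emergency Management')")
cur.executemany("INSERT INTO grades VALUES (?, ?, ?)",
                [("Ana", "EM101", "A"), ("Ben", "EM101", "B"), ("Cai", "EM101", "A")])

# Update the description once; every linked grade record sees the change.
cur.execute("UPDATE courses SET description = 'Intro to EM (revised)' "
            "WHERE course_id = 'EM101'")

rows = cur.execute("""
    SELECT g.student, g.grade, c.description
    FROM grades g JOIN courses c ON g.course_id = c.course_id
""").fetchall()
for row in rows:
    print(row)
```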


Meta data

Meta data is a way of providing information about data (or about anything else, really).  Metadata makes data retrieval and understanding much easier.  It also makes data gathering more complicated and difficult, since more work is required on the data-gathering side.  In my experience, everyone agrees that good metadata is good.

Like everything else, garbage in = garbage out.  A well-designed data capture form is an example of data and metadata together: each field is described as to its use and what it contains.

Another way to look at meta data: the nutrition facts label is meta data for the banana.  And, in a sense, the banana is meta data for the information listed in the nutrition facts.

(Image: a banana and its nutrition facts label)
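In code, the banana analogy might look like this: the data is a bare number, and the metadata (all field names invented here) tells you what that number means:

```python
# The data: a raw measurement with no context.
reading = {"value": 1000}

# The metadata: the "nutrition label" that describes the data.
metadata = {
    "field": "value",
    "meaning": "Number of people currently registered at the shelter",
    "units": "persons",
    "collected_by": "shelter manager",
    "as_of": "2024-06-01T14:00Z",
}

def describe(data, meta):
    """Combine data with its metadata into something a reader can use."""
    return f"{data['value']} {meta['units']}: {meta['meaning']}"

print(describe(reading, metadata))
```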



So much information about information

Information.  There is a lot of information about information.  Effective use of information is expected, yet in my experience there are always “day after experts” who will tell you it could have been done better.

This is really the overarching issue with information.  Craig Fugate has noted on a number of occasions: every EOC has the weather radar or satellite image of a hurricane making landfall.  How many people in the EOC do you think can actually read that image?  Much of what you see in an EOC is eye-candy or part of the “theater of disaster” for visitors.

High-tech EOCs and pretty graphics are useless unless they help someone make a decision. – Craig Fugate

Information sources include: Historical records, personal experience, first responders, local and state EOCs, media (traditional & new), remote sensing, and the general public (inside and outside event area)

Imagine if all smart phones had sensors (like GPS, microphones, photo, video, and text capability) that people could use to upload multi-media information to first responders.

Oh wait.  They do.

Gathering and analyzing data

The challenge is in the gathering and analyzing of the data.  That is where the information hierarchy, or “DIKW” model, comes in to help understand it.

  • Data is a simple specific fact.  Alone it doesn’t do anything and doesn’t have much value.  Data needs to be built on.  Example: There are 1000 people in a shelter.
  • Information is data processed to be relevant and meaningful.  It adds the “what” element.  Example: There are 1000 people in a shelter and no toilets.
  • Knowledge is information combined with opinions, skills and experience.  It adds the “how to” element.  Example: 1000 people in a shelter will need 75 toilets.
  • Instead of Wisdom, I add Action.  Knowledge is good, but putting that knowledge to action is better.  Example: We need to get 75 toilets for the 1000 people in the shelter.
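The four steps above can be sketched in a few lines of Python.  The 75-toilets-per-1000-people ratio comes straight from the example in the text; it is illustrative, not an official planning standard:

```python
import math

TOILETS_PER_PERSON = 75 / 1000  # the ratio from the shelter example above

def toilets_needed(shelter_population):        # knowledge: the "how to" rule
    return math.ceil(shelter_population * TOILETS_PER_PERSON)

data = 1000                                    # data: a bare fact
information = {"people": data, "toilets_on_hand": 0}  # information: the "what"
needed = toilets_needed(information["people"])        # knowledge applied
action = f"Order {needed - information['toilets_on_hand']} toilets for the shelter."

print(action)  # Order 75 toilets for the shelter.
```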

Imagine being asked three questions: “So what’s the problem?”; “What would fix it?”; “How will you do it?”  When you ask these questions, you are loosely following the DIKW model to turn data into action.

Information processing steps

All data systems, including you, will follow basic steps to process data.  These steps can be very simple or increasingly complex depending on the needs.

Project management folks will recognize a critical step that is assumed before these can occur: the requirements.  What are your information needs?  What is the goal that you’re trying to achieve with this work?

  • Collection: Gathering and Recording Information
  • Evaluation: Determine confidence: credibility, reliability, validity, relevance
  • Abstracting: Editing and reducing information
  • Indexing: Classification for retrieval
  • Storage: Physical storage of information
  • Dissemination: Get the right information to the right people at the right time
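As a toy sketch, the six steps can be chained together as simple functions.  Everything here is drastically simplified and invented for illustration; real systems are far more involved:

```python
def collect(reports):                       # Collection: gather and record
    return [r.strip() for r in reports]

def evaluate(items):                        # Evaluation: keep only sourced reports
    return [i for i in items if "source:" in i]

def abstract(items):                        # Abstracting: trim to the core message
    return [i.split("source:")[0].strip() for i in items]

def index(items):                           # Indexing: classify for retrieval
    return {f"msg-{n}": text for n, text in enumerate(items, start=1)}

store = {}                                  # Storage: where indexed items live

def disseminate(storage, key):              # Dissemination: retrieve on demand
    return storage.get(key, "not found")

raw = ["  Bridge out on Route 9 source: county DOT ", "Rumor of flooding"]
store.update(index(abstract(evaluate(collect(raw)))))
print(disseminate(store, "msg-1"))  # Bridge out on Route 9
```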

Many systems are good at the collection of data.  That’s usually the easy part.  Finding the right computer algorithms to handle evaluation to indexing is the hard part.  Google is an example of a company that has gotten this down so well that they became the top company for internet searches.




Cyber-security and disasters

More and more systems are being connected to share information, and IP networks provide a very cost-effective solution.  One physical network can be used to connect many different devices.  The water company can use a computer interface to control the water pumps and valves at treatment plants and throughout the distribution system.  The natural gas and electric providers can do the same.  Hospitals connect medical devices throughout the facility to central monitoring stations, so a few people in one room can watch all the ICU patients.  Fire departments, law enforcement and EMS can use a wireless network to communicate, dispatch units, provide navigation, and track vehicle telematics to manage maintenance cycles.

Not all networks need to lead to the internet; however, fully disconnected networks are rare and need to be specifically designed in when the system is being built.  Having a physically separate system does provide the best security, if all the data is kept internal to that network.  Remember that internal-only networks are still subject to security issues from internal threats.

Any network or device that does have an internet connection is subject to external attacks through that connection.  A malicious hacker can break into the water treatment system and change the valves to contaminate drinking water.  They could open all the gates on a dam, flooding downstream communities.  They could reroute electrical paths to overload circuits or shut down other areas.  They could change the programming so dispatchers send the farthest unit instead of the nearest, or create false dispatch instructions.

Cyber attacks can disable systems, but they can also create real-world disasters.  First responders are trained to consider secondary devices during intentionally started emergencies.  What if that secondary device is a cyber attack, or a cyber attack precedes a real event?  During the September 2001 attacks in New York City, a secondary effect of the plane hitting the tower was the crippling of the first responders’ radio system.  Imagine if a cyber attack had been coordinated with the plane’s impact.  The attackers could turn all traffic lights to green, causing traffic accidents at nearly every intersection.  This would snarl traffic and prevent the first responders from getting to the towers.

A side step on the use of the term hacker.  A hacker is anyone that hacks together a technical or electronics solution in an uncommon way.  I explain it as “MacGyver’ing” a solution.  There is no positive or negative connotation in the term used that way.  Hacker also describes a person that breaks into computer systems by bypassing security.  A more accurate description is calling them a cracker, like a safe cracker.  This type of hacker is divided into criminals (black hats) and ethical hackers (white hats).  Ethical hackers are people who test computer security by attempting to break into systems.

By now, you’re probably aware of the Anonymous hacker group.  They have been collectively getting more organized and increasing in actions that drive toward internet freedom since 2008.  Often they’re called “hacktivists” meaning they hack to protest.  There are many more malicious hackers out there with different agendas: status, economic, political, religious … any reason people might disagree could be a reason for a hacker.

Somewhere on the internet is a team of highly trained cyber ninjas that are constantly probing devices for openings.  They use a combination of attack forms including social engineering (phishing) attacks.  Automated tools probe IP addresses in a methodically efficient manner.  The brute force method is used to test common passwords on accounts across many logins.  Worms and Trojans are sent out to gather information and get behind defenses.  Any found weaknesses will be exploited.

Pew Internet reports that 79% of adults have access to the internet and two-thirds of American adults have broadband internet in their home.  The lower cost of computers and internet access has dramatically increased the number of Americans online.  The stand-alone computer connected to the internet has forced the home user into the roles of system administrator, software analyst, hardware engineer, and information security specialist.  They must be prepared to stop the dynamic onslaught of cyber ninjas, yet are armed only with the tools pre-loaded on the computer or off-the-shelf security software.

Organizations are in both a better and a worse position.  An enterprise network can afford full-time professionals to ensure the software is updated and the security measures meet emerging threats, plus professional resources for sharing information with peers.  Enterprise networks are also a larger target, especially for hackers looking to increase their online reputation.

On Disasters

During a disaster, there will be many hastily formed networks.  The nature of rushed work increases the number of errors and loopholes in technical systems.

During the Haiti Earthquake response, malware and viruses were common across the shared NGO networks.  The lack of security software on the laptops created major problems.  Some organizations purchased laptops and brought them on-scene without any preloaded security software.  Other organizations hadn’t used their response computers in over a year, so no recent security patches to the operating systems or updates to the anti-virus software had been applied.  USB sticks moved data from computer to computer, bypassing any network-level protections.  The spread of malware and viruses across the networks caused problems and delays.

There are a number of key factors when designing a technology system that will be used in response that differ from traditional IT installations.  One of the most important considerations is a way for the system to be installed in a consistent manner by people with minimal technical skills.  Pre-configuration will ensure that the equipment is used efficiently and in the most secure manner.



Understanding networking

As a manager, it is not your responsibility to know how to configure a router and make things work in the network.  The best way to think about networking is the “black box” theory: you don’t really care how the individual parts work; you need to know what they are capable of.  Believe it or not, networking is really simple.

At its simplest, a network is a few computers connected by wire to a network device that shares information with each computer.  A network is similar to a big post office, sharing information packets electronically.  Each computer has a unique name that helps the network devices know what information goes to which computer.

The internet is an IP-based network.  IP stands for Internet Protocol.  Easy, huh?  The Transmission Control Protocol is the way that computers break up large data chunks to send across the internet.  Stick the two together and you get the commonly referenced TCP/IP.  There are other forms of message handling—such as User Datagram Protocol (UDP)—to move information across the internet.  You don’t need to know how these work or move information.  Just know that IP is the backbone of the internet.
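As a tiny illustration of data moving across an IP network, here is a single UDP datagram sent to ourselves over the loopback interface using Python’s standard socket module.  This is a sketch only; real traffic crosses many routers on its way:

```python
import socket

# Receiver: bind to the loopback interface; port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(5)
addr = receiver.getsockname()

# Sender: fire one UDP datagram ("fire and forget" - no connection setup).
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"road closed at mile 12", addr)

# Read the datagram back out of the receiver's buffer.
message, _ = receiver.recvfrom(1024)
print(message.decode())  # road closed at mile 12

sender.close()
receiver.close()
```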

Any data that you can turn into an IP packet can travel over an IP network; that data can also travel local networks and the internet.  When a phone converts voice to an IP packet, it is called a Voice over IP phone (VoIP) meaning that it can send your phone call over the same network as email, web browsing, and everything else.

Blah-blah over IP is nothing fancy.  That means that someone has designed a network device (or interface) that translates information from a source to an IP-packet, and back.  You’ll hear about Radio-over-IP, Video-over-IP, Computer-over-IP, and just about everything else.

Data standards are really important in this area.  When each vendor comes up with its own way of doing ____-over-IP, vendors are unlikely to be compatible unless they use a standard.  While there are organizations that publish standards, a standard’s true usefulness is proven by whether and how people use it.

The International Telecommunications Union (ITU) has a series of standards for videoconferencing, including ITU H.320 and H.264.  When Cisco Telepresence was released, it was designed to bring a meeting-room presence to teleconferencing.  Part of the design was full-size displays that blended both conference rooms.  It was not compatible with any other video conferencing systems.  The Cisco sales rep explained to me that their product would look poor if it was used with lower-quality non-telepresence systems, so the decision was made to use a non-standard data format.  The problem with this is that it would require companies to invest in two separate video conferencing systems.  More recent advances have allowed some mixed use of video conferencing systems.

Now that we’ve talked a bit about what can go across the network, let us turn back to the network itself.

There are many different formats of networks.  A quick internet search on “network topology” will show the different forms.  Each has an advantage and a disadvantage.  For this course, the focus will be on a tree topology.  An internet connection enters a site through one point.  Switches and routers are used to split that internet connection to all the individual computers.

A demarc (short for demarcation) point is where a utility enters a building.  It is also the point that separates ownership between the utility company and the building owner.  The electrical demarc in a residential home is commonly the electric meter.  The power company will handle everything up to and including the meter.  The home owner handles everything from the meter to the power outlets.

A telephone demarc is located at the telephone network interface.  The network demarc is located at the network interface device (aka smartjack).  These can be located anywhere in a building, but I’ve found that most wireline utilities come in together.  These can be copper wire, fiber optic or some other type of cable.

The demarc is the head of the network for that site.  In a tree topology, this is where the site’s primary router would be located.  A router is a network device that moves data packets between two different networks.  Here, the router is directing the packets, only passing those that need to travel on the other network.  It is ideal for separating two networks to reduce congestion by keeping local data within the local network.  A primary router, sometimes called a site’s core router, is the one that controls the other routers and is mission-critical for the site to be connected.

Routers are the major components that give a network flexibility.  Professional (non-consumer) grade routers allow for the installation of modules, both physical and logical.  These modules connect the router to different devices.  Modules commonly allow a router to connect to a wireline (T1, T3, etc.) circuit, a wireless (wifi, cell) circuit, or a different cabling (twisted pair, coax).  They can also be used to connect a router to a phone system, radio system, video system and so on.

Other network devices used to spread network segments out from the router include switches and hubs.  Switches can have different interfaces and be used to connect different network types.  This is handy in older buildings, where you may need to use an existing style of network and overlay it with a different type of cabling or connections.  Hubs are simple, non-intelligent splitters that just provide more ports.

The Warriors of the Net video provides an entertaining explanation of the different components.  Again, from a manager’s perspective, you do not need to get very technical with the network components.

Autonomous systems and robotics

An autonomous system is a system with a trigger that causes an action without the involvement of a person.  Two well known examples are tsunami buoys and earthquake sensors.  These continuously monitor a series of sensors.  When the sensors register a reading that exceeds a predefined threshold (the trigger), a signal is sent to a computer to generate warning alerts (the action).  These systems can range from the very simple to the extremely complex.
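A minimal trigger-and-action loop, modeled loosely on the buoy example, might look like this in Python.  The 2.5-meter threshold and sensor names are made up for illustration:

```python
THRESHOLD_METERS = 2.5  # illustrative trigger threshold, not a real standard

def check(readings):
    """Return alert messages for any reading that exceeds the trigger."""
    alerts = []
    for sensor_id, height in readings:
        if height > THRESHOLD_METERS:  # the trigger
            # the action: generate a warning with no person involved
            alerts.append(f"ALERT buoy {sensor_id}: wave height {height} m")
    return alerts

readings = [("A1", 0.4), ("A2", 3.1), ("A3", 0.5)]
for alert in check(readings):
    print(alert)  # ALERT buoy A2: wave height 3.1 m
```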

One could argue that the backup sensors on cars are an autonomous system: the trigger is the driver shifting into reverse; the computer turns on the backup lights, activates the camera, turns on the internal screen, and activates the sensors to start beeping.  I think it is a bit of a stretch, but it does provide a simple example.

The other side of this spectrum is the Mars rovers.  Due to the time delay in communications between Earth and Mars, a rover cannot be directly controlled.  NASA gives a general command for the rover to head to a new destination.  The rover acts independently to drive over the terrain and makes decisions to avoid obstacles.  On board the rover is the Autonomous Exploration for Gathering Increased Science (AEGIS) software, part of the Onboard Autonomous Science Investigation System (OASIS), which automatically analyzes and prioritizes science targets.  This allows more work to be done without human intervention.

Somewhere between the two is the Roomba.  This autonomous vacuum moves around the house cleaning up.  It will learn the layout of a room to be more effective in future runs.  When complete, it docks to recharge.  At a set time interval, it does this again.  Now don’t laugh about the Roomba; it is a very commonly hacked robot for people interested in DIY robotics.  Microsoft Robotics Developer Studio software has built in modules specific to controlling the Roomba.  That gives mobility and a way to control it.  Microsoft has released additional modules for the Robotics software which adapts the Kinect sensor to sense the environment.  This addition provides vastly better sensors over the traditional range-finder and light sensors.  Microsoft isn’t the only player in this market; Lego Mindstorms markets robotics as a toy for children ages eight and up.  Robotics isn’t just for engineering students in high-end tech universities.

There is enough technology in existence today to make huge leaps in the use of robotics.  The main challenge of robotics is the acceptance by the general public.

Watch the videos about the DARPA autonomous car, Urban Search and Rescue robots, the BigDog robotic transport and Eythor Beder’s demonstration of human exoskeletons.  Combine these and we can envision some major transformations.

Take the computational guts from the autonomous car and put them into a fire engine.  Now the fire engine can be dispatched and navigate to a fire scene independently.  Once on-scene, sensors can detect the heat from the fires – even if they are in the walls and not visible to the human eye.  Robots from the fire engine can be sent into the structure.  Heavier duty robots can pull hoses and start to extinguish the fire.  Other robots can perform a search on the house to assist survivors out, and rescue those unable to escape.  Communication systems will link together all the sensors on the robots to generate a complete common operating picture that all robots use in decision making.

A similar thing can be done with an ambulance.  Focus just on the stretcher.  Imagine if the stretcher would automatically exit the ambulance and follow the medic with all the necessary equipment.  The stretcher could be equipped to go up and down steep stairs, make tight turns and remain stable.  Automating the lifting and bending done by EMS workers handling patients on stretchers would reduce the number of back injuries caused by improper lifting.  This would keep more EMS workers in better health, which reduces sick leave and workers’ compensation claims.

Robotics in search and rescue could be the one thing that saves the greatest number of lives, among both victims and rescuers.  A building collapses and the SAR team responds.  On the scene, the workers set up specialized radio antennas at different points around the building site.  They release dozens, if not hundreds, of spider- or snake-like robots.  Each robot has the ability to autonomously move through the rubble.  They are light enough not to disturb the rubble as a human would.  They are numerous enough to cover the building much more quickly than a human could.  The combined sensor data of their locations and scans would quickly build a three-dimensional model of the rubble.  They are aware of each other’s locations, so they don’t bunch up in one area or miss another.  Heat, sound and motion sensors could detect people.  Once this initial scan is done, the SAR team will know where the survivors are and can communicate with them through the robots.  The team will evaluate the 3D model for the best entry paths to reach and rescue the survivors.  If the situation is unstable, larger robots can be used ahead of the team to provide additional structural support and reduce the risk of collapse.  If a robot can’t communicate with the antennas outside, the robots can form a mesh network to pass information along.

Social Media & Engagement

Social media reflects the move away from large media companies monopolizing all forms of media.  Newspapers, books, magazines, television and radio are controlled by large companies.  Technology advancements lowered the barriers to getting your opinions out.  Technology allowed individuals to create websites and blogs to promote their own thoughts.  The internet allowed people in different locations to find this information and band together.  A key term used to define social media is “user-generated content”: an individual has the ability to create and distribute their own materials.  The very nature of individuals sharing information with individuals made it social.  Hence the tag social media.

Social media is the commonly accepted term, but I think it goes beyond sharing user-generated content.  Traditional media is a broadcast medium that reaches a lot of people, but it is primarily a one-way medium that isn’t inviting to other opinions.  Dissenting views are policed by the media gatekeepers, who choose what will and will not be broadcast.  Social media is similar to broadcasting in this respect: the person publishing the materials retains control of the information being published.  They can block unwanted information and be a gatekeeper to their own media channel.

Social media is shifting to social engagement.  It is no longer enough to be able to publish and distribute your own opinions.  Success in media involves the engagement of the people who want your materials.  True communication requires a feedback loop; it is transactional communication that allows a full back and forth.  Technology has opened new methods of engaging with your target audience, and you need to be open and willing to hear what they say.  The best successes in social media today are actually examples of social engagement, where an individual (or organization) uses social media channels to have individual engagements with the audience.  These engagements may still originate as a broadcast message, but with the desire that each individual receives and acts on it as if it was meant for them.

Advertising theory isn’t that far from this.  Advertising in mass media has always worked to target its message to reach a specific audience and influence behavior.  Advertisers have always wanted very specific details on how to segment, divide and categorize the public.  Social media has done a wonderful job of this.  Imagine if ten years ago I had said that I want you to pay for a device in your home, pay for an internet connection, and go to a website.  Once at the website, I instructed you to enter all your personal information, link up with all your friends, and update your daily activities.  You’d think I was crazy.  Today, people do it all the time on social media sites.

As of January 2012, Facebook is valued as a $50 BILLION company.  Some estimates place it as high as $100 BILLION.  Why?  They don’t make anything.  They don’t sell anything.  The code behind their site isn’t that valuable.  Or is it?  As was pointed out to me recently: if you can’t figure out what someone is selling, they’re probably selling you.  When you joined Facebook, you entered your information (even if only partially).  You’ve linked to your friends and you update your status often.  An analysis of your online friends and their information can be very telling about you.  It is natural that you are friends with people of a similar social, economic and political stance.  This information allows Facebook to generate extremely targeted advertising, which is very appealing to advertisers.  They recognize that advertising is not a single-shot silver bullet; instead, it is measured over time and impressions.  An impression occurs each time their message is put in front of you.  Facebook is valued so highly because of the large number of people who use the service, the wonderfully large amount of data, and the high confidence in that data since it is self-entered.

Here’s another tidbit about Facebook.  They’re promoting “seamless browsing”.  This is a single sign-on that you can use to access Facebook and many partner websites.  What it actually does is allow the transfer of your actions back to a single database to improve the ability to target advertising.  The Facebook cookie left on your computer was discovered to be live even after logging out, which allowed the tracking of your actions off Facebook.

But all this social media isn’t evil.  It is an exchange: you are getting a service that you value in exchange for advertising being pushed into your web browsing experience.  This advertising is just an extension of what has been going on with TV.  You’ve accepted advertising on TV as a way for the stations to generate revenue so you don’t need to pay for the TV shows you watch.

Social media sites are being used by everyone from major corporations to individuals.  This has certainly changed the media landscape from when there was one local newspaper and three TV stations.  The “noise” of all the available social media has grown so much that it becomes critically important to find the information you want and disregard the rest.  It is like everyone has come to a single place and they’re all talking at once, some loudly and others softly.  Your role is to find the people and thought leaders that have the information you want.  This is where your friends come in handy.  Your connections on social media are curating content and posting it to their streams.  You post both original and shared content to your stream, where they read it.  Recommendations from people you trust are more valued in this environment as you seek content.

Is there any hope for an Emergency Manager to establish themselves in the realm of social media and reach their target audience in competition with the highly-funded advertising machines?  Absolutely, but it takes time.  People want content that is timely, relevant, helpful and available in their preferred medium.  That may sound very similar to successful public notifications; well, it is.

The first step in social media: listening.  You wouldn’t just walk up to a group of people at a party and start talking about something.  You are more likely to walk up, listen to the conversation, and participate after you’re comfortable with the people and the topic.  They’ll also be more receptive to you because you’ve shown the respect not to interrupt them.

Start by looking for people and conversations like the ones you want to have.  How are they being conducted?  Are there nuances in language and wording unique to that medium?  Seek out others who appear successful and ask for their insights.  Professional networks for emergency managers are having more discussions on the use of social media; engage them for help.

An Emergency Manager needs to take an initial guess at what information they want to and can provide, then pick a medium to start in.  Consider where the audience is and what can be built on when selecting a medium.  If you’re in a county that has a good Facebook presence, then starting there could allow you to build off the success of the whole county.  Cross-pollination of ideas and shared content will encourage followers of one to follow both.  Pushing existing public notifications through the social media channel can be a useful way to start building followers.  This can be as simple as weather warnings and traffic alerts.

A facet of success is being flexible when starting.  Listen to feedback from the followers, and adjust tack to keep up with the needs of the audience.  The Los Angeles Fire Department made the decision to have two Twitter streams.  One was the primary notification stream for people to follow, @LAFD.  They learned that their audience didn’t want to be overwhelmed with messages that weren’t relevant, so they started a second stream, @LAFDtalk, where they engage individuals in conversation.

Be prepared to be active on social media during an emergency.  One of the obvious mistakes is a social media presence going silent during a crisis.  This was very obvious when the Penn State crisis broke in November 2011.  A good analysis is posted at Social Media Today.  Instead of actively engaging the audience through social media, the university went silent on the major story but kept pushing soft general-interest bits.

Develop a social media handbook for your work.  Establishing the foundation and purpose of social engagement will be critical when it comes time to justify (defend?) the time spent on social media.  It helps to bring colleagues onboard with a common set of expectations.  When establishing goals, avoid measuring success by the number of followers or friends.  Measuring impact and trust by counting followers is like measuring intelligence by a person’s height; it is simply not the same thing.

The American Red Cross has openly posted their social media handbook for chapters.  The guidelines boil down to a few simple statements:  Tell them who you are.  Be factual.  Be honest.  Be timely.  Stick to what you know.  Represent the brand well.


Additional resources


Emergency resource identification and planning

An emergency manager needs to identify the resources that will be used in the community when an incident exceeds the daily norm.  There are many places to find resources that may be used in a disaster, but during a disaster is not the time to be seeking these resources and sharing business cards for the first time.

This does assume that the fire department, law enforcement, and emergency medical system are properly staffed to handle the majority of daily emergencies that occur in a community.  If there are not enough resources to handle the day-to-day events, then there is a much bigger problem for the community.

There are attempts to capture information about the nation’s critical infrastructure in open systems so the data can be shared with authorized users, yet secured so it doesn’t reveal too much information to those without a need to know.  These systems can also feed real-time operational data in a way that shares a common operational picture of what is occurring.  One of these efforts is Virtual USA.  Another effort to share information in a less formal way is the First Responder Communities of Practice.

A commonly shared statement is that private industry owns most of the resources and critical infrastructure that can be used or damaged during a disaster.  I believe it is a mistake to always tap private industry with the expectation that they provide their products and services free because it is a disaster.  Private industry needs to pay for their resources, and their business model may not include giving away free stuff.  How important is a service during a disaster if someone says “well, if we can get the resource free then we’ll do it, but otherwise no”?  Only doing something because it is free shows that it isn’t as important as something you are willing to pay for.

The resources available in a community will vary greatly with the type of community.  An urban community will have less wildland firefighting gear than a rural counterpart, but is likely to have taller ladder trucks and be better equipped for high-rise rescues.  Examples of how community need drives first responder resources go on.  Fuel pipelines and storage tanks push the need for foam pumpers, which are becoming more common outside of industrial plants.  Large off-road areas push for special law enforcement vehicles to patrol those areas.  Large elderly and special needs populations push for more, or differently equipped, ambulances.

A common mistake is to look to FEMA for all needed disaster response support.  While FEMA does own some assets, FEMA functions as a mechanism to gain access to assets located in other parts of Federal and state government.  FEMA uses mission assignments to request support from other agencies, with an IOU that FEMA will reimburse the costs associated with the mission assignment.  Any government agency can bring their assets to support a disaster relief operation, but only through a mission assignment will FEMA pay for them to be there.  As an emergency manager, you need to look out for and identify the resources that may exist in your community.

A military installation in the community may become a valuable partner in disaster response because it increases the goodwill between the community and the base plus allows the base a way to support their members who live and work off the base.  On the flip side, military resources may not be available if the commander (or higher) determines that dedicating the resources to the response will weaken their level of readiness too much.

Look around your community.  Hopefully you will start to notice a relationship between the assets of the first responders (and their support teams) and the equipment, training and resources available for use during events.  The Urban Areas Security Initiative (UASI) grant program may have allowed your community to purchase equipment for major incident response.

An emergency manager should be engaging with the community before a disaster to get to know the people and organizations in the area.  The entry into the private sector may be through the local Chamber of Commerce, or even the business directory of the Better Business Bureau.  Providing information to help the community’s residents and businesses be prepared in advance of a disaster will also help the emergency manager make contacts that may be used during the disaster.

There are public-private partnerships that successfully exist in technology.  The most common one that emergency responders may see is the National Communications System (NCS), which arose from the telecommunications industry and Federal government working together.  Through NCS, emergency responders can get priority access to landline and cellular phone systems, priority restoration of telecommunications systems, and shared access to HF radio frequencies.  The participating companies are listed on the NCS website.

When a local or state response agency has significant communications problems, FEMA can be approved to provide assets to support them.  One of FEMA’s internal assets is the Mobile Emergency Response Support (MERS) capability.  Their role is to provide communication support to the agencies responding to a disaster, not to the general public.

This is a really good time to bring up an important point: when requesting assistance from someone, be very clear about who is being directly helped and served.  When engaging a resource, provide them as much information as possible so they can plan ahead for the situation.


Additional resources

EOC Technologies

An Emergency Operations Center (EOC) is central to coordinating the resources for a response to an emergency.  While the term can be used loosely throughout government, public, NGO and private organizations and modified to fit their specific needs, there is a NIMS definition.  From the National Incident Management System, December 2008:

“Often, agencies within a political jurisdiction will establish coordination, communications, control, logistics, etc., at the department level for conducting overall management of their assigned resources. Governmental departments (or agencies, bureaus, etc.) or private organizations may also have operations centers (referred to here as Department Operations Centers, or DOCs) that serve as the interface between the ongoing operations of that organization and the emergency operations it is supporting. The DOC may directly support the incident and receive information relative to its operations. In most cases, DOCs are physically represented in a combined agency EOC by authorized agent(s) for the department or agency.”

An EOC for a locality brings together the numerous agencies into one spot for communications and coordination.  While it includes the fire department and law enforcement agencies, it also includes public works, health, utilities, transportation, volunteer organizations active in disaster (VOAD) and representatives from different private sectors.  Each of these entities brings valuable resources to an emergency response.  The EOC provides a physical location equipped with the tools and people necessary to manage the external resources for an incident.

Incident Command System (ICS) uses the EOC as the point of contact for any needs outside what is assigned to the operation.  An EOC may serve in this role for multiple incidents simultaneously.  The facility that houses the EOC may also be home to other centers used day-to-day or only during large operations, such as the Emergency Communications Center, Joint Information Center (JIC), Joint Operations Center (JOC), Multi Agency Coordination Center (MACC) and so on.

The Building

Given the core nature of the EOC to both daily first-responder work and crisis / disaster response, the building itself must be designed to function even in the worst circumstances.  Multiple redundancies are needed to cope with failures of the primary systems.

Electricity is usually the first redundancy people think of, and there are many options.  Positioning the building between two major power grids will mitigate the failure of one grid (or substation).  This limits the physical location of the building, since finding a site that can be served by two or more power grids may be challenging.

Redundant power through an on-site generator is a very common method for backup power.  A power transfer switch installed in the building will automatically switch from grid power to generator power and back.  The transfer switch isolates the generator from the transmission line, which prevents a very dangerous situation called backfeeding.  Backfeeding is when electrical current flows backward through the lines.  An electrical line worker may think the power is off on the line being worked on, but a facility creating a backfeed will energize the lines, creating a fatal situation.  The transformers that normally step the power down to lower voltages work in the opposite direction in backfeeding situations and step the power up to a high voltage.  I’ve experienced a power transfer switch failure that took three days to repair.  Meanwhile, the building could not operate on either generator or grid power.  Look for all bottlenecks and single points of failure in the system design.
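The interlock a transfer switch enforces can be sketched in software terms.  This is a conceptual model only, not real switchgear firmware; it illustrates the break-before-make behavior that keeps the generator isolated from the grid lines so a backfeed cannot occur:

```python
class TransferSwitch:
    """Conceptual break-before-make interlock: the load connects to the
    grid OR the generator, never both, so generator output can never
    backfeed the transmission lines."""

    GRID, GENERATOR, OPEN = "grid", "generator", "open"

    def __init__(self):
        self.position = self.GRID  # normal operation: load on grid power

    def _move(self, target):
        self.position = self.OPEN  # break the current connection first...
        self.position = target     # ...then make the new one

    def on_grid_failure(self):
        self._move(self.GENERATOR)

    def on_grid_restored(self):
        self._move(self.GRID)

switch = TransferSwitch()
switch.on_grid_failure()
print(switch.position)  # generator
```

Note the single-position design: because there is one switch position rather than two independent breakers, the dangerous "both connected" state simply cannot be represented.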

On-site generators can have multiple fuel sources.  Diesel engines are traditionally viewed as better than gasoline engines for running at constant speeds for long periods.  Propane is another fuel source.  The downside is the need to store a sufficient quantity of fuel on-site to power the generator.

Natural gas generators run off the community’s natural gas system.  Natural gas systems have high reliability because turning the system off would require trained professionals to turn it back on at every juncture.  Earthquake and flood catastrophe planning expects these systems to be overwhelmed, but no worse than any other utility.

Alternative energy, such as wind and solar, is a way to offset a portion of the energy usage required by the facility.  It is rare that these can meet the peak demand on the site for an extended period of time.

Water and sewer systems may not be required to run the technical systems in the EOC, but are certainly required to operate the kitchens, sinks and toilets used by the people in the EOC.  Drinking water may be stored at the site as a backup.  Contracts may be put in place to arrange for portable toilets.  Keep in mind that commercial kitchens require running water by local health code, if the EOC is so equipped.

Telecommunications — last but definitely not least.  An EOC should be a hub and contain at least one of every communication type used in the community and by the agencies represented in the EOC.  This can include terrestrial, cellular, two-way radio, broadcast radio, and satellite technologies.  It is important to be able to communicate with individual units and other sites, as well as other counties, the state EOC and Federal points.  The main land-line phone numbers at the location need to be re-routable remotely should the site go dark.  Only being able to forward a phone when you are at the site doesn’t help when the site is a smoldering hole in the ground.  This is especially true of the public safety answering point (PSAP) phone number where 9-1-1 calls are routed.  Ensure that you have the direct line to the office in the telephone company that can reroute a phone number somewhere else.  And test this.  Often.  You’ll also need verification information such as the account number, circuit number, authorization codes and so on.  Have those written into your procedures.  Yes, you need written procedures so any authorized person can pick up and make things happen.

In short, the building should be ready to operate as an island unto itself in the worst situations.  Once this structure with all the redundancies is completed, it needs to be replicated.  Ultimately, even the EOC could be in the path of destruction and require a backup.

A Dark Room

As a side note, when an EOC goes active and will be staffed around the clock for many days, consider where workers can take a break.  While I was at the Pentagon for the response to the September 11, 2001 attacks, a tent was set up with cots and not much light.  This was a quiet place away from the noise and sights where individuals could go to disengage for a few moments.  It was frequently used during the first week when many people were working long shifts.  This is also helpful for permanent facilities.  During the Hurricane Katrina response, a few cots were set up in an out-of-the-way conference room that allowed staff to take a moment for themselves too.  Keep the human in humanitarian response.

Pretty Sparkly Things

Technology must facilitate decision making or else the value is limited.  Technology that slows down decision making hinders response.  All this occurs at the user interface.  The user interface is what the workers see.  Everything else is like the wizard behind the curtain — most people won’t even know it is there.

Consider for a moment how many EOCs (and similar facilities) have a large radar or satellite image of a hurricane, yet don’t employ a weather forecaster who can interpret the image.  There is a certain amount of technology in an EOC that lends to the visual appeal, otherwise known as the theater of disaster.  Admit and accept the fact that some technology will be installed in an EOC to make it more appealing for tours and media.  One aspect of an EOC is to be a known location for media conferences, which helps tell the story of how the agencies are helping the people in the community.  The workers in the EOC know who employs them; the big sign on the wall with the location name and graphic is for the visitors.

Every permanent site should have a rank-ordered list of all technology being used, with the most critical items listed first.  Get this list approved by all stakeholders.  When system failures occur, this list is the order in which items are restored at the site or brought up at the alternate site.  Having this determined ahead of time will mitigate some of the ways that you and your team will be pulled during system failures.  It provides a common planning sequence which allows independent operation when a leader is not there to give the orders.  Remember, technology doesn’t fail users; technology fails to meet the users’ expectations.  Set the expectations correctly and you’re more likely to be considered a success even when the technology crashes.
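Such a list can be kept as data so the restoration order is deterministic even when no leader is present to give orders.  A minimal sketch in Python, with hypothetical system names standing in for a real stakeholder-approved list:

```python
# Hypothetical stakeholder-approved restoration priority list.
# When systems fail, restore (or bring up at the alternate site) in this order.
RESTORATION_PRIORITY = [
    "911 call taking / dispatch",      # life safety first
    "two-way radio system",
    "telephone system",
    "incident management software",
    "GIS / mapping displays",
    "video wall and media displays",   # the "theater of disaster" comes last
]

def restoration_order(failed_systems):
    """Return the failed systems sorted by the approved priority list."""
    return sorted(failed_systems, key=RESTORATION_PRIORITY.index)

print(restoration_order(["GIS / mapping displays", "two-way radio system"]))
# radio is restored before GIS, no matter who reports the failures
```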

IS-775, EOC Management and Operations

Now is a good time to mention that FEMA’s Emergency Management Institute has a self-study course on EOC Management and Operations: IS-775.  This course gets into more detail about the role, design and function of an EOC, and can be found on EMI’s website.

The Basics

At the core, the EOC needs to be equipped with systems that allow C3: Command, Control and Communications.

As mentioned earlier, the technology must assist in decision making and increase the effectiveness of a disaster response.  Therefore, the systems must be easy and efficient to use.  Success in this area is obvious because the system will be used.  Failure in this area leads users to create work-arounds, do after-the-fact data entry, or simply not use the system at all.  A good user interface is critical for data to be accurately input, and for the right information to be returned at the right time.

The system should be able to identify trends in real-time that lead to failures, and provide notifications with recommendations so timely action can be taken to mitigate the failure.  This could be anywhere from a critical failure of life-safety services to the “minor” issue of accurate records.  Automation of data collection can create robust data sets that can be used for analysis and ease the burden on the workers.  These systems become the “system of record” that will be used during audits and legal proceedings so ensure the information is captured consistently and accurately, and then backed up using current best practices.  It must produce the documentation that may be requested after a disaster to assist with any sort of claims, reimbursement, and research.

Systems kept behind glass with “break in case of emergency” writing will fail when used.  Odds are good that the users will not be familiar with the systems or know when to shift from the daily systems to the crisis systems.  Require the system to be scalable so it can be used on routine daily events and grow to the massive response.  The system will need to track multiple incidents at the same time using resources from the same resource pool.  A process that will allow the same unit to be dispatched to different incidents at the same time creates confusion and therefore increases risk.
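The double-dispatch problem above can be illustrated with a minimal sketch of a shared resource pool that tracks multiple incidents and refuses to assign a unit already committed elsewhere.  The unit and incident names are hypothetical:

```python
class ResourcePool:
    """Tracks multiple incidents drawing from one shared resource pool
    and blocks dispatching the same unit to two incidents at once."""

    def __init__(self, units):
        self.assignments = {unit: None for unit in units}  # unit -> incident

    def dispatch(self, unit, incident):
        current = self.assignments.get(unit)
        if current is not None:
            raise ValueError(f"{unit} is already assigned to {current}")
        self.assignments[unit] = incident

    def release(self, unit):
        self.assignments[unit] = None

pool = ResourcePool(["Engine 1", "Medic 3"])
pool.dispatch("Engine 1", "structure fire")
try:
    pool.dispatch("Engine 1", "vehicle crash")  # blocked: avoids the confusion
except ValueError as err:
    print(err)
```

The point is not the specific data structure but the invariant: the system, not the dispatcher's memory, enforces that one unit holds one assignment at a time.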

Sharing the data will be important when other governmental units get involved.  For instance, allowing data to pass to the financial systems will make it easier to capture costs, categorize them accurately, adjust budgets and handle overtime pay.  Passing data to the maintenance section will highlight which resources have been used (abused) more than expected recently, which could shorten the time to overhaul and inspection.

The data captured should be geotagged to allow analysis and viewing in a GIS system, and to integrate with predictive models.  Example: capture the real-time location of all resources (facilities, people and equipment) using GPS or similar technology, then include that layer in all hazard-modeling overlays.  It will instantly show which units may no longer be in a safe zone and need to be relocated.
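A rough sketch of that safe-zone check, assuming a simple circular hazard zone around a point source (a real model would use the actual plume or flood polygon) and illustrative coordinates:

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance using the haversine formula."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def units_in_hazard(units, hazard_lat, hazard_lon, radius_km):
    """Flag geotagged units that fall inside the modeled hazard radius."""
    return [name for name, (lat, lon) in units.items()
            if distance_km(lat, lon, hazard_lat, hazard_lon) <= radius_km]

# Hypothetical unit positions and a 5 km hazard radius around a release point.
units = {"Engine 1": (29.95, -90.07), "Shelter A": (30.45, -91.15)}
print(units_in_hazard(units, 29.95, -90.06, 5.0))  # ['Engine 1']
```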

Make it mobile

Now that all the data is at the EOC — or the EOC is the hub for a community-wide network of information gatherers — it is time to push the data back to all the users.  How will this information be passed back to all the departments, units and individuals that need it?  Who needs what information?  There is a good chance that a lot of data will be transmitted wirelessly to vehicles with onboard computers.

Will it be a push model that data will be automatically pushed to the user’s attention, or will it be a pull model where the user needs to request the information?  The system will be both a push and pull in reality.  Life-safety and other critical data must be pushed out to the users as soon as it is known.  Other data may use a pull format so the user isn’t constantly streamed information when they are not ready to receive it.

A third way to handle data is with triggers.  When a particular action occurs then push the data.  For instance, when a unit marks itself as at the end of a shift, the system may produce a list of reports that need to be completed.  In another situation, when a unit is assigned to a particular type of response, a safety reminder specific to that response may be generated.
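Trigger logic like this can be sketched as a small event-to-action table; the event names and messages here are hypothetical stand-ins for whatever a real system defines:

```python
# Map each trigger event to a function producing the data to push.
TRIGGERS = {
    "end_of_shift": lambda unit: [
        f"{unit}: complete daily activity report",
        f"{unit}: complete exposure log",
    ],
    "hazmat_assignment": lambda unit: [
        f"{unit}: review SCBA safety reminder",
    ],
}

def on_event(event, unit):
    """Return the messages to push for this event; unknown events push nothing."""
    handler = TRIGGERS.get(event)
    return handler(unit) if handler else []

print(on_event("end_of_shift", "Medic 3"))
```

The table can grow without touching the dispatch logic, which is the practical appeal of trigger-driven push over hard-coded notification rules.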

Make it virtual

The final step is to make it virtual.  If all the systems at the EOC are technologically based then it should be just a matter of programming to make the systems operate in a virtual environment.  With the right tools and resources, the EOC can go virtual and mobile in a command vehicle.  The Chicago Office of Emergency Management built a Unified Command Vehicle.  The “doomsday” scenario included this vehicle taking over the EOC responsibilities.  More often, this vehicle is sent to large events to bring some of the EOC capacity to the site and provide redundant communications to relieve the burden from the operators at the EOC.

Make it upgradable

When designing and selecting an EOC system, design an upgrade path.  Interconnected systems should not overly limit future expansion of the system.  A few years ago, a T1 circuit was the gold standard for business connectivity.  That same circuit today barely covers the needs of a single small unit.  FCC moves to narrowbanding or shifting radio frequencies should not require an entirely new system.  Portals in the system should be compatible with various implementations of the national broadband plan and with social media integration.

The rate of technical change today is rapid.  Purchasing a proprietary system with a single upgrade path dependent on a single company’s ability to keep pace is probably not the best choice.

GIS: Applications for emergency, crisis and risk management

Geographical information is often shared between organizations through ESRI shapefiles.  A shapefile is a data interoperability standard developed by ESRI, the top dog in the GIS community.  Many geographical applications can create shapefiles, so the format isn’t limited to ESRI-approved software.  Another common file format is Keyhole Markup Language (KML).  This standard is associated with Google Earth but is becoming more widely used.  The National Hurricane Center provides their data in multiple formats on their website.
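Because KML is plain XML, simple point data can be generated without any GIS software at all.  A minimal sketch using only Python's standard library; the placemark name and coordinates are illustrative:

```python
from xml.etree import ElementTree as ET

def placemark_kml(name, lat, lon):
    """Build a minimal KML document containing one point placemark."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    # KML orders coordinates as longitude,latitude[,altitude]
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

print(placemark_kml("EOC", 38.8895, -77.0353))
```

Note the longitude-first coordinate order, a common stumbling point when moving data between KML and formats that list latitude first.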

A virtual globe is a geographic data model that adds information such as elevation and the Earth’s curvature to give the impression of a 3D globe on a 2D screen.  There are over 30 different virtual globes, and the list continues to grow.  Each one will have different features, such as zoom, tilt, rotate, provided overlays, custom overlays, queries, and analysis.

Google Earth is an example of a virtual globe.  Most of the data resides on remote servers.  Images are streamed over the internet to the client, which assembles the model.  The graphical information is limited to just what is shown on the screen.  As the user zooms in, the broad low-resolution images are replaced with smaller, higher-resolution images.  Blurry images that get progressively sharper are evidence of this process.

Geographical data forms the basis for geographical models of damage.  Using accurate geographical data makes a large difference in modeling.  On the large scale, it is how mountain ranges impact weather.  On the urban scale, it is how air moves between building forms and speeds up or slows down the dispersion of airborne particles.  Without the details of these structures, models would be less science and more guesswork.

HAZUS-MH analyzes potential losses from floods, hurricane winds and earthquakes. Estimates of hazard-related damage are produced before, or after, a disaster occurs.  HAZUS can estimate losses in terms of physical damage, economics, and population.

Potential loss estimates analyzed in HAZUS-MH include:

  • Physical damage to residential and commercial buildings, schools, critical facilities, and infrastructure;
  • Economic loss, including lost jobs, business interruptions, repair and reconstruction costs; and
  • Social impacts, including estimates of shelter requirements, displaced households, and population exposed to scenario floods, earthquakes and hurricanes.

CAMEO is a collection of applications created by EPA’s Office of Emergency Management (OEM) and the National Oceanic and Atmospheric Administration (NOAA) Office of Response and Restoration.  The primary purpose is to plan for and respond to chemical emergencies.  The CAMEO system integrates a chemical database and a method to manage the data, an air dispersion model, and a mapping capability.

The Consequences Assessment Toolset (CATS) was developed by the Defense Threat Reduction Agency (DTRA).  It is available free to response organizations.  The suite of tools can be used during the entire lifecycle of a disaster to help create planning scenarios, analyze information during the response to help with decision making, and gather data after the response for after-action reporting and lessons learned.

Here is an important tangent.  Before you use a tool or model, it is important to know who designed it and for what purpose.  Adapted models, or a model’s secondary use, need to be handled carefully.  Even with the primary use of a model, check the assumptions; they may have changed since the tool was created.  Slight changes in assumptions or input can have significant impacts when the output scales exponentially with the input.
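A worked example of that sensitivity: the energy released by an earthquake grows roughly as 10 raised to 1.5 times the magnitude (the Gutenberg-Richter energy relation), so a 0.2-magnitude difference in an input assumption roughly doubles the modeled energy release:

```python
def relative_energy(magnitude):
    """Relative earthquake energy per the Gutenberg-Richter energy relation."""
    return 10 ** (1.5 * magnitude)

# A small shift in the input assumption (magnitude 7.0 vs 7.2)...
ratio = relative_energy(7.2) / relative_energy(7.0)
print(round(ratio, 2))  # ...roughly doubles the modeled energy: 2.0
```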

This PowerPoint provides some examples of how GIS information can help managers understand the risks of a current or future incident.  GIS_Applications slide deck.

GIS: Got vector, Victor? No, Raster, Roger.

Graphical data comes in two forms: vector and raster.  Vector data is composed of mathematical points, lines and polygons.  Since the data is mathematical, it can be scaled to any level without a loss of quality.

Raster data, also known as bitmap data, is composed of different color squares next to each other.  In its simplest form, take a sheet of grid paper and color in the squares; that is a bitmap image.  Digital cameras take bitmap images.  When you zoom into the image enough, you will start to see the squares.

Satellite and aerial images are bitmap images because they are digital photographs.  A human can look at the image and see the object made by the pixels.  A computer has a harder time since it looks at the pixels individually.  The digital photograph must carry meta-data so the computer can understand the basics of the image.

Vector images can be converted to raster images.  Just tell the computer how many pixels, and it will handle the rest.  Raster images cannot be easily converted into vector images.  A computer can draw contour lines at differences in hues, lightness and other color characteristics, but otherwise can’t do much.
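The vector-to-raster direction can be sketched with Bresenham's classic line-rasterization algorithm, which is how a computer "handles the rest": it decides exactly which pixels a mathematical line segment covers on a grid of a given size.

```python
def rasterize_line(x0, y0, x1, y1):
    """Return the grid cells (pixels) covered by a line segment,
    using Bresenham's integer line algorithm (works in all octants)."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels

print(rasterize_line(0, 0, 4, 2))
# [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

Going the other way, from those five pixels back to the exact original line, is the hard direction the text describes: many different lines rasterize to the same pixels, so information is lost.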

The difference between raster (or bitmap) and vector images.
Another comparison of raster versus vector.


When the Haiti earthquake struck, there was no detailed road map of the country.  This hampered relief efforts because international responders had no way to know what was where.  A crowd-sourced project based on OpenStreetMap surfaced.  Volunteers around the globe looked at before and after satellite images of Haiti and drew vector data from the raster data.  They were able to mark the roads, then add meta-data for each road’s condition, name and other features of importance.  The work was coordinated online and downloaded directly to responders’ GPS units, which used the data to navigate the responders.  The map continued to be enhanced with building names and layouts from before the earthquake.  Now an accurate pre- and post-disaster map of Haiti exists.

It is better to have accurately captured vector data loaded with meta-data.  However, much of the existing source data is raster information, so conversions are occurring.  Converting data either way between raster and vector creates a margin for error and inaccuracies.

It is a good idea to check the data you are using against the source data.  A common example is when a road’s vector doesn’t line up exactly with the raster satellite image of the area.  Raw satellite imagery doesn’t include an overlay of geographical coordinates; either the system or a person must anchor the image to a geographical location.  While the center may be exact, the edges can be slightly off due to the angle of the satellite to the surface plane.  This means we can’t be certain whether the variation is caused by an error in placing the satellite image or an error in the location of the vector.  These errors are often small and most people won’t notice them … BUT it depends on the level of accuracy needed in the map.


GIS continues: Applications and tools in disaster, crisis and risk management.