A little more on usability

About two years ago, I wrote a blog post about usability. This video adds to that, including my thoughts on BYOD (Bring Your Own Device) and its impact on disaster technology. Regardless of how the future rolls out, advances in technology should not make things more complex for users. In fact, the additional computing power should be used to make work easier for them.

Why free stuff isn’t free

I have heard too many times from people in disaster response: “If we can just get the product donated, then we can do…”  If a person or organization is willing to do a program only if it is all provided for free, they are simply stating that the program is not important enough to budget for.  That attitude minimizes the value of the program and makes me wonder if it was important enough in the first place.  They miss the point of in-kind donations.  An in-kind donation is when someone gives you something at no financial cost.  But don’t think it is free.  Free stuff is never free.  Everything in a supply-and-demand economy has a cost.  There are financial, time, and resource costs associated with everything.

Let’s look at a fictitious non-profit group, Acme.  Acme has a mission to bring internet access to disaster survivors.  One of the tools they use is a widget, and hundreds of widgets are used each year.  Each widget costs $100 and is produced by Ajax.  There are two ways to get widgets: Acme can buy the widgets from Ajax using donated money, or Acme can ask Ajax to donate them.  Procuring a widget meets Acme’s need regardless of how the widget is procured.

Acme’s fundraisers are tasked with raising the funds necessary to cover the organization’s annual budget.  As money is brought into the organization, it is applied to the annual budget.  The money goes to offset the general (or core) expenses including facilities, salaries, program maintenance, daily operations … and the purchase of Ajax widgets.  In general, donors like to see where their money goes to know that they are making a difference.  That is what makes fundraising such a hard task: convincing the donor to give money and trust Acme to do the right thing, without being able to show them a specific thing that their money did.  There is another concept called a “directed donation,” where funds are raised for a specific goal.  Directed donations are most commonly seen in capital improvement projects.  I’m leaving directed donations out of this discussion.

Donors are not restricted to just providing cash.  They can provide goods and services; this is an in-kind donation (IKD).  In-kind donations are unique because they should match Acme’s needs with what the donor has to offer.  (Receiving product that isn’t needed becomes wasteful in the costs to ship, receive, store and dispose of it.)  When Acme receives an in-kind donation, it offsets expenses that would otherwise be spent to get the products.  In our widget example, the donation is declared as income on Acme’s taxes and as a donation on Ajax’s taxes.  When Ajax decides to donate the widgets to Acme, Ajax is providing products of a certain value in lieu of a cash donation of the same value.

The end result of any of these actions is the same: Acme has widgets.  It doesn’t really matter if the fundraisers directly courted Ajax for the widgets or had a third-party donor provide cash to buy them.  The result is budget-neutral: the right amount of cash or products came in to match the same amount of expenses for the product procurement.

Here’s why free stuff isn’t free.  At the start of the year, Acme set forth a financial budget based on expected donations (IKD or cash) and expenses.  The cash value of the widgets that Ajax donated is applied to the budget and reduces the cash that needs to be raised that year to buy widgets.  Ajax’s donation doesn’t free up Acme’s budgeted amounts to be applied elsewhere; the donation met the business need of procuring widgets per the budget.  The budget is just a financial tool to manage incoming donations and outgoing expenses, regardless of whether the donations show up as cash or IKD.  A budget is very different from an account balance of real money in the bank.  The hope is that the budget, actual expenses and cash in the bank match up over the fiscal period.
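The budget arithmetic can be sketched in a few lines.  The $100 widget price comes from the Acme example above; the widget count is hypothetical.

```python
# Hypothetical figures illustrating why an in-kind donation is
# budget-neutral rather than "free" money.
WIDGET_COST = 100        # cost per widget, from the Acme example
WIDGETS_NEEDED = 300     # hypothetical annual need

budgeted_expense = WIDGET_COST * WIDGETS_NEEDED  # expense side of the budget

# Scenario A: all widgets bought with donated cash.
cash_raised_a = budgeted_expense
ikd_value_a = 0

# Scenario B: Ajax donates all the widgets in-kind.
cash_raised_b = 0
ikd_value_b = WIDGET_COST * WIDGETS_NEEDED

# Either way, the income applied to the budget equals the expense;
# nothing is "freed up" to spend elsewhere.
assert cash_raised_a + ikd_value_a == budgeted_expense
assert cash_raised_b + ikd_value_b == budgeted_expense
```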

In-kind donations often come with additional strings that are not part of a cash procurement.  The donations are usually large enough that the donor wants publicity to help shape its public image.  Here, Ajax wants to publicize that it donated to Acme, which helps create the impression that Ajax is a good corporate citizen.  Acme and Ajax producing a joint press release to promote the relationship doesn’t take much time.  But imagine if Ajax’s expectation is for Acme to take a photo and publish a story every time a widget is used.  The cost in Acme’s resources to meet that expectation could exceed the cost of just buying the widgets with cash.

So next time you hear that a project will only be done if product is given for free, ask whether the product really needs to be free, or just budget-neutral for the organization.

Historic Information Breakdowns

Risk managers study causes of tragedies to identify control measures in order to prevent future tragedies.  “There are no new ways to get in trouble, but many new ways to stay out of trouble.” — Gordon Graham

Nearly every After Action Report (AAR) that I’ve read has cited a breakdown in communications.  The right information didn’t get to the right place at the right time.  After hearing Gordon Graham at the IAEM convention, I recognized that the failures stretch back beyond just communications.  Gordon sets forth ten families of risk that can all be figured out ahead of an incident and used to prevent or mitigate it.  These categories of risk make sense to me and seemed to resonate with the rest of the audience too.

Here are a few common areas of breakdowns:

Standards: Did building codes exist?  Were they the right codes?  Were they enforced?  Were system backups and COOP testing done according to the standard?

Predict: Did the models provide accurate information?  Were public warnings based on these models?

External influences: How were the media, the public and social media managed?  Did they add positively or negatively to the response?

Command and politics: Did the government structure help or hurt?  Was the Incident Command System used?  Was situational awareness established?  Was information shared effectively?

Tactical: How was information shared to and from the first responders and front line workers?  Did these workers suffer from information overload?


“Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.”  — George Santayana

I included the full quote since few people actually know the source or quote it accurately.  Experience is a great teacher.  Most importantly, remembering the past helps shape the future in the right direction.

Below is a list of significant disasters that altered the direction of Emergency Management.  Think about what should be remembered from each of these incidents, and then how these events would have unfolded with today’s technology – including the internet and social media.

Seveso, Italy (1976).  An industrial accident at a small chemical manufacturing plant.  It resulted in the highest known exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in a residential population.  The local community was unaware of the risk.  It was a week before the public was notified of the release and another week before evacuations began.

Bhopal Methyl Isocyanate Release (1984).  An industrial accident that released 40 tonnes of MIC.  There was no public warning.  The exact mixture of the gas was not shared, so the first responders did not know how to treat the public.

Chernobyl Nuclear Disaster (1986).  An explosion at the plant and subsequent radioactive contamination of the surrounding area.  Large parts of Europe and even North America were contaminated.  The Communist regime hid the initial information and did not share it until another country detected the fallout.

Hurricane Hugo (1989).  At the time, this was the costliest hurricane disaster.  Insufficient damage assessment led to wrong resource allocation.  Survivors in rural communities were not located or responded to for many days.  Much of the response depended on manual systems.

Loma Prieta (1989).  A M6.9 earthquake that injured around 3,800 people in 15 seconds.  Extensive damage occurred in San Francisco’s Marina District, where many expensive homes built on filled ground collapsed and/or caught fire.  Major roads and bridges were also damaged.  The initial response focused on areas covered by the media.  Responding agencies had incompatible software and couldn’t share information.

Exxon Valdez (1989).  The American oil tanker Exxon Valdez struck Bligh Reef, causing a major oil spill.  The tanker did not turn rapidly enough, causing the collision with the reef.  The spill of between 41,000 and 132,000 cubic meters of oil polluted 1,900 km of coastline.  Mobilization of the response was slow due to “paper resources” that never existed in reality.  The computer systems in various agencies were incompatible, and there was no baseline data for comparison.

Hurricane Andrew (1992).  Andrew was the first named storm and only major hurricane of the otherwise inactive 1992 Atlantic hurricane season.  It was the final and third most powerful of the three Category 5 hurricanes to make landfall in the United States during the 20th century, after the Labor Day Hurricane of 1935 and Hurricane Camille in 1969.  The initial response was slowed by poor damage assessment and incompatible systems.

Northridge Earthquake (1994).  This M6.7 earthquake lasted 20 seconds.  Major damage occurred to 11 area hospitals.  The damage left FEMA unable to assess the situation prior to distributing assistance.  Seventy-two deaths were attributed to the earthquake, with over 9,000 injured.  In addition, the earthquake caused an estimated $20 billion in damage, making it one of the costliest natural disasters in U.S. history.

Izmit, Turkey Earthquake (1999).  This M7.6 earthquake struck in the overnight hours and lasted 37 seconds.  It killed around 17,000 people and left half a million homeless.  The mayor did not receive a damage report until 34 hours after the earthquake.  Some 70 percent of buildings in Turkey are unlicensed, meaning they never received building-code approval.  In this case, the governmental unit that established the codes was separate from the unit that enforced them, and the politics between the two caused the codes to go unenforced.

Sept 11 attacks (2001).  The numerous intelligence failures and response challenges during these attacks are well documented.

Florida hurricanes (2004).  The season was notable as one of the deadliest and costliest Atlantic hurricane seasons on record, with at least 3,132 deaths and roughly $50 billion (2004 US dollars) in damage.  The most notable storms were the five named storms that made landfall in the U.S. state of Florida (Bonnie, Charley, Frances, Ivan, and Jeanne), three of them with at least 115 mph (185 km/h) sustained winds.  This is the only time in recorded history that four hurricanes affected Florida.

Indian Ocean Tsunami (2004).  With a magnitude between 9.1 and 9.3, it is the second largest earthquake ever recorded on a seismograph.  The earthquake had the longest duration of faulting ever observed, between 8.3 and 10 minutes.  It caused the entire planet to vibrate as much as 1 cm (0.4 inches) and triggered other earthquakes as far away as Alaska.  There were no warning systems in the Indian Ocean, compounded by an inability to communicate with the population at risk.

Hurricanes Katrina and Rita (2005).  At least 1,836 people lost their lives in the hurricane itself and the subsequent floods, making Katrina the deadliest U.S. hurricane since the 1928 Okeechobee hurricane.  There were many evacuation failures due to inadequate consideration of demographics.  Massive communication failures occurred, with no alternatives considered.




Data in standard uniforms

Data standards

Standards are a common language for discussing and sharing data; they can be formally approved or ad hoc.  A standard is defined by the people who use it.  That is key.  In the end, it doesn’t matter whether the standard is approved by a governing body.  What matters is that the people who use it agree to it.  When used properly, standards save time and money, and ensure quality and completeness.

In a meeting about missing-persons data standards, it was stated that if the Red Cross, Facebook and Google agreed on a standard to share data, then everyone else would follow.  Not because the three organizations are a governing committee, but because they would be the three largest players in the space.

Data standards make it possible for you to share data within and between organizations.  They make it possible to compare different sets of data for improved analysis.  They form the basis of data infrastructure (framework for collecting, storing and retrieving data).
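As a sketch of what agreeing on a standard buys you, imagine two organizations exchanging shelter records.  The field names and the `validate` helper below are invented for illustration; they are not from any real standard.

```python
# A minimal, hypothetical record "standard": agreed field names and types.
SHELTER_RECORD_STANDARD = {
    "shelter_name": str,
    "capacity": int,
    "occupancy": int,
}

def validate(record: dict) -> bool:
    """Check a record against the agreed standard before sharing it."""
    return all(
        field in record and isinstance(record[field], ftype)
        for field, ftype in SHELTER_RECORD_STANDARD.items()
    )

# A record that follows the standard can be shared and compared.
incoming = {"shelter_name": "Central High", "capacity": 500, "occupancy": 432}
assert validate(incoming)

# A record that doesn't follow the standard is caught before it
# pollutes the shared data set.
assert not validate({"name": "Central High", "beds": "500"})
```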

Here are a few examples of data standards:


Data, data everywhere; and not a bit to eat

Data sets

Data sets are packages of information that you can use before, during or after a disaster.  It is important in planning to determine who has what data, if you can get access to it, and if it is compatible with your systems.

The following are common data sets that provide baseline data, or information that is available prior to any events occurring.  These are useful for planning purposes and exercises.

  • Baseline data: Topography; Political boundaries; Demography; Land ownership / use, Critical facilities
  • Scientific Data: Hydrography / hydrology; Soils; Geology; Seismology
  • Engineering and Environmental Data: Control structures: locks, dams, levees; Building inventories/codes; Transportation, bridges, tunnels; Utility infrastructure, pipelines, power lines; Water quality; Hazardous sites; Critical facilities
  • Economic Data
  • Census Data

Other data sets are only available during a response.  Some data sets are specific to the incident.  These are usually dynamic and depend heavily on solid damage assessment and data exchange agreements.

  • Critical infrastructure status: Road and bridge closures; Airport status; Utility status – water / electricity / gas / telephone
  • Secondary hazards: Condition of dams and levees; Fires and toxic release potential
  • Resource Information: Personnel deployment; Equipment deployment
  • Medical system condition: Hospital status; Nursing home status
  • Casualty information: Injuries and deaths; Medical evacuations; Location of trapped persons; Evacuation routes; Shelters

Keep in mind that not everyone uses data for the same purpose.  Many organizations do a damage assessment, and all of this data is called “damage assessment,” but it is not useful between organizations due to different needs and standards.  Some examples include:

  • FEMA looks at major infrastructure and systems, and overall impact.
  • SBA looks at impact to businesses.
  • USACE looks at impact to locks, dams, and their projects.
  • Red Cross looks at impact to individuals and families.
  • Insurance companies look at impact to policy holders.

Most of the data sets include a geographic location, such as an address, road, or other positioning information.  This helps transform the data set from just existing in a database to being analyzed through a GIS tool.  More on that in the GIS section.
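To see how a geographic attribute turns plain records into something a GIS-style query can use, here is a sketch using the haversine great-circle formula.  The shelter records, names, and coordinates are invented for illustration.

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical shelter records carrying a geographic attribute.
shelters = [
    {"name": "North Gym", "lat": 29.95, "lon": -90.07},
    {"name": "East Hall", "lat": 30.45, "lon": -91.15},
]
incident = (29.95, -90.08)  # hypothetical incident location

# Find the nearest shelter: a simple spatial query over plain records.
nearest = min(
    shelters,
    key=lambda s: distance_km(incident[0], incident[1], s["lat"], s["lon"]),
)
assert nearest["name"] == "North Gym"
```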


A database is a place to store data.  Databases range from simple to very complex.  Remember that we’re looking at this from the perspective of an emergency manager.  You should be familiar with the terms and some other basic information, but leave complex database design to the SMEs.

Here are a few terms that are used in database discussions:

  • Database – an organized collection of data
  • Table – data organized in rows and columns
  • Attribute or field – a variable or item, think of a cell in a spreadsheet
  • Record – a collection of attributes
  • Domain – the range of values an attribute may have
  • Key – unique data used to identify records
  • Index – a structure that organizes and orders records for fast lookup
  • Data dictionary & schema – documentation of connections between all the parts

Quick Tip: never buy or accept a database from someone without a properly documented data dictionary and schema.  Having that will save you hassle in the future when you need support or to change it.

A database can either store all the data in a single table or spread the data across multiple tables.  Large amounts of data are best handled in a multi-table database, but that creates challenges when sharing data, since related records must be linked back together.  This shouldn’t be a problem in a properly designed and documented database.

A single-table database is like keeping data in Microsoft Excel.  It is simple to create all the rows and columns on one sheet.  Lots of data points, and duplicate data points, may be better organized in a multi-table database.  Key fields are used to link records across tables.  Tables can have a one-to-one (1:1) relationship.  For example, if one table contained your personal information and another table contained your transcript, that would be a one-to-one relationship.  Tables can also have a one-to-many (1:many) relationship.  A table of course descriptions may link by class name to everyone’s individual grades: one course taken by many people.  The course description could be updated once without needing to touch the record of every person who took it.  If you want to dive deeper, start at http://en.wikipedia.org/wiki/Database_model.
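The course/grades example can be sketched as a small two-table database using Python’s built-in sqlite3 module.  The table and column names below are invented for illustration.

```python
import sqlite3

# Sketch of a one-to-many relationship: one course, many grade records.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE courses (
        course_id TEXT PRIMARY KEY,   -- key field linking the tables
        description TEXT
    );
    CREATE TABLE grades (
        student TEXT,
        course_id TEXT REFERENCES courses(course_id),
        grade TEXT
    );
""")
db.execute("INSERT INTO courses VALUES ('EM101', 'Intro to Emergency Management')")
db.executemany("INSERT INTO grades VALUES (?, ?, ?)",
               [("Alice", "EM101", "A"), ("Bob", "EM101", "B")])

# Update the course description once; every linked record sees the change.
db.execute("UPDATE courses SET description = 'Intro to EM (revised)' "
           "WHERE course_id = 'EM101'")
rows = db.execute("""
    SELECT g.student, c.description
    FROM grades g JOIN courses c ON g.course_id = c.course_id
""").fetchall()
assert len(rows) == 2
assert all(desc == "Intro to EM (revised)" for _, desc in rows)
```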


Meta data

Metadata is a way of providing information about data, or anything else really.  Metadata makes data retrieval and understanding much easier.  It also makes data gathering more complicated and difficult, since there is more work required on the gathering side.  In my experience, everyone agrees that good metadata is good.
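Here is a sketch of what metadata for a disaster data set might look like.  Every field name and value below is invented for illustration.

```python
# A hypothetical metadata record: data about a data set, not the data itself.
metadata = {
    "title": "Shelter occupancy counts",
    "collected_by": "Acme field teams",          # invented organization
    "collected_on": "2012-06-01",
    "units": "persons",
    "update_frequency": "hourly during activation",
    "fields": {
        "shelter_name": "Name of the facility",
        "occupancy": "Head count at time of report",
    },
}

# Good metadata answers "can I trust and reuse this?"
# before anyone opens the data itself.
assert "units" in metadata and "collected_on" in metadata
```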

Like everything else, garbage in = garbage out.

http://www.clientdatastandard.org/dcds/schema/1.0 is an example of capturing data and metadata.  Each field is described as to its use and what it contains.

Here is another way to look at metadata: the nutrition facts label is metadata for the banana, and the banana is metadata for the information listed in the nutrition facts.





So much information about information

Information.  There is a lot of information about information.  Effective use of information is expected, yet in my experience there are always “day after experts” who will tell you it could have been done better.

This leads to the overarching question about information: does it actually help anyone make a decision?  Craig Fugate has noted on a number of occasions: every EOC has the weather radar or satellite image of a hurricane making landfall, but how many people in the EOC do you think can actually read that image?  Much of what you see in an EOC is eye candy, part of the “theater of disaster” for visitors.

High-tech EOCs and pretty graphics are useless unless they help someone make a decision. – Craig Fugate

Information sources include: Historical records, personal experience, first responders, local and state EOCs, media (traditional & new), remote sensing, and the general public (inside and outside event area)

Imagine if all smart phones had sensors (like GPS, microphones, photo, video, and text capability) that people could use to upload multi-media information to first responders.

Oh wait.  They do.

Gathering and analyzing data

The challenge is in the gathering and analyzing of the data.  That is where the information hierarchy, or “DIKW,” model comes in to help make sense of it.

  • Data is a simple, specific fact.  Alone it doesn’t do anything and doesn’t have much value.  Data needs to be built on.  Example: There are 1,000 people in a shelter.
  • Information is data processed to be relevant and meaningful.  It adds the “what” element.  Example: There are 1,000 people in a shelter and no toilets.
  • Knowledge is information combined with opinions, skills and experience.  It adds the “how to” element.  Example: 1,000 people in a shelter will need 75 toilets.
  • Instead of Wisdom, I add Action.  Knowledge is good, but putting that knowledge to action is better.  Example: We need to get 75 toilets for the 1,000 people in the shelter.

Imagine being asked three questions: “So what’s the problem?”; “What would fix it?”; “How will you do it?”  When you ask these questions, you are loosely following the DIKW model to turn data into action.
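The shelter example can be sketched as code.  The 75-toilets-per-1,000-people ratio is taken from the example above; treat it as illustrative, not an official planning figure.

```python
import math

# Illustrative planning ratio from the shelter example above.
TOILETS_PER_1000_PEOPLE = 75

def toilets_needed(people: int) -> int:
    """Knowledge step: apply experience (a planning ratio) to information."""
    return math.ceil(people * TOILETS_PER_1000_PEOPLE / 1000)

population = 1000                    # Data: a fact on its own
# Information: 1,000 people in a shelter and no toilets.
needed = toilets_needed(population)  # Knowledge: how many are required
assert needed == 75
# Action: put the knowledge to work.
request = f"Order {needed} portable toilets for the shelter"
```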

Information processing steps

All data systems, including you, will follow basic steps to process data.  These steps can be very simple or increasingly complex depending on the needs.

Project management folks will recognize a critical step that is assumed before these can occur: the requirements.  What are your information needs?  What is the goal that you’re trying to achieve with this work?

  • Collection: Gathering and Recording Information
  • Evaluation: Determine confidence: credibility, reliability, validity, relevance
  • Abstracting: Editing and reducing information
  • Indexing: Classification for retrieval
  • Storage: Physical storage of information
  • Dissemination: Get the right information to the right people at the right time

Many systems are good at collecting data.  That’s usually the easy part.  Finding the right computer algorithms to handle evaluation through indexing is the hard part.  Google is an example of a company that has gotten this down so well that it became the top company for internet searches.
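The indexing and dissemination steps can be sketched with a toy inverted index, the same basic structure behind search engines.  The report texts below are invented for illustration.

```python
# Incoming situation reports (Collection).  Texts are hypothetical.
reports = {
    1: "bridge closed on highway 12",
    2: "shelter open at central high school",
    3: "highway 12 reopened to traffic",
}

# Indexing: classify each report by the words it contains, for retrieval.
index = {}
for report_id, text in reports.items():
    for word in text.split():
        index.setdefault(word, set()).add(report_id)

def search(word: str) -> set:
    """Dissemination: retrieve the right reports on demand."""
    return index.get(word, set())

assert search("highway") == {1, 3}
assert search("shelter") == {2}
```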




Cyber-security and disasters

More and more systems are being connected to share information, and IP networks provide a very cost-effective solution.  One physical network can be used to connect many different devices.  The water company can use a computer interface to control the water pumps and valves at treatment plants and throughout the distribution system.  The natural gas and electric providers can do the same.  Hospitals connect medical devices throughout the facility to central monitoring stations.  A few people in one room can watch all the ICU patients.  Fire departments, law enforcement and EMS can use a wireless network to communicate, dispatch units, provide navigation, and track vehicle telematics to manage maintenance cycles.

Not every network needs to lead to the internet; however, fully disconnected networks are rare and need to be specifically designed in from the start.  A physically separate system does provide the best security if all the data is kept internal to that network.  Remember that internal-only networks are still subject to security issues from internal threats.

Any network or device that does have an internet connection is subject to external attacks through that connection.  A malicious hacker could break into the water treatment system and change the valves to contaminate drinking water.  They could open all the gates on a dam, flooding downstream communities.  They could reroute electrical paths to overload circuits or shut down other areas.  They could change the programming so dispatchers send the farthest unit instead of the nearest, or create false dispatch instructions.

Cyber attacks can disable systems, but they can also create real-world disasters.  First responders are trained to consider secondary devices during intentionally started emergencies.  What if that secondary device is a cyber attack, or a cyber attack precedes a real event?  During the September 2001 attacks in New York City, a secondary effect of the planes hitting the towers was the crippling of the first responders’ radio system.  Imagine if a cyber attack had been coordinated with the planes’ impact.  The attackers could have turned all traffic lights to green, causing accidents at nearly every intersection.  This would snarl traffic and prevent first responders from getting to the towers.

A side note on the use of the term hacker.  A hacker is anyone who hacks together a technical or electronic solution in an uncommon way.  I explain it as “MacGyver’ing” a solution.  There is no positive or negative connotation in the term used that way.  Hacker also describes a person who breaks into computer systems by bypassing security.  A more accurate description is to call them a cracker, like a safecracker.  This type of hacker is divided into criminals (black hats) and ethical hackers (white hats).  Ethical hackers are people who test computer security by attempting to break into systems.

By now, you’re probably aware of the Anonymous hacker group.  They have been getting more organized and increasingly active in actions promoting internet freedom since 2008.  Often they’re called “hacktivists,” meaning they hack to protest.  There are many more malicious hackers out there with different agendas: status, economic, political, religious … any reason people might disagree could be a reason for a hacker.

Somewhere on the internet is a team of highly trained cyber ninjas that are constantly probing devices for openings.  They use a combination of attack forms including social engineering (phishing) attacks.  Automated tools probe IP addresses in a methodically efficient manner.  The brute force method is used to test common passwords on accounts across many logins.  Worms and Trojans are sent out to gather information and get behind defenses.  Any found weaknesses will be exploited.

Pew Internet reports that 79% of adults have access to the internet and two-thirds of American adults have broadband internet in their homes.  The lower cost of computers and internet access has dramatically increased the number of Americans online.  The stand-alone computer connected to the internet forces the home user into the roles of system administrator, software analyst, hardware engineer, and information security specialist.  They must be prepared to stop the dynamic onslaught of cyber ninjas, yet are armed only with the tools pre-loaded on the computer or off-the-shelf security software.

Organizations are in both a better and a worse position.  An enterprise network can afford full-time professionals to keep software updated, match security measures to emerging threats, and share information with professional peers.  Enterprise networks are also a larger target, especially for hackers seeking to boost their online reputation.

On Disasters

During a disaster, there will be many hastily formed networks.  The nature of rushed work increases the number of errors and loopholes in technical systems.

During the Haiti earthquake response, malware and viruses were common across the shared NGO networks.  The lack of security software on many of the laptops created major problems.  Some organizations purchased laptops and brought them on scene without any preloaded security software.  Other organizations hadn’t used their response computers in over a year, so no recent operating system patches or anti-virus updates had been applied.  USB sticks moved data from computer to computer, bypassing any network-level protections.  The spread of malware and viruses across the network caused problems and delays.

There are a number of key factors in designing a technology system for response use that differ from traditional IT installations.  One of the most important is a way for the system to be installed consistently by people with minimal technical skills.  Pre-configuration will ensure that the equipment is used efficiently and in the most secure manner.



Understanding networking

As a manager, it is not your responsibility to know how to configure a router and make the network work.  The best way to think about networking is the “black box” theory: you don’t care how the individual parts work, you need to know what they are capable of.  Believe it or not, networking is really simple.

At its simplest, a network is a few computers connected by wires to a network device that shares information between them.  A network is similar to a big post office that moves information packets electronically.  Each computer has a unique name that helps the network devices know what information goes to which computer.

The internet is an IP-based network.  IP stands for Internet Protocol.  Easy, huh?  The Transmission Control Protocol is the way that computers break up large data chunks to send across the internet.  Stick the two together and you get the commonly referenced TCP/IP.  There are other forms of message handling—such as User Datagram Protocol (UDP)—to move information across the internet.  You don’t need to know how these work or move information.  Just know that IP is the backbone of the internet.
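Here is a minimal sketch of TCP-style data exchange using Python’s standard socket library.  `socketpair()` gives two already-connected endpoints, so the example runs without touching a real network.

```python
import socket

# Two connected endpoints standing in for two machines on a network.
a, b = socket.socketpair()

# TCP's job: break data into packets, deliver them reliably,
# and reassemble them in order at the far end.
a.sendall(b"situation report: all clear")
received = b.recv(1024)
assert received == b"situation report: all clear"

a.close()
b.close()
```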

Any data that you can turn into an IP packet can travel over an IP network; that data can also travel local networks and the internet.  When a phone converts voice to an IP packet, it is called a Voice over IP phone (VoIP) meaning that it can send your phone call over the same network as email, web browsing, and everything else.

Blah-blah over IP is nothing fancy.  That means that someone has designed a network device (or interface) that translates information from a source to an IP-packet, and back.  You’ll hear about Radio-over-IP, Video-over-IP, Computer-over-IP, and just about everything else.

Data standards are really important in this area.  When each vendor comes up with its own way of doing ____-over-IP, the vendors will likely not be compatible unless they use a standard.  While there are organizations that publish standards, a standard’s true usefulness is proven by if and how people use it.

The International Telecommunications Union (ITU) has a series of standards for videoconferencing, including ITU H.320 and H.264.  When Cisco Telepresence was released, it was designed to bring a meeting-room presence to teleconferencing.  Part of the design was full-size displays that blended both conference rooms.  It was not compatible with any other video conferencing systems.  The Cisco sales rep explained to me that their product would look poor if used with lower-quality non-telepresence systems, so the decision was made to use a non-standard data format.  The problem is that this would require companies to invest in two separate video conferencing systems.  More recent advances have allowed some mixed use of video conferencing systems.

Now that we’ve talked a bit about what can go across the network, let us turn back to the network itself.

There are many different formats of networks.  A quick internet search on “network topology” will show the different forms.  Each has an advantage and a disadvantage.  For this course, the focus will be on a tree topology.  An internet connection enters a site through one point.  Switches and routers are used to split that internet connection to all the individual computers.

A demarc (short for demarcation) point is where a utility enters a building.  It is also the point that separates ownership between the utility company and the building owner.  The electrical demarc in a residential home is commonly the electric meter.  The power company will handle everything up to and including the meter.  The home owner handles everything from the meter to the power outlets.

A telephone demarc is located at the telephone network interface.  The network demarc is located at the network interface device (aka smartjack).  These can be located anywhere in a building, but I’ve found that most wireline utilities come in together.  The cabling itself can be copper wire, fiber optic or some other type.

The demarc is the head of the network for that site.  In a tree topology, this is where the site’s primary router is located.  A router is a network device that moves data packets between two different networks.  Here, the router directs the packets, passing only those that need to travel on the other network.  This makes it ideal for separating two networks: it reduces congestion by keeping local data within the local network.  A primary router, sometimes called the site’s core router, is the one that controls the other routers and is mission-critical for the site to stay connected.
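The router’s core decision can be sketched in a few lines: keep the packet local if its destination is inside the local network, otherwise pass it on.  This is a simplified illustration, assuming a hypothetical site whose local network is 192.168.1.0/24; real routers consult full routing tables.

```python
import ipaddress

# Hypothetical local network for this site (assumption for illustration).
LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")

def forward_to_other_network(destination: str) -> bool:
    """Return True if a packet with this destination must leave the local network."""
    return ipaddress.ip_address(destination) not in LOCAL_NET

print(forward_to_other_network("192.168.1.42"))  # stays local -> False
print(forward_to_other_network("8.8.8.8"))       # leaves the site -> True
```

Only the second packet would be sent upstream; the first never leaves the site, which is how the router keeps local traffic from congesting the internet link.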

Routers are the major component that gives a network its flexibility.  Professional (non-consumer) grade routers allow for the installation of modules, both physical and logical, that connect the router to different devices.  These modules commonly allow a router to connect to a wireline circuit (T1, T3, etc.), a wireless circuit (wifi, cellular), or a different cabling type (twisted pair, coax).  Modules can also connect a router to a phone system, radio system, video system and so on.

Other network devices used to spread network segments out from the router include switches and hubs.  Switches can have different interfaces and be used to connect different network types.  This is handy in older buildings, where you may need to keep an existing network style and overlay it with a different type of cabling or connections.  Hubs are essentially non-intelligent splitters that just provide more ports.

The Warriors of the Net video provides an entertaining explanation of the different components.  Again, from a manager’s perspective, you do not need to get very technical with the network components.

Autonomous systems and robotics

An autonomous system is a system with a trigger that causes an action without the involvement of a person.  Two well known examples are tsunami buoys and earthquake sensors.  These continuously monitor a series of sensors.  When the sensors register a reading that exceeds a predefined threshold (the trigger), a signal is sent to a computer to generate warning alerts (the action).  These systems can range from the very simple to the extremely complex.
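The trigger/action pattern described above can be sketched in a few lines.  The sensor names and the threshold value below are made up for illustration; a real warning system would calibrate its thresholds carefully.

```python
# A minimal sketch of an autonomous trigger/action loop.
# The threshold and readings are hypothetical.
THRESHOLD = 7.0  # reading above this value is the trigger

def check_sensors(readings):
    """Return an alert message (the action) for every reading that exceeds the threshold."""
    alerts = []
    for sensor_id, value in readings.items():
        if value > THRESHOLD:  # the trigger
            alerts.append(f"ALERT: sensor {sensor_id} reported {value}")  # the action
    return alerts

print(check_sensors({"buoy-1": 2.4, "buoy-2": 9.1}))
```

No person is in the loop: the reading crosses the threshold and the alert goes out.  Everything else in a real system, from sensor networks to alert distribution, is elaboration on this same pattern.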

One could argue that the backup sensors on cars are an autonomous system: the trigger is the driver shifting into reverse; the computer then turns on the backup lights, activates the camera, turns on the internal screen, and starts the proximity sensors beeping.  I think it is a bit of a stretch, but it does provide a simple example.

The other side of this spectrum is the Mars rovers.  Due to the time delay in communications between Earth and Mars, a rover cannot be directly controlled.  NASA gives a general command for the rover to head to a new destination.  The rover acts independently to drive over the terrain and makes decisions to avoid obstacles.  On board the rover is the Autonomous Exploration for Gathering Increased Science (AEGIS) software, part of the Onboard Autonomous Science Investigation System (OASIS), which automatically analyzes and prioritizes science targets.  This allows more work to be done without human intervention.

Somewhere between the two is the Roomba.  This autonomous vacuum moves around the house cleaning up.  It will learn the layout of a room to be more effective in future runs.  When complete, it docks to recharge, and at a set time interval it does this all again.  Now don’t laugh about the Roomba; it is a very commonly hacked robot for people interested in DIY robotics.  Microsoft Robotics Developer Studio has built-in modules specific to controlling the Roomba, which provides both mobility and a way to control it.  Microsoft has also released modules that adapt the Kinect sensor to sense the environment, providing vastly better sensing than the traditional range-finder and light sensors.  Microsoft isn’t the only player in this market; Lego Mindstorms markets robotics as a toy for children ages eight and up.  Robotics isn’t just for engineering students at high-end tech universities.

There is enough technology in existence today to make huge leaps in the use of robotics.  The main challenge for robotics is acceptance by the general public.

Watch the videos about the DARPA autonomous car, Urban Search and Rescue robots, the BigDog robotic transport and Eythor Bender’s demonstration of human exoskeletons.  Combine these and we can envision some major transformations.

Take the computational guts from the autonomous car and put them into a fire engine.  Now the fire engine can be dispatched and navigate to a fire scene independently.  Once on-scene, sensors can detect the heat from the fires – even if they are in the walls and not visible to the human eye.  Robots from the fire engine can be sent into the structure.  Heavier duty robots can pull hoses and start to extinguish the fire.  Other robots can perform a search on the house to assist survivors out, and rescue those unable to escape.  Communication systems will link together all the sensors on the robots to generate a complete common operating picture that all robots use in decision making.

A similar thing can be done with an ambulance.  Focus just on the stretcher.  Imagine if the stretcher would automatically exit the ambulance and follow the medic with all the necessary equipment.  The stretcher could be equipped to go up and down steep stairs, make tight turns and remain stable.  Automating the lifting and bending done by EMS workers handling patients on stretchers would reduce the number of back injuries caused by improper lifting.  This would keep more EMS workers in better health, which reduces sick leave and workers’ compensation claims.

Robotics in search and rescue could be the one thing that saves the most lives, among both victims and rescuers.  A building collapses and the SAR team responds.  On the scene, the workers set up specialized radio antennas at different points around the building site.  They release dozens, if not hundreds, of spider- or snake-like robots.  Each robot has the ability to autonomously move through the rubble.  They are light enough not to disturb the rubble the way a human would.  They are numerous enough to cover the building far more quickly than humans could.  The combined sensor data of their locations and scans would quickly build a three-dimensional model of the rubble.  They are aware of each other’s locations, so they don’t bunch up in one area or miss another.  Heat, sound and motion sensors could detect people.  Once this initial scan is done, the SAR team will know where the survivors are and can communicate with them through the robots.  The team will evaluate the 3D model for the best entry paths to reach and rescue the survivors.  If the situation is unstable, larger robots can be sent ahead of the team to provide additional structural support and reduce the risk of collapse.  If a robot can’t communicate directly with the antennas outside, the robots can form a mesh network to pass information along.
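The mesh-networking idea at the end of that scenario can be sketched simply: a robot that cannot reach the base antenna directly relays its data through whichever neighbors it can hear.  The link map below is invented for illustration; a real mesh would discover links dynamically.

```python
from collections import deque

# Hypothetical radio links: which nodes can hear which. "base" is the
# antenna outside the rubble; the robots are invented names.
links = {
    "robot-a": ["robot-b"],
    "robot-b": ["robot-a", "robot-c", "base"],
    "robot-c": ["robot-b"],
    "base": ["robot-b"],
}

def relay_path(start, goal="base"):
    """Breadth-first search for the shortest chain of relays out to the base."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in links[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route to the base

print(relay_path("robot-c"))  # ['robot-c', 'robot-b', 'base']
```

Here robot-c is buried too deep to reach the antenna, so its data hops through robot-b.  Real mesh protocols add retransmission and route repair, but the relaying idea is the same.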

Are you reaching the public or just sending notifications?

Public notification is successfully informing the public about what is going on during an emergency.  The key is to reach people in a timely manner; where they are; how they want to be reached; with positive, actionable information; and in a culturally appropriate manner.

Timely: Information is useless if it takes too long to reach people or is out-of-date by the time it arrives.  Imagine if a building fire alarm took 10 minutes from the time the alert is sent to the time the alarm started to ring.  A building fire alarm needs to ring quickly to give people more time to evacuate.  A wildland fire evacuation notice is very similar; the fire moves extremely quickly and can change direction unexpectedly.

How they want to be reached:  Think of how you interact with your family and friends.  Some you will call by phone, some you email, some you text, and there may even be a few you mail a real letter to.  You might even admit to having the crazy relative where you’d rather talk to their spouse and have the message passed along.  The public is the same way: all different.  This means your message must use many different methods to reach the whole audience.  Some will want text messages to their cell phone; some will want a voice call to their land-line phone; some will want an email; and a few may only be reachable through a community or faith leader.

Each medium needs to convey similar information, but it need not be the exact same words.  Why limit an email to 140 characters just because Twitter is one of many mediums?  For convenience and speed, a message might have a long version and a short version.  The short version could cover Twitter, SMS, and other short message forms: the basic information, along with where to get more details.  The long version could cover email and voice calls: it would start with the basics and then provide the additional information.

Many of the emergency messages that would be sent can be pre-scripted, with blanks left for the immediate details.  Consider weather watches and warnings.  These are scripted messages that contain all the evergreen information, with spots to insert the timely, specific weather details.  Use the time before an emergency to wordsmith the message and get the necessary approvals on when it will be used.  Chasing multiple approvals after an emergency has started works directly against sending a timely message.
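A pre-scripted message with blanks, in both a long and a short form, can be sketched as a pair of templates.  All the wording, the location, and the URL below are hypothetical placeholders; the point is that the evergreen text is approved in advance and only the specifics are filled in at send time.

```python
# Hypothetical pre-approved templates with blanks for the timely details.
LONG_TEMPLATE = (
    "A {hazard} warning is in effect for {area} until {until}. "
    "Move indoors, stay away from windows, and monitor local media. "
    "More information: {url}"
)
SHORT_TEMPLATE = "{hazard} warning for {area} until {until}. Info: {url}"

# The immediate details filled in when the emergency actually happens.
details = {
    "hazard": "severe thunderstorm",
    "area": "Fairfax County",
    "until": "6:00 PM",
    "url": "http://example.org/alerts",  # placeholder address
}

long_msg = LONG_TEMPLATE.format(**details)
short_msg = SHORT_TEMPLATE.format(**details)
assert len(short_msg) <= 140  # must fit SMS/Twitter-style channels
print(short_msg)
```

Because the templates were wordsmithed and approved ahead of time, the only work at send time is filling in the blanks, which keeps the message timely.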

Where they are: This can refer to two places.  Where someone is geographically, and where someone is in the mentality of readiness.

A thing that bugs me is signing up for weather alerts by zip code or locality.  I still get alerts for that location even when I travel elsewhere.  I want to sign up for one system that follows me.  This already happens with weather alerts through mobile apps, but it doesn’t happen with local EM alerts.  I have hope that CMAS is changing this.

I live in Fairfax, VA and work in Washington, DC.  I’m registered for county-level alerts in Fairfax, VA; Arlington, VA; and Washington, DC.  Why do I have Arlington, VA alerts?  Because I commute through Arlington and this gives me information on my path.  This becomes amusing on metropolitan-wide alerts as I can see which system sends the information out first and which one takes the longest.

When I travel to another city, I do not get local alerts for that city.  I still get the alerts from home, which is fine, since I can take actions to protect my family and property.  When travelling I could do my research, find the local alerting system and sign up for it; but let’s be honest, that’s too much work.  The capability to do better exists today, using a feature called “cell broadcast.”  An SMS alert message is point-to-point: it originates somewhere and goes directly to a single recipient.  SMS alerting requires lots of individual messages all containing the same information, which can bog down systems.  A cell broadcast is a point-to-area message: it originates somewhere and is broadcast out to all the phones in a specific area, usually by cell tower.  This doesn’t overload the system because it is one message to many phones.  The technology is commonly used in Europe.  Use in the United States is very limited because it was originally released as a way to do local advertising.  Walk past the front of a store, and you’d get a text message with a coupon or ad.  People were naturally against this, and cell broadcasting has been minimized in the US.  The feature is hard to find on most US phones, and it defaults to opt-in with no channels loaded.
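The scaling difference between point-to-point SMS and cell broadcast is simple arithmetic.  The subscriber and tower counts below are invented for illustration; real numbers vary by market.

```python
# Back-of-the-envelope comparison of alerting loads.
# Both figures are hypothetical.
subscribers = 250_000  # phones in the affected area
towers = 40            # cell towers covering that area

sms_messages = subscribers    # point-to-point: one message per recipient
broadcast_messages = towers   # point-to-area: one broadcast per tower

print(f"SMS: {sms_messages:,} messages; cell broadcast: {broadcast_messages}")
```

A quarter of a million queued messages versus forty broadcasts is the whole argument: the broadcast load stays flat no matter how many phones are in the area.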

People also need to be reached where they are in their mentality of readiness.  Telling someone to use their emergency preparedness kit isn’t helpful if they don’t accept that they need to have one.  Someone may have a fatalistic attitude of “there’s nothing I can do” or “it is God’s will.”  The message needs to be crafted in a way that reaches these people where they are mentally.  This leads right into the next point.

Positive actionable information: I chuckle when I hear someone say “don’t forget” or “don’t panic.”  How do you not do something?  Mentally, you must flip the message around to figure out what you need to be doing, which assumes the reader will correctly infer the opposite action you expect of them.  Craft the message as a positive action so the receiver knows what you want them to do and has something to focus on.  The two statements above should be “remember” and “stay calm.”

I forget this all the time in parenting.  I tell my kids things like “don’t touch that,” “stop making that noise,” “don’t go over there,” instead of “keep your hands in your pockets,” “stay quiet” and “stay over here.”  People should be told what to do, not what not to do.  Messages in a disaster should be simple and direct so they can be quickly understood and acted on.

Culturally appropriate: Being culturally appropriate starts with using the right language.  Keep in mind that just because someone speaks another language doesn’t mean they can read materials written in that language.  A common mistake I hear is when people say they’ll make print materials in Spanish to reach a Spanish-speaking audience.  Reading and speaking are different skills.  A native Puerto Rican told me that he’d rather distribute our materials in English than in Spanish.  Apparently, it is easier for his community to understand materials written in English than materials written in European Spanish, because Puerto Rican Spanish is that different.  European Spanish (or Castilian Spanish) is what is commonly taught in schools and is the default when asking for a translation.  The lesson here is to ask someone from the community the best way to provide written or auditory materials to that community.  Translate to their specific dialect.

Culturally appropriate also refers to the sensitivities of the people.  Migrant farm workers are sensitive to the immigration status of themselves, their family and their friends.  Consider FEMA assistance to these workers before or after a disaster.  The workers will see the DHS logo on the materials.  What else is part of DHS?  U.S. Immigration and Customs Enforcement.  Do you really think that people who are sensitive about their immigration status want to engage with any DHS office?

Some communities get all their trusted information from a community leader.  Information from other sources may not be readily accepted by the community and will have less impact.  Public notifications to these communities need to involve, and go through, the community leader.  Individuals don’t have relationships with organizations; individuals have relationships with the individuals who represent an organization.  Think about it for a minute: your best organizational relationships likely run through an individual, or a series of people, you’ve built trust with.  That will be a key when we talk about social media: how do you make your organization interact with individuals on the individual level?

Next time you write a public notification, check off the points I listed above and see if you can improve the effectiveness of the message.