The Myths created by the Mercator Projection

The Mercator projection is the biggest myth about the Earth that we pass on (often unknowingly) to our children. OK, I don't know if it is the biggest, but it certainly builds the wrong perception of the globe.

The Mercator projection was originally designed in the mid-1500s. It was a highly useful projection because it kept course lines constant: a ship's navigator could plot a course with a straight line from one port to another. No map projection can keep all features accurate, so the Mercator projection distorts the size and shape of large objects. Land masses near the Equator appear at close to their true relative size, while land masses toward the poles are magnified significantly.

Historic Information Breakdowns

Risk managers study causes of tragedies to identify control measures in order to prevent future tragedies.  “There are no new ways to get in trouble, but many new ways to stay out of trouble.” — Gordon Graham

Nearly every After Action Report (AAR) that I've read has cited a breakdown in communications. The right information didn't get to the right place at the right time. After hearing Gordon Graham at the IAEM convention, I recognized that the failures stretch back beyond just communications. Gordon sets forth 10 families of risk that can all be figured out ahead of an incident and used to prevent or mitigate it. These categories of risk make sense to me and seemed to resonate with the rest of the audience too.

Here are a few common areas of breakdowns:

Standards: Did building codes exist?  Were they the right codes?  Were they enforced?  Were system backups and COOP testing done according to the standard?

Predict: Did the models provide accurate information?  Were public warnings based on these models?

External influences: How were the media, the public and social media managed?  Did they add positively or negatively to the response?

Command and politics: Did the government structure help or hurt?  Was the Incident Command System used?  Was situational awareness established?  Was information shared effectively?

Tactical: How was information shared to and from the first responders and front line workers?  Did these workers suffer from information overload?


“Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.”  — George Santayana

I added that since few people actually know the source or quote it accurately.  Experience is a great teacher.  Most importantly, remembering the past helps shape the future in the right direction.

Below is a list of significant disasters that altered the direction of Emergency Management.  Think about what should be remembered from each of these incidents, and then how these events would have unfolded with today's technology, including the internet and social media.

Seveso, Italy (1976).  An industrial accident in a small chemical manufacturing plant.  It resulted in the highest known exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in a residential population.  The local community was unaware of the risk.  It was a week before public notification of the release and another week before evacuations.

Bhopal Methyl Isocyanate Release (1984).  An industrial accident that released 40 tonnes of MIC.  There was no public warning.  The exact mixture of the gas was not shared, so the first responders did not know how to treat the public.

Chernobyl Nuclear Disaster (1986).  An explosion at the plant and subsequent radioactive contamination of the surrounding geographic area.  Large parts of Europe and even North America were contaminated.  The Communist regime hid the initial information and did not share it until another country detected the release.

Hurricane Hugo (1989).  At the time, this was the costliest hurricane disaster.  An insufficient damage assessment led to incorrect resource allocation.  Survivors in rural communities were not located or reached for many days.  Much of the response was dependent on manual systems.

Loma Prieta (1989).  A M7 earthquake that injured around 3,800 people in 15 seconds.  Extensive damage also occurred in San Francisco's Marina District, where many expensive homes built on filled ground collapsed and/or caught fire.  Besides that, major roads and bridges were damaged.  The initial response focused on areas covered by the media.  Responding agencies had incompatible software and couldn't share information.

Exxon Valdez (1989).  The American oil tanker Exxon Valdez struck Bligh Reef, causing a major oil spill.  The tanker did not turn rapidly enough at one point, causing the collision with the reef.  This caused an oil spill of between 41,000 and 132,000 cubic meters, polluting 1,900 km of coastline.  Mobilization of the response was slow due to "paper resources" that never existed in reality.  The computer systems in various agencies were incompatible and there was no baseline data for comparison.

Hurricane Andrew (1992).  Andrew was the first named storm and only major hurricane of the otherwise inactive 1992 Atlantic hurricane season.  It was the last and least intense of the three Category 5 hurricanes to make landfall in the United States during the 20th century, after the Labor Day Hurricane of 1935 and Hurricane Camille in 1969.  The initial response was slowed due to poor damage assessment and incompatible systems.

Northridge Earthquake (1994).  This M6.7 earthquake lasted 20 seconds.  Major damage occurred to 11 area hospitals.  The damage prevented FEMA from assessing the situation prior to distributing assistance.  Seventy-two deaths were attributed to the earthquake, with over 9,000 injured.  In addition, the earthquake caused an estimated $20 billion in damage, making it one of the costliest natural disasters in U.S. history.

Izmit, Turkey Earthquake (1999).  This M7.6 earthquake struck in the overnight hours and lasted 37 seconds.  It killed around 17,000 people and left half a million people homeless.  The Mayor did not receive a damage report until 34 hours after the earthquake.  Some 70 percent of buildings in Turkey were unlicensed, meaning they were never approved under the building codes.  In this situation, the governmental unit that established the codes was separate from the unit that enforced them.  The politics between the two units caused the codes to go unenforced.

Sept 11 attacks (2001).  The numerous intelligence failures and response challenges during these attacks are well documented.

Florida hurricanes (2004).  The season was notable as one of the deadliest and most costly Atlantic hurricane seasons on record in the last decade, with at least 3,132 deaths and roughly $50 billion (2004 US dollars) in damage.  The most notable storms for the season were the five named storms that made landfall in the U.S. state of Florida, three of them with at least 115 mph (185 km/h) sustained winds: Tropical Storm Bonnie and Hurricanes Charley, Frances, Ivan, and Jeanne.  This is the only time in recorded history that four hurricanes have affected Florida in a single season.

Indian Ocean Tsunami (2004).  With a magnitude of between 9.1 and 9.3, it is the second largest earthquake ever recorded on a seismograph.  This earthquake had the longest duration of faulting ever observed, between 8.3 and 10 minutes.  It caused the entire planet to vibrate as much as 1 cm (0.4 inches) and triggered other earthquakes as far away as Alaska.  There were no warning systems in the Indian Ocean, a problem compounded by an inability to communicate with the population at risk.

Hurricanes Katrina and Rita (2005).  At least 1,836 people lost their lives in the actual hurricane and in the subsequent floods, making Katrina the deadliest U.S. hurricane since the 1928 Okeechobee hurricane.  There were many evacuation failures due to inadequate consideration of the demographics.  Massive communication failures occurred with no alternatives considered.


GIS: Applications for emergency, crisis and risk management

Geographical information is often shared between organizations through ESRI shapefiles.  A shapefile is a data interoperability standard developed by ESRI.  ESRI is the top dog in the GIS community.  Many geographical applications can create shapefiles, so the format isn't limited to just ESRI-approved software.  Another common file format is Keyhole Markup Language (KML).  This standard is associated with Google Earth, but is becoming more widely used.  The National Hurricane Center provides its data in multiple formats on its website.
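As a sketch of how simple KML is under the hood, the snippet below builds a one-placemark KML document using only Python's standard library.  The placemark name and coordinates are made up for illustration; real NHC products carry far more structure.

```python
# Sketch: building a minimal KML placemark with Python's standard library.
import xml.etree.ElementTree as ET

def make_placemark_kml(name, lon, lat):
    """Return a minimal KML document containing one placemark.
    Note: KML writes coordinates as longitude,latitude (in that order)."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Hypothetical example point (roughly Miami, FL)
print(make_placemark_kml("Example point", -80.3839, 25.7617))
```

The output opens directly in Google Earth, which is exactly why KML spread beyond it: it is just readable XML.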

A virtual globe is a geographic data model that adds information such as elevation and the Earth's curvature to give the impression of a 3D globe on a 2D screen.  There are over 30 different virtual globes and the list continues to grow.  Each one will have different features, such as: zoom, tilt, rotate, overlays provided, custom overlays, queries, and analysis.

Google Earth is an example of a virtual globe.  Most of the data resides on a remote server.  Images are streamed over the internet to the client, which assembles the model.  The graphical information is limited to just what is shown on the screen.  As the user zooms in, the broad low-resolution images are replaced with smaller, higher-resolution images.  Blurry images that get progressively sharper are evidence of this process.

Geographical data forms the basis for creating geographical models of damage.  Using accurate geographical data makes a large difference in modeling.  On the large scale, it is how mountain ranges impact weather.  On the urban scale, it is how the movement of air between building forms speeds up or slows down the dispersion of airborne particles.  Without the details of these structures, models would be less science and more guesswork.

HAZUS-MH analyzes potential losses from floods, hurricane winds and earthquakes. Estimates of hazard-related damage are produced before, or after, a disaster occurs.  HAZUS can estimate losses in terms of physical damage, economics, and population.

Potential loss estimates analyzed in HAZUS-MH include:

  • Physical damage to residential and commercial buildings, schools, critical facilities, and infrastructure;
  • Economic loss, including lost jobs, business interruptions, repair and reconstruction costs; and
  • Social impacts, including estimates of shelter requirements, displaced households, and population exposed to scenario floods, earthquakes and hurricanes.

CAMEO is a collection of applications created by EPA’s Office of Emergency Management (OEM) and the National Oceanic and Atmospheric Administration Office (NOAA) of Response and Restoration.  The primary purpose is to plan and respond to chemical emergencies.  The CAMEO system integrates a chemical database and a method to manage the data, an air dispersion model, and a mapping capability.
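To make the air dispersion piece concrete, here is a textbook Gaussian plume calculation.  This is a simplified stand-in for the kind of model CAMEO integrates, not its actual code: the dispersion coefficients below are rough neutral-stability assumptions chosen for illustration, not a regulatory parameterization.

```python
# Sketch of a textbook Gaussian plume model (illustrative, not CAMEO's code).
import math

def plume_concentration(q, u, x, y, z, h):
    """Gaussian plume concentration (g/m^3).
    q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind offset (m), z: receptor height (m), h: effective release height (m).
    The sigma power laws are rough neutral-stability assumptions."""
    sigma_y = 0.08 * x ** 0.9   # crosswind spread grows with distance
    sigma_z = 0.06 * x ** 0.9   # vertical spread grows with distance
    crosswind = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Ground-level centerline concentration falls off with downwind distance.
near = plume_concentration(q=100, u=5, x=500, y=0, z=0, h=20)
far = plume_concentration(q=100, u=5, x=5000, y=0, z=0, h=20)
print(near, far)
```

Mapping concentrations like these onto geographic coordinates is what turns a dispersion model into a GIS layer a decision maker can act on.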

The Consequences Assessment Toolset (CATS) was developed by the Defense Threat Reduction Agency (DTRA).  It is available free to response organizations.  The suite of tools can be used during the entire lifecycle of a disaster: to help create planning scenarios, to analyze information during the response to support decision making, and to gather data after the response for after-action reporting and lessons learned.

Here is an important tangent.  Before you use a tool or model, it is important to know who designed it and for what purpose.  Adapted models or a model’s secondary use need to be used carefully.  Even with the primary use of a model, check the assumptions.  Assumptions may have changed since the tool was created.  Slight changes in the assumptions or input can have significant impacts when the output is logarithmically scaled from the input.
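A quick illustration of that sensitivity: whenever a model's output grows exponentially with an input, as with the standard relationship between earthquake magnitude and radiated seismic energy (log10 of energy grows with 1.5 × magnitude), a small shift in an assumption multiplies the result.

```python
# Illustration: exponentially scaled outputs amplify small input changes.
def energy_ratio(m1, m2):
    """Ratio of radiated seismic energy between magnitudes m1 and m2,
    using the standard log10(E) ~ 1.5*M relationship."""
    return 10 ** (1.5 * (m2 - m1))

# A 0.3 magnitude difference in an input assumption nearly triples the energy.
print(energy_ratio(6.7, 7.0))
```

The same caution applies to plume models, flood depths, and loss curves: verify the assumptions before trusting the magnitude of the output.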

This PowerPoint provides some examples of how GIS information can help managers understand the risks of a current or future incident.  GIS_Applications slide deck.

GIS: Got vector, Victor? No, Raster, Roger.

Graphical data comes in two forms: vector and raster.  Vector data is composed of mathematical points, lines and polygons.  Since the data is mathematical, it can be scaled to any level without a loss of quality.

Raster data, also known as bitmap data, is composed of different colored squares next to each other.  In its simplest form, take a sheet of grid paper and color in the squares; that is a bitmap image.  Digital cameras take bitmap images.  When you zoom into the image enough, you will start to see the squares.

Satellite and aerial images are bitmap images because they are digital photographs.  A human can look at the image and see the objects made by the pixels.  A computer has a harder time since it looks at the pixels on an individual level.  The digital photograph must carry meta-data so the computer can understand the basics of the image.

Vector images can be converted to raster images.  Just tell the computer how many pixels, and it will handle the rest.  Raster images cannot be easily converted into vector images.  A computer can draw contour lines at differences in hues, lightness and other color characteristics, but otherwise can’t do much.
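The easy direction, vector to raster, can be sketched with the classic Bresenham line algorithm, which picks the grid squares a mathematical line segment passes through:

```python
# Sketch: rasterizing a vector line segment onto a pixel grid
# using Bresenham's line algorithm (the easy direction of conversion).
def rasterize_line(x0, y0, x1, y1):
    """Return the grid cells a line segment crosses, as (x, y) tuples."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:          # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:          # step vertically
            err += dx
            y0 += sy
    return cells

print(rasterize_line(0, 0, 5, 2))
```

Going the other way, recovering the clean line from those colored cells, is the hard inference problem the paragraph above describes.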

The difference between raster (or bitmap) and vector images.
Another comparison of raster versus vector.


When the Haiti earthquake struck, there was no road map of the country.  This hampered relief efforts because there was no way for international responders to know what was where.  A crowd-sourced project based on OpenStreetMap surfaced.  Volunteers around the globe looked at before and after satellite images of Haiti.  They drew the vector data by looking at the raster data.  They were able to mark the roads, and then add meta-data of each road's condition, name and other features of importance.  This work was coordinated online and downloaded directly to responders' GPS units, which used the data to navigate the responders.  The map continued to be enhanced with building names and layouts from before the earthquake.  Now an accurate pre- and post-earthquake map of Haiti exists.

It is better to have accurately captured vector data that is loaded with the meta-data.  However, much of the source data that exists is raster information so there are conversions occurring.  Converting data either way between raster and vector creates a margin for error and inaccuracies.

It is a good idea to check the data you are using against the source data.  A common example is when a road's vector doesn't line up exactly with the raster satellite image of the area.  Raw satellite imagery doesn't include an overlay of the geographical coordinates.  Either the system or a person must anchor the image to a geographical location.  While the center may be exact, the edges can be slightly off due to the angle of the satellite to the surface plane.  This means that we can't be certain whether the variation is caused by an error in placing the satellite image or an error in the location of the vector.  These errors are often small and most people won't notice them … BUT it depends on the level of accuracy needed in the map.



How far would you walk for a degree change?

This set of images shows the difference between a second, a minute and a degree based on an office in Washington, DC.  The office is located at 38° 53’ 52.59” N; 77° 02’ 29.60” W.  A couple of seconds is about the length of a city block.  As you would expect, North-South points 1 minute apart along a meridian are 1 nautical mile apart.  East-West points 1 minute apart along the parallel are only 0.78 nautical miles apart.  Points that are 1 degree apart North-South are 60 nautical miles apart, yet East-West points 1 degree apart are only 47 nautical miles apart.  This approximation is only valid at this latitude because meridians converge at the poles.  Closer to the poles, East-West points 1 degree apart grow closer together; closer to the equator, they grow farther apart.

One second changes.


One minute changes. N/S points are 1 nautical mile apart. E/W points are 0.78 nautical miles apart.


One degree changes. N/S points are 60 nautical miles apart. E/W points are 47 nautical miles apart.
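The East-West figures above can be checked with a one-line spherical approximation: a degree of longitude spans 60 nautical miles at the equator and shrinks with the cosine of the latitude.

```python
# Check: nautical miles per degree of longitude shrink with cos(latitude).
# Assumes a spherical Earth where 1 minute of latitude = 1 nautical mile.
import math

def nm_per_degree_longitude(lat_deg):
    """Approximate nautical miles spanned by 1 degree of longitude
    at the given latitude."""
    return 60 * math.cos(math.radians(lat_deg))

# Latitude of the Washington, DC office above (38° 53' 52.59" N)
print(nm_per_degree_longitude(38.8979))  # ~46.7, matching the ~47 NM figure
```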


I’m sure that all this talk about degrees, minutes and seconds has made you wonder why there are 60 seconds in a minute, 60 minutes in a degree and 360 degrees in a circle.  We have the Babylonians to thank for that.  They used a base-60 numeric system (sexagesimal) that survives in both time measurements and angles.  You are familiar with a base-10 numeric system (denary), and maybe base-16 (hexadecimal) if you program computers.  Latitude and longitude are angles of arc measured from the center of the Earth.

Sexagesimal numbers named each place past the point in Latin: prima, secunda, tertia, etc.  Minutes are the first position.  The second position is 1/60th of a minute, or seconds as we call them.

While we are off topic, there are 24 hours in a day because the ancient Egyptians used sundials that showed 10 parts day, 12 parts night, 1 part morning twilight, and 1 part evening twilight.  The Egyptians used a base-12 numbering system, so it was natural to break day and night each into twelve parts on the sundial.  Hours were not a fixed length of time, though, until the Greeks got involved.

Has anyone ever come up to me in a disaster and asked why our time is a base-60 numbering sequence?  Well, no.  But it is handy knowledge when you’re at a cocktail party, the conversation hits an uncomfortable silence, and you have nothing else to say.



GIS: Layering on the data

We can identify a location on a map using latitude and longitude.  Remember there are other ways to identify a location.  The US National Grid system is the Federal Geographic Data Committee’s preferred method.  Amateur radio operators use the Maidenhead Locator System.  Technology today allows coordinates to be converted from one system to another.  Just be very clear about which coordinate system you are using and conform to accepted standards when writing out information.
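As a sketch of what such a conversion involves, here is the standard lat/lon-to-Maidenhead calculation (field, square and subsquare), treating the locator as three progressively finer grids:

```python
# Sketch: converting latitude/longitude to a 6-character Maidenhead locator,
# the grid system used by amateur radio operators.
def maidenhead(lat, lon):
    """Return the Maidenhead locator (field + square + subsquare)."""
    lon += 180.0  # shift so longitude runs 0..360
    lat += 90.0   # shift so latitude runs 0..180
    field = chr(ord("A") + int(lon // 20)) + chr(ord("A") + int(lat // 10))
    square = str(int(lon % 20 // 2)) + str(int(lat % 10 // 1))
    subsq = chr(ord("a") + int(lon % 2 * 12)) + chr(ord("a") + int(lat % 1 * 24))
    return field + square + subsq

# The Washington, DC office coordinates used earlier on this blog
print(maidenhead(38.8979, -77.0416))  # -> FM18lv
```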

Our world is physical.  All objects that you can touch have a location and take space.  You can interact with these, such as buildings, roads, mountains, water features, ground features and vegetation.

Objects also have non-physical properties, such as information about the object.  This information could be age, value, names, messages — anything that isn’t physical.  Inputs, outputs, relationships and risk are other non-physical features.

Think of people.  We all have physical characteristics that can be used to define us: height, weight, shape, color, strength, flexibility, etc.  We also have non-physical characteristics that can define us: intellect, emotions, spirituality, productivity, leadership, social networks, financial value, etc.

Both physical and non-physical characteristics of objects interact with other objects.  These can be textually explained but may be better understood with a graphical representation.  I could provide you a list of buildings and their floor space, but you may more quickly see the differences if we overlaid the plans of the buildings so you could roughly compare floor area and outer dimensions.

There is evidence that Cro-Magnon people drew animals and migration routes more than 15,000 years ago.  Dr. John Snow used mapping to study a cholera outbreak in London in 1854.  He drew a map of the neighborhood, drew points for the locations of individual cases, and drew an X for each water pump.  It was easily visible that one pump sat in the middle of the outbreak.  Dr. Snow simply removed the handle from the pump and stopped the outbreak.

Original map made by John Snow in 1854. Cholera cases are highlighted in black.

Modern GIS is dependent on geocoding data and layers.  A complete database combined with solid analysis tools allows the leap from map making to true GIS analysis.  GIS is all about the relationships and space of data in the real world.

Layers were easier to describe when most people were familiar with acetate sheets (clear overhead transparency paper).  Imagine a stack of clear sheets where each sheet contains a different set of data.  The very bottom sheet is the base map.

A base map can be referred to as the common data, or the part of the map that you don’t need to create.  It is usually the base geography, though what counts as “base” is up for debate.  A base map can be the geographic, demographic or topographic information that serves as the common base.  ESRI lists the following as examples of base maps that may be selected: World Imagery, World Street Map, World Topographic Map, World Shaded Relief, World Physical Map, World Terrain Base, USA Topographic Maps, and Ocean Basemap.  In reality, the base map is just another layer that can be turned on or off.

Each layer contains a specific set of related data laid out by geographic coordinates.  Examples of different layers can be transportation systems, gas, electric, water & sewer, telecommunications, terrain, vegetation, buildings, hydrology and subsurface geology.

Layers will be turned on and off depending on what the end-user needs to see.  If I’m interested in the terrain and vegetation to predict wildland fire movements, then hide utilities since they won’t make a difference to the major motions.  A good GIS person will show major water features and highways as they will provide some fire breaks.
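A minimal sketch of this toggling idea, with hypothetical layer names not tied to any particular GIS package:

```python
# Sketch: layers as named data sets toggled per use case
# (hypothetical layer names, configured here for a wildfire view).
layers = {
    "terrain": True,
    "vegetation": True,
    "hydrology": True,          # water features double as fire breaks
    "highways": True,           # so do major roads
    "utilities": False,         # hidden: irrelevant to fire spread
    "subsurface_geology": False,
}

def visible_layers(layer_flags):
    """Return the layer names currently toggled on, in a stable order."""
    return [name for name, on in sorted(layer_flags.items()) if on]

print(visible_layers(layers))
```

For the earthquake view described below, the same structure just gets a different set of flags.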

Just as the world exists in three dimensions, so can your layers.  Including the elevation (or height) in the geographic data will show the volume of something.  When modeling a hazardous plume, it can show if the plume is at the surface or above the population.  As Dale Loberger pointed out to me: flood analysis, plume dispersion or volumetric surface measures should be done in 3D.  Using technology to display a 3D model that allows you to view from many angles will hopefully reduce your urge to request a printed map when you really need an analysis tool.  Think of GIS as a visual interactive database.

Layers can also include time and historical elements: the fourth dimension (4D).  Historical data can be used to show the growth or shrinkage of a feature.  The wildland-urban interface is an easy example.  Imagine that a community evaluated its risk and modeled a wildland fire near the community 10 years ago.  Today, the community has expanded yet the prediction has not been updated.  A map of three layers can be used to show which assumptions have changed.  The layers would be the community 10 years ago, the wildland fire prediction, and the community today.  Areas of rapid expansion will stand out.  Small roads from 10 years ago may have been widened, which may impact the direction and spread of the wildfire.

If earthquake research is my thing, then definitely keep the hydrology and subsurface information.  I’d like to see if the buildings in the community are built on solid footings, or if the ground will liquefy.  Once the major movement areas are identified, then add back the utilities to see where they cross higher risk areas.




Getting grounded in GIS: Map making

GIS starts with mapping.  Map making is cartography.  The earliest known map is dated 6,200 BC: a Babylonian clay tablet recovered in 1930 that depicted district boundaries, hills and water features.  Mapping helped to advance human knowledge because it was a new way to capture and share information with each other and future generations.  Passing information to future generations is a basis for developing culture.

Maps of the stars are also maps.  Caves with dots on the walls representing stars have been found dating as far back as 16,500 BC.  There are other cave drawings older than the 6,200 BC tablet that appear to represent mountains and rivers.  It is hard to determine whether these older drawings depicted actual geographic features or were just drawings.

I often joke with GIS professionals that they just make pretty maps.  In reality, GIS professionals take a large three-dimensional object (known as the Earth), merge it with data sets through geo-coding, and then provide it to customers in an easy to understand two-dimensional image.  Computers that run GIS are often the most powerful systems in an office because of the large data sets, complex graphics, and significant calculations to perform.

Mapping’s inherent challenge has always been to take a three dimensional object and project it in two dimensions.  A map can provide information on area, shape, direction, bearing, distance and scale.  However, every projection distorts some properties to keep others accurate.  There is no two-dimensional projection that accurately captures all properties of a three-dimensional object.  It is important to select the best projection for the information that is needed.

How hard is it to convert a 3D object to a 2D representation?  Next time you are peeling an orange, try to get the peel off in a few large sections.  Flatten the orange peel.  You will see it stretch, twist and rip as it goes flat.

When mapping smaller areas — such as neighborhoods and cities — an assumption of flatness can be made.  It does depend on the risk of inaccuracy for the map’s purpose.  A few yards of inaccuracy may be fine for consumer level navigation.  A lower level of inaccuracy may be required for landing airplanes in the middle of a runway.  Property boundary disputes may need accuracy to within a few inches.

Some historically powerful island nations may prefer a Mercator projection to give the appearance of a larger land mass.  Just as with statistics, a projection’s distortion can be used to enhance or diminish a feature.

The Mercator projection is commonly used in maritime navigation because it represents lines of constant bearing (rhumb lines) as straight lines.  The other “feature” of a Mercator projection is the size and shape distortion of objects, which increases from the equator to the poles, where land masses appear significantly larger.

Why are they called projections?  Imagine a bare light bulb representing the Earth.  Take a piece of paper and hold it to the light.  The features of the Earth are projected on to the paper.  Holding the paper at different angles or shapes make different projections.  Draw a square, circle and triangle on the light bulb.  Now you’ll really be able to observe the distortion of these shapes.

The Mercator projection uses a cylinder aligned with the Earth’s axis.  Maps can be referred to by the projection shape, or by the property that is accurately represented.  Projection shapes include cylindrical, conic and plane (flat).  Preserved properties include direction (azimuthal), local shape, area, distance, and shortest route.  There are hybrid maps that blend properties of different projections.  While no single property is kept fully accurate, the result “looks” better and more accurate overall.  Again, it all goes back to the purpose of the map.

Overlay of three projections to show distortions.
Overlay of two projections to show distortions.

There are a few terms used in mapping and GIS that you need to know.

Latitude (aka parallels): These are the horizontal lines on a map, parallel to the equator.  You can remember them because you give people “latitude” to get their job done: they can go as far to the sides as they like, but that doesn’t include a promotion or demotion.  Latitudes are named 0° at the equator to 90° North or South at the poles.  A 1° difference in latitude is just about 69 miles.  Except for the equator, a circle of latitude is not the shortest distance between two points.

Longitude (aka meridians): These vertical lines on a map run North-South.  You can remember them because of the Prime Meridian.  The Prime Meridian is 0° and runs through Greenwich, England.  The meridians run from there East or West to the International Date Line at 180°.  On the equator, 1° of longitude is ~69 miles, converging to 0 miles at the poles.

Great Circle: The shortest distance between two points occurs along a great circle.  A great circle cuts a sphere into two equal halves.  A great circle is the largest circle that can be drawn on a sphere.  All meridians are great circles.  The only parallel that is a great circle is the equator.

Nautical Mile: One minute of latitude along any meridian is a nautical mile.  One nautical mile = 1.852 kilometers = 1.15 miles = 6,076 feet.  Note that meridians are used because the distance along parallels changes.  The distance along a meridian is the distance between parallels.
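These definitions tie together in a short sketch: the haversine formula gives the great-circle distance between two points, assuming a spherical Earth of radius 6371 km, and one minute of latitude along a meridian comes out to about 1.85 km, one nautical mile.

```python
# Sketch: great-circle distance via the haversine formula
# (assumes a spherical Earth of mean radius 6371 km).
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Shortest distance in km between two lat/lon points along a great circle."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# One minute of latitude along a meridian: ~1.85 km, i.e. one nautical mile.
print(great_circle_km(38.0, -77.0, 38.0 + 1 / 60, -77.0))
```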

