Why free stuff isn’t free

I have heard too many times from people in disaster response: “If we can just get the product donated, then we can do…”  If a person or organization is willing to do a program only if everything is provided for free, they are simply stating that the program is not important enough to budget for.  That attitude minimizes the value of the program and makes me wonder if it was important enough in the first place.  They miss the point of in-kind donations.  An in-kind donation is when someone gives you something at no financial cost.  But don’t think it is free.  Free stuff is never free.  Everything in a supply-and-demand economy has a cost.  There are financial, time, and resource costs associated with everything.

Let’s look at a fictitious non-profit group, Acme.  Acme’s mission is to bring internet access to disaster survivors.  One of the tools they use is a widget, and hundreds of widgets are used each year.  Each widget costs $100 and is produced by Ajax.  There are two ways to get widgets: Acme can buy the widgets from Ajax using donated money, or Acme can ask Ajax to donate them.  Either way, procuring a widget meets Acme’s need.

Acme’s fundraisers are tasked with raising the funds needed to cover the organization’s annual budget.  As money comes in to the organization, it is applied to the annual budget.  The money offsets the general (or core) expenses including facilities, salaries, program maintenance, daily operations … and the purchase of Ajax widgets.  In general, donors like to see where their money goes so they know they are making a difference.  That is what makes fundraising such a hard task: convincing the donor to give money and trust Acme to do the right thing without being able to show them a specific thing their money did.  There is another concept called a “directed donation,” where funds are raised for a specific goal.  Directed donations are most commonly seen as capital improvement projects.  I’m leaving directed donations out of this discussion.

Donors are not restricted to just providing cash.  They can provide goods and services; this is an in-kind donation (IKD).  In-kind donations are unique because they should match Acme’s needs with what the donor has to offer.  (Receiving product that isn’t needed becomes wasteful in the costs to ship, receive, store and dispose of it.)  When Acme receives an in-kind donation, it offsets expenses that would otherwise be spent to get the products.  For our example of widgets, this is declared as income on Acme’s taxes, and as a donation on Ajax’s taxes.  When Ajax decides to donate the widgets to Acme, Ajax is providing a value of products in lieu of a cash donation of the same value.

The end result of any of these actions is the same: Acme has widgets.  It didn’t really matter if the fundraisers directly courted Ajax for the widgets or had a third-party donor provide cash to buy them.  The result is budget-neutral: the right amount of cash or products came in to match the same amount of expenses for the product procurement.

Here’s why free stuff isn’t free.  At the start of the year, Acme set forth a financial budget based on expected donations (IKD or cash) and expenses.  The cash value of the widgets that Ajax donated gets applied to the budget and reduces the cash that needs to be raised that year to buy widgets.  Ajax’s donation doesn’t free up Acme’s budgeted amounts to be applied elsewhere; the donation met the business need of procuring widgets per the budget.  The budget is just a financial tool to manage incoming donations and outgoing expenses, regardless of whether a donation shows up as cash or IKD.  A budget is very different from an account balance of real money in the bank.  The hope is that the budget, actual expenses and cash in the bank match up during the fiscal period.
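To make the budget-neutral arithmetic concrete, here is a minimal sketch.  Only the $100 widget price comes from the example; the 300-widget annual need is a made-up number for illustration.

```python
WIDGET_PRICE = 100          # dollars per widget, from the Acme/Ajax example
WIDGETS_NEEDED = 300        # hypothetical annual budget line for widgets

def cash_to_raise(widgets_donated):
    """Cash the fundraisers still need to bring in for the widget line item."""
    remaining = WIDGETS_NEEDED - widgets_donated
    return remaining * WIDGET_PRICE

# All cash: raise $30,000 and buy every widget.
cash_only = cash_to_raise(0)        # 30000
# All in-kind: Ajax donates every widget, so no widget cash is raised.
all_in_kind = cash_to_raise(300)    # 0
# Mixed: 100 widgets donated still leaves $20,000 to raise.
mixed = cash_to_raise(100)          # 20000
```

In every scenario the value procured is the same $30,000 of widgets; the in-kind donation simply reduces both sides of the ledger at once, which is why it frees up nothing elsewhere in the budget.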

In-kind donations often come with additional strings that are not part of a cash procurement.  The donations are usually large enough that the donor wants publicity, which helps shape the public’s impression of the donor.  Here, Ajax wants to be able to publicize that it donated to Acme, which helps create the public impression that Ajax is a good corporate citizen.  Acme and Ajax producing a joint press release to promote the relationship doesn’t take too much time.  But imagine if Ajax’s expectation is for Acme to take a photo and publish a story every time a widget is used.  The cost in Acme’s resources to meet that expectation could exceed the cost of just buying the widgets with cash.

So next time you hear that a project will only be done if product is given for free, ask whether the product really needs to be free or just budget-neutral for the organization.

Historic Information Breakdowns

Risk managers study the causes of tragedies to identify control measures that prevent future tragedies.  “There are no new ways to get in trouble, but many new ways to stay out of trouble.” — Gordon Graham

Nearly every After Action Report (AAR) that I’ve read has cited a breakdown in communications.  The right information didn’t get to the right place at the right time.  After hearing Gordon Graham at the IAEM convention, I recognized that the failures stretch back beyond just communications.  Gordon sets forth ten families of risk that can all be figured out ahead of an incident and used to prevent or mitigate it.  These categories of risk make sense to me and seemed to resonate with the rest of the audience too.

Here are a few common areas of breakdowns:

Standards: Did building codes exist?  Were they the right codes?  Were they enforced?  Were system backups and COOP testing done according to the standard?

Predict: Did the models provide accurate information?  Were public warnings based on these models?

External influences: How were the media, the public and social media managed?  Did they add positively or negatively to the response?

Command and politics: Did the government structure help or hurt?  Was the Incident Command System used?  Was situational awareness maintained?  Was information shared effectively?

Tactical: How was information shared to and from the first responders and front-line workers?  Did these workers suffer from information overload?


“Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.”  — George Santayana

I add that since few people actually know the source or quote it accurately.  Experience is a great teacher.  Most importantly, remembering the past helps shape the future in the right direction.

Below is a list of significant disasters that altered the direction of emergency management.  Think about what should be remembered from each of these incidents, and then how these events would have unfolded with today’s technology, including the internet and social media.

Seveso, Italy (1976).  An industrial accident at a small chemical manufacturing plant resulted in the highest known exposure of a residential population to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD).  The local community was unaware of the risk.  It was a week before public notification of the release and another week before evacuations.

Bhopal Methyl Isocyanate Release (1984).  An industrial accident released about 40 tonnes of MIC.  There was no public warning.  The exact mixture of the gas was not shared, so the first responders did not know how to treat the public.

Chernobyl Nuclear Disaster (1986).  An explosion at the plant led to radioactive contamination of the surrounding area; large parts of Europe and even North America were contaminated.  The Communist regime hid the initial information and did not share it until another country detected the fallout.

Hurricane Hugo (1989).  At the time, this was the costliest hurricane disaster.  Insufficient damage assessment led to wrong resource allocation.  Survivors in rural communities were not located and responded to for many days.  Much of the response depended on manual systems.

Loma Prieta (1989).  A M7 earthquake that injured around 3,800 people in 15 seconds.  Extensive damage occurred in San Francisco’s Marina District, where many expensive homes built on filled ground collapsed and/or caught fire, and major roads and bridges were damaged.  The initial response focused on areas covered by the media.  Responding agencies had incompatible software and couldn’t share information.

Exxon Valdez (1989).  The American oil tanker Exxon Valdez failed to turn rapidly enough and struck Bligh Reef, causing a major oil spill of between 41,000 and 132,000 cubic meters that polluted 1,900 km of coastline.  Mobilization of the response was slow due to “paper resources” that never existed in reality.  The computer systems in various agencies were incompatible and there was no baseline data for comparison.

Hurricane Andrew (1992).  Andrew was the first named storm and only major hurricane of the otherwise inactive 1992 Atlantic hurricane season.  It was the final and third most powerful of the three Category 5 hurricanes to make landfall in the United States during the 20th century, after the Labor Day Hurricane of 1935 and Hurricane Camille in 1969.  The initial response was slowed by poor damage assessment and incompatible systems.

Northridge Earthquake (1994).  This M6.7 earthquake lasted 20 seconds.  Major damage occurred to 11 area hospitals.  The damage left FEMA unable to assess the situation prior to distributing assistance.  Seventy-two deaths were attributed to the earthquake, with over 9,000 injured.  The earthquake caused an estimated $20 billion in damage, making it one of the costliest natural disasters in U.S. history.

Izmit, Turkey Earthquake (1999).  This M7.6 earthquake struck in the overnight hours and lasted 37 seconds.  It killed around 17,000 people and left half a million homeless.  The mayor did not receive a damage report until 34 hours after the earthquake.  Some 70 percent of buildings in Turkey are unlicensed, meaning they never received building-code approval.  In this case, the governmental unit that established the codes was separate from the unit that enforced them, and the politics between the two caused the codes to go unenforced.

Sept 11 attacks (2001).  The numerous intelligence failures and response challenges during these attacks are well documented.

Florida hurricanes (2004).  The season was notable as one of the deadliest and costliest Atlantic hurricane seasons on record in the last decade, with at least 3,132 deaths and roughly $50 billion (2004 US dollars) in damage.  The most notable storms were the five named storms that made landfall in the U.S. state of Florida, three of them with at least 115 mph (185 km/h) sustained winds: Tropical Storm Bonnie and Hurricanes Charley, Frances, Ivan, and Jeanne.  This is the only time in recorded history that four hurricanes affected Florida.

Indian Ocean Tsunami (2004).  With a magnitude between 9.1 and 9.3, it is the second largest earthquake ever recorded on a seismograph.  This earthquake had the longest duration of faulting ever observed, between 8.3 and 10 minutes.  It caused the entire planet to vibrate as much as 1 cm (0.4 inches) and triggered other earthquakes as far away as Alaska.  There were no warning systems in the Indian Ocean, a problem compounded by an inability to communicate with the population at risk.

Hurricanes Katrina and Rita (2005).  At least 1,836 people lost their lives in the hurricane and the subsequent floods, making Katrina the deadliest U.S. hurricane since the 1928 Okeechobee hurricane.  There were many evacuation failures due to inadequate consideration of the demographics.  Massive communication failures occurred, with no alternatives considered.




Cyber-security and disasters

More and more systems are being connected to share information, and IP networks provide a very cost-effective solution.  One physical network can be used to connect many different devices.  The water company can use a computer interface to control the water pumps and valves at treatment plants and throughout the distribution system.  The natural gas and electric providers can do the same.  Hospitals connect medical devices throughout the facility to central monitoring stations.  A few people in one room can watch all the ICU patients.  Fire departments, law enforcement and EMS can use a wireless network to communicate, dispatch units, provide navigation, and track vehicle telematics to manage maintenance cycles.

Not every network needs to lead to the internet, but fully disconnected networks are rare and must be specifically designed in from the start.  A physically separate system does provide the best security if all the data is kept internal to that network.  Remember that internal-only networks are still subject to security issues from internal threats.

Any network or device that does have an internet connection is subject to external attacks through that connection.  A malicious hacker could break into the water treatment system and change the valves to contaminate drinking water.  They could open all the gates on a dam, flooding downstream communities.  They could reroute electrical paths to overload circuits or shut down other areas.  They could change the programming so dispatchers send the farthest unit instead of the nearest, or create false dispatch instructions.

Cyber attacks can disable systems, but they can also create real-world disasters.  First responders are trained to consider secondary devices during intentionally started emergencies.  What if that secondary device is a cyber attack, or a cyber attack precedes a real event?  During the September 2001 attacks in New York City, a secondary effect of the plane hitting the tower was the crippling of the first responders’ radio system.  Imagine if a cyber attack had been coordinated with the plane’s impact.  The attackers could have turned all traffic lights green, causing accidents at nearly every intersection.  This would snarl traffic and prevent the first responders from getting to the towers.

A side note on the use of the term hacker.  A hacker is anyone who hacks together a technical or electronic solution in an uncommon way.  I explain it as “MacGyver’ing” a solution.  There is no positive or negative connotation in the term used that way.  Hacker also describes a person who breaks into computer systems by bypassing security.  A more accurate term for that is cracker, like a safe cracker.  This type of hacker is divided into criminals (black hats) and ethical hackers (white hats).  Ethical hackers are people who test computer security by attempting to break into systems.

By now, you’re probably aware of the Anonymous hacker group.  They have been collectively getting more organized and increasing in actions that drive toward internet freedom since 2008.  Often they’re called “hacktivists” meaning they hack to protest.  There are many more malicious hackers out there with different agendas: status, economic, political, religious … any reason people might disagree could be a reason for a hacker.

Somewhere on the internet is a team of highly trained cyber ninjas constantly probing devices for openings.  They use a combination of attack forms, including social engineering (phishing) attacks.  Automated tools probe IP addresses methodically and efficiently.  Brute force is used to test common passwords across many logins.  Worms and Trojans are sent out to gather information and get behind defenses.  Any weakness found will be exploited.

Pew Internet reports that 79% of adults have access to the internet and two-thirds of American adults have broadband internet in their home.  The lower cost of computers and internet access has dramatically increased the number of Americans online.  The stand-alone computer connected to the internet has forced home users into the roles of system administrator, software analyst, hardware engineer, and information security specialist.  They must be prepared to stop the dynamic onslaught of cyber ninjas, yet are armed only with the tools pre-loaded on the computer or off-the-shelf security software.

Organizations are in both a better and a worse position.  An enterprise network can afford full-time professionals to keep software updated, match security measures to emerging threats, and share information with professional peers.  But enterprise networks are also a larger target, especially for a hacker looking to build an online reputation.

On Disasters

During a disaster, there will be many hastily formed networks.  The rushed nature of the work increases the number of errors and loopholes in technical systems.

During the Haiti earthquake response, malware and viruses were common across the shared NGO networks.  The lack of security software on many of the laptops created major problems.  Some organizations purchased laptops and brought them on-scene without any preloaded security software.  Other organizations hadn’t used their response computers in over a year, so no recent operating system patches or anti-virus updates had been applied.  USB sticks moved data from computer to computer, bypassing any network-level protections.  The spread of malware and viruses across the networks caused problems and delays.

There are a number of key factors in designing a technology system for response use that differ from traditional IT installations.  One of the most important is a way for the system to be installed consistently by people with minimal technical skills.  Pre-configuration helps ensure the equipment is used efficiently and as securely as possible.
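A pre-deployment check like the one that would have caught the stale Haiti laptops can be sketched in a few lines.  The 30-day and 7-day thresholds here are assumptions for illustration; a real organization would set its own policy.

```python
from datetime import date, timedelta

# Assumed policy thresholds for this sketch, not a standard.
MAX_PATCH_AGE = timedelta(days=30)   # OS patches must be under 30 days old
MAX_AV_AGE = timedelta(days=7)       # AV definitions must be under 7 days old

def laptop_ready(last_os_patch, last_av_update, today):
    """Return a list of problems; an empty list means ready to deploy."""
    problems = []
    if today - last_os_patch > MAX_PATCH_AGE:
        problems.append("operating system patches are stale")
    if today - last_av_update > MAX_AV_AGE:
        problems.append("anti-virus definitions are stale")
    return problems

# A response laptop untouched since last year's deployment fails both checks.
issues = laptop_ready(date(2010, 1, 15), date(2010, 1, 15), date(2011, 1, 10))
```

Running a check like this as part of a routine readiness cycle turns “we haven’t touched these laptops in a year” from a surprise on-scene into a maintenance task at home.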



Autonomous systems and robotics

An autonomous system is a system where a trigger causes an action without the involvement of a person.  Two well-known examples are tsunami buoys and earthquake sensors.  These continuously monitor a series of sensors.  When a sensor registers a reading that exceeds a predefined threshold (the trigger), a signal is sent to a computer to generate warning alerts (the action).  These systems range from the very simple to the extremely complex.
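The trigger/action pattern is simple enough to sketch in a few lines.  The station names and the threshold value below are invented for illustration, not taken from any real warning system.

```python
SHAKE_THRESHOLD = 4.0   # hypothetical threshold for a sensor reading

def check_sensors(readings, threshold=SHAKE_THRESHOLD):
    """Scan sensor readings and return alerts for any that exceed the threshold."""
    alerts = []
    for station, value in readings.items():
        if value >= threshold:                                # the trigger
            alerts.append(f"ALERT: {station} reads {value}")  # the action
    return alerts

alerts = check_sensors({"buoy-1": 0.8, "station-7": 5.2})
# Only station-7 exceeds the threshold, so exactly one alert is generated.
```

Everything from a tsunami buoy to a backup sensor is some elaboration of this loop; the complexity lives in the sensors, the thresholds and the actions, not the pattern.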

One could argue that the backup sensors on cars are an autonomous system: the trigger is the driver shifting into reverse; the computer turns on the backup lights, activates the camera, turns on the internal screen, and activates the sensors to start beeping.  I think it is a bit of a stretch, but it does provide a simple example.

At the other end of the spectrum are the Mars rovers.  Due to the communications delay between Earth and Mars, a rover cannot be directly controlled.  NASA gives a general command for the rover to head to a new destination.  The rover acts independently to drive over the terrain and makes decisions to avoid obstacles.  On board is the Autonomous Exploration for Gathering Increased Science (AEGIS) software, part of the Onboard Autonomous Science Investigation System (OASIS), which automatically analyzes and prioritizes science targets.  This allows more work to be done without human intervention.

Somewhere between the two is the Roomba.  This autonomous vacuum moves around the house cleaning up.  It learns the layout of a room to be more effective in future runs.  When complete, it docks to recharge, and at a set time interval it does it all again.  Now don’t laugh about the Roomba; it is a very commonly hacked robot for people interested in DIY robotics.  Microsoft Robotics Developer Studio has built-in modules specifically for controlling the Roomba, which provides mobility and a way to control it.  Microsoft has also released modules that adapt the Kinect sensor to sense the environment, providing vastly better sensing than traditional range-finders and light sensors.  Microsoft isn’t the only player in this market; Lego Mindstorms markets robotics as a toy for children ages eight and up.  Robotics isn’t just for engineering students at high-end tech universities.

There is enough technology in existence today to make huge leaps in the use of robotics.  The main challenge of robotics is the acceptance by the general public.

Watch the videos about the DARPA autonomous car, urban search and rescue robots, the BigDog robotic transport and Eythor Bender’s demonstration of human exoskeletons.  Combine these and we can envision some major transformations.

Take the computational guts from the autonomous car and put them into a fire engine.  Now the fire engine can be dispatched and navigate to a fire scene independently.  Once on-scene, sensors can detect the heat from the fire, even if it is in the walls and not visible to the human eye.  Robots from the fire engine can be sent into the structure.  Heavier-duty robots can pull hoses and start to extinguish the fire.  Other robots can search the house, assist survivors out, and rescue those unable to escape.  Communication systems will link together all the sensors on the robots to generate a complete common operating picture that all the robots use in decision making.

A similar thing can be done with an ambulance.  Focus just on the stretcher.  Imagine if the stretcher automatically exited the ambulance and followed the medic with all the necessary equipment.  The stretcher could be equipped to go up and down steep stairs, make tight turns and remain stable.  Automating the lifting and bending done by EMS workers handling patients on stretchers would reduce the number of back injuries caused by improper lifting.  That would keep more EMS workers in better health, reducing sick leave and workers’ compensation claims.

Robotics in search and rescue could be the one thing that saves the greatest number of lives, among both victims and rescuers.  A building collapses and the SAR team responds.  On scene, the workers set up specialized radio antennas at different points around the building site.  They release dozens, if not hundreds, of spider- or snake-like robots.  Each robot can autonomously move through the rubble.  They are light enough not to disturb the debris as a human would, and numerous enough to cover the building much more quickly than a human could.  The combined sensor data of their locations and scans would quickly build a three-dimensional model of the rubble.  They are aware of each other’s locations so they don’t bunch up in one area or miss another.  Heat, sound and motion sensors could detect people.  Once this initial scan is done, the SAR team will know where the survivors are and can communicate with them through the robots.  The team will evaluate the 3D model for the best entry paths to reach and rescue the survivors.  If the situation is unstable, larger robots can be sent ahead of the team to provide additional structural support and reduce the risk of collapse.  If a robot can’t communicate directly with the antennas outside, the robots can form a mesh network to pass information along.
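The mesh-networking idea at the end of that scenario can be sketched as a simple message flood: a buried robot that cannot hear the surface antenna passes its message to any neighbor in radio range, and so on until the message gets out.  The robot names and link table below are invented for illustration.

```python
from collections import deque

def can_reach(links, start, goal):
    """Breadth-first flood of a message through robot-to-robot links."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for neighbor in links.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return False

# spider-3 is too deep to hear the surface antenna directly, but its
# message hops through spider-2 and spider-1 to get out.
links = {
    "spider-3": ["spider-2"],
    "spider-2": ["spider-1", "spider-3"],
    "spider-1": ["antenna", "spider-2"],
}
reachable = can_reach(links, "spider-3", "antenna")   # True
```

Real mesh protocols add routing, retries and congestion control, but the core idea is exactly this: any connected path through the swarm is enough to deliver the data.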

My bag of holding

From time to time, I pull something out of my bag and folks wonder just what I carry in it.  So here are the contents of the bag I carry with me nearly everywhere.  It is my home, commute, work, disaster and COOP bag.

[Photo: the contents of my everyday laptop bag]

I start with a Timbuk2 messenger bag.  Unlike most messenger bags that are horizontal (wider than tall), this bag is vertical (taller than wide).  It is TSA friendly too with a separate laptop compartment.  I looked at their site and it might not be made anymore.

Top left of the table is a USB clip extender.  This handy doodad clips the USB aircard to the top of the monitor for better signal reception.

Two power supplies for the laptop.  The left one is 12v DC and the right one is 110v 90w.

The silver thing in the middle of the top row is Imodium.  Because when it is needed, it is needed right away.

Two standard micro-USB chargers.  All my USB chargeable devices are standardized on the micro USB.

Near the standard chargers are two USB blocks.  The bullet-shaped one is 12v to USB.  The square one is 110v to USB.

The far right of the table is a couple micro USB cables, and one mini USB cable.

The next row starts with the business cards.  Note the high-quality business card holder.

Pens, assorted.  One of those is really a pencil.

Nail clippers.  Also cuts cable ties, hanging threads, and anything else that needs a nip.

USB aircard.  This one happens to be a 4G Verizon card.

Surge protector.  Three 110v outlets and two USB outlets.  Handy when the hotel or airport only has space for one plug.  The short extension cord goes with this so the other wall outlet isn’t blocked.

Bluetooth mouse.  There is only so much of a touch pad one can tolerate.  Honestly though, my wife uses it more than I do.

USB sticks.  The black one is an IronKey for sensitive data.  The other two are for file movement only.  I don’t store data on unprotected USB sticks.  Risk of theft/loss is too great.  IronKey moved all the secure USB products to Imation.

Finally, the bottom left of the picture: headsets.  Two are simple listen only.  One has a microphone for phone calls.  They break or get lost so often, that I keep stashing more in the bag.

Not shown:

Extra laptop battery

Note pad … I mean paper, not electrified or anything.



Satellite Comms and Antennas

Satellite Communications

Satellites provide a valuable link during disasters since they require no local terrestrial infrastructure beyond where you are setting up.  Cell phones require cell towers within a few miles that are working and not overloaded.  Wireline services require a connection through the disaster area to reach you.  Satellite systems do require a power source.  Depending on the size, that can be a vehicle’s 12-volt power outlet, a portable generator or a vehicle-mounted generator.

A satellite is in orbit around the Earth.  There are many different ways to position a satellite in orbit depending on the need.  A common orbit for communication satellites is a geostationary orbit 22,236 miles above the Earth.  Precision is needed when working with satellites at that distance: one degree off, and the satellite will be missed by 388 miles.  That is like aiming to land in Washington DC and actually ending up in Detroit or Boston.
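The 388-mile figure falls out of simple arc-length geometry: the miss distance is the distance to the satellite times the pointing error in radians.  A quick sketch to check it:

```python
import math

GEO_DISTANCE_MILES = 22_236   # geostationary altitude, from the text

def miss_distance(error_degrees, distance=GEO_DISTANCE_MILES):
    """Arc-length miss (same units as distance) for a small pointing error."""
    return distance * math.radians(error_degrees)

one_degree_miss = miss_distance(1.0)   # roughly 388 miles, as quoted above
```

The same formula shows why even a tenth of a degree of antenna misalignment, about 39 miles at that distance, can be the difference between hitting and missing a satellite’s receiver.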


The antenna used makes a big difference.  Let’s start by looking at a two-way radio antenna.  Most handheld two-way radios have an omni-directional antenna, meaning it doesn’t favor any specific direction, so orientation doesn’t matter as much.  This gain in flexibility comes with a loss in “punch,” or sending power.  Imagine a basic light bulb in a lamp with no shade.  It spreads light everywhere.  That’s how an omni-directional antenna works.

What if you want that light to be focused to only project sideways?  Like an all around white light on a boat.  The bulb and the lens are constructed to direct the light in a specific pattern.  This is similar to a high gain antenna.  The main punch of the radio signal is increased perpendicular to the antenna by reducing the energy projected parallel out the top and bottom of the antenna.

Now you want to project light in a single focused direction, such as a spotlight or flashlight.  The bulb is constructed with reflectors and other features to direct the light.  The same is true with a radio antenna.  A directional antenna is also called a beam or Yagi-Uda antenna.  Most of your neighborhood roof-mount TV antennas take this form.  A series of metal rods directs the radio waves into a tighter focus than a single rod can.

Wait, a TV antenna?  I thought those were receiving only?  The neat thing about antennas is that they receive with the same characteristics as they transmit.  A highly directional antenna is more sensitive and will pick up a signal from the direction it is pointed better than an omni-directional antenna.  However, if the same signal comes from a different direction than where the directional antenna is pointed, the omni-directional antenna will receive it better.

So why don’t we always use directional antennas?  Think back to two-way radio repeaters.  The repeater’s antenna is an example of when you want to broadcast the signal widely.  Directional antennas are good for communications between two known locations.  Omni-directional antennas are good when you don’t know where the other location is, or it keeps changing and moving the antenna continually is impractical.

Going back to our analogy of light for radio waves, now imagine that you need a highly focused light: a laser pointer, designed to send out a tight beam that can be seen over long distances.  The radio version of this is the satellite dish.  The transmitter bounces the signal off a parabolic reflector, which in theory sends all the energy in the same direction in a narrow beam.  These very narrow-focus antennas are called “very small aperture terminals” (VSAT).
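Rough numbers show why the dish is the laser pointer of antennas: a parabolic reflector’s gain grows with the square of its diameter over the signal’s wavelength.  The standard approximation is G = η(πD/λ)².  The 60% efficiency and the 1.2 m Ku-band example below are typical textbook assumptions, not figures from the text.

```python
import math

def dish_gain_db(diameter_m, freq_hz, efficiency=0.6):
    """Approximate gain of a parabolic dish antenna, in decibels."""
    wavelength = 3.0e8 / freq_hz              # speed of light / frequency
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain)

# A 1.2 m dish at 12 GHz (a common Ku-band frequency) comes out around
# 41 dB of gain; a handheld radio's whip antenna manages only a few dB.
vsat_gain = dish_gain_db(1.2, 12e9)
```

Doubling the dish diameter adds about 6 dB, four times the power in the beam, which is why satellite uplinks use the largest dish that is practical to transport and aim.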

This series of examples only discusses the shape of the antenna relative to the direction and focus of the radio waves.  It is possible to use practically any frequency with any shape of antenna, so long as the antenna is properly tuned.  Different radio bands are naturally more efficient for certain communication modes when combined with certain types of antennas.

The important thing to remember here is just because you can do something doesn’t mean you want to do it.  This is where you need to rely on your radio technicians to design the most effective system using the right frequencies and modes to get the message to the final destination.




Getting a little more technical on satellites

You’ll probably want to start with the entry on satellite communication and antennas before this one.


Orbits are grouped into four levels based on altitude.  Low Earth Orbit runs from about 100 miles to 1,240 miles up.  The short distance allows low-power devices with omni-directional antennas to be used; the most common example is satellite phones, like Iridium.  These satellites circle the Earth in about 90 minutes, so from any spot on Earth, a given satellite is only visible overhead for about 10 minutes.

Being visible doesn’t mean seeing the satellite with your naked eye.  Visible means a direct line-of-sight view so communications can occur.  It also assumes a wide-open sky, such as in Montana or on the open ocean.  Clutter blocking the sky reduces satellite visibility; this includes hills, mountains, trees, buildings and other obstructions, or being in a low spot like a valley.  It is nearly impossible to use satellite equipment at ground level in downtown New York City due to all the buildings.

Medium Earth Orbits run from 1,240 miles to just under 22,236 miles.  At these altitudes, orbital periods stretch to several hours or more, extending overhead visibility to a couple of hours or longer.  GPS satellites operate in this orbit, circling the Earth in about 12 hours.

Geosynchronous Orbits hold satellites at 22,236 miles.  It takes a full day to orbit the Earth, so the satellite appears in the same spot of the sky at the same time each day.

Geostationary Orbits are geosynchronous orbits directly above the equator.  Since the satellite is moving at the same rate the Earth rotates, and in the same plane as the rotation, it stays in the same spot of the sky all the time.  This is the most popular orbit in which to park a communications satellite.

High Earth Orbits are satellites above 22,236 miles.  They are not commonly used for our purposes.
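The periods quoted above all fall out of Kepler’s third law: T = 2π√(a³/μ), where a is the orbit radius measured from the Earth’s center.  Here is a minimal Python sketch as a sanity check; the altitudes in the loop are approximate figures (Iridium actually sits near 485 miles, which works out closer to 100 minutes than 90):

```python
import math

MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_MI = 3959    # mean Earth radius, miles
MILES_TO_M = 1609.344

def orbital_period_hours(altitude_miles):
    """Period of a circular orbit at the given altitude, via Kepler's third law."""
    a = (EARTH_RADIUS_MI + altitude_miles) * MILES_TO_M   # orbit radius in meters
    return 2 * math.pi * math.sqrt(a ** 3 / MU) / 3600

for label, alt in [("LEO (Iridium, ~485 mi)", 485),
                   ("MEO (GPS, ~12,550 mi)", 12550),
                   ("Geostationary (22,236 mi)", 22236)]:
    print(f"{label}: {orbital_period_hours(alt):.1f} hours")
```

The geostationary altitude comes out to a period of just under 24 hours (one sidereal day), which is exactly why a satellite parked there appears fixed in the sky.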

Footprints, beams and look angles

A satellite’s footprint is the circular area on the surface of the Earth that is visible to the satellite.  This is the potential coverage area the satellite can communicate with.  Areas directly under the satellite receive a stronger signal than those on the fringes, due to the increased distance and atmospheric interference at the edges.
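The size of that footprint follows from simple geometry.  Here is a rough Python sketch; the spherical-Earth model and the 0° default horizon are simplifying assumptions:

```python
import math

EARTH_RADIUS_MI = 3959   # mean Earth radius, miles

def footprint(altitude_miles, min_elevation_deg=0.0):
    """Return (footprint radius along the surface in miles, fraction of
    Earth's surface covered) for a satellite at the given altitude.
    min_elevation_deg is the lowest usable look angle (0 = the horizon)."""
    e = math.radians(min_elevation_deg)
    ratio = EARTH_RADIUS_MI / (EARTH_RADIUS_MI + altitude_miles)
    # Earth-central angle from the sub-satellite point to the coverage edge
    theta = math.acos(ratio * math.cos(e)) - e
    return EARTH_RADIUS_MI * theta, (1 - math.cos(theta)) / 2

radius, fraction = footprint(22236)   # geostationary altitude
print(f"radius ~{radius:.0f} mi, ~{fraction:.0%} of Earth's surface")
```

A geostationary satellite can see roughly 42% of the Earth’s surface down to the horizon; insisting on a practical minimum elevation of 5–10° shrinks that coverage noticeably.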

Satellite operators want efficient use of their equipment so they use beams, or specifically focused transceivers, to cover areas within the footprint.  Imagine a satellite positioned above the equator roughly centered on the United States.  The satellite operator’s intended audience is maritime users.  They would focus the beams toward the waters of the Atlantic, Pacific, Gulf and Great Lakes, and away from inland areas, knowing that there are few oceangoing freighters in Colorado.  No energy is wasted trying to fill a part of the satellite’s footprint where it will never be used.  When evaluating a satellite for use, you need to consider the beams and not the total footprint.

The more transceivers a satellite has, the more traffic it can handle simultaneously.  Satellite operators rate their satellites by the total cumulative traffic they can handle simultaneously across the entire satellite.  The other factor when evaluating service is how much traffic a specific beam can handle.  In normal daily use, it is hard to overload a single spot beam because the users are geographically dispersed.  A catastrophic disaster brings many of those users into a single geographic location, all trying to use the same beam.  That is when the beam will be overloaded.

Outages and overloads on satellite services are common during major hurricanes such as Katrina, Rita, Gustav and Ike.  This is most common on shared satellite services.  Consider the satellite user density that occurs with the convergence of local, state and Federal responders; media and observers; utility companies restoring service; private companies executing continuity-of-operations (COOP) plans; and NGOs, CBOs and FBOs responding as well.  Many of these rely on some form of satellite service.  Paying for dedicated satellite airtime is quite costly, especially when it is only used occasionally.  Now that a disaster has occurred, they all want to use it at the same time.  In many ways, a satellite in orbit is similar to a cell tower: both are designed to maximize revenue efficiently for normal use, and extreme circumstances quickly exceed the designed capacities.

Finding a satellite in the sky is done through look angles.  These measurements are unique based on the observer’s location.  With geostationary satellites, the look angles will remain constant so long as the observer’s location remains constant.  A look angle is made up of three parts: the azimuth, elevation and polarization.  The azimuth is the compass direction (0-360°).  The elevation is how high to look up (0-90°).  Polarization is rotating the transmitter to align the radio waves with the satellite.

Here’s an exercise.  Imagine that we are using Intelsat’s G-18 satellite located at 123° West.  This is a geostationary satellite so we know that it will be above the equator and 22,236 miles up.  123° West is near the California coast.  If we are in San Francisco, the azimuth would be 180° and the elevation 46°.  The higher the elevation, the easier it is to clear trees and other obstacles.  Move to St. Thomas, USVI; the azimuth is 258° and the elevation is 22°.  St. Thomas is an island with hilly peaks in the middle, so a satellite shot is unlikely from the NE side of the island due to the low look angle.  Change our location again to Boston; the azimuth becomes 242° and the elevation drops to 19°.  The same situation occurs in Maine, where the elevation is very close to the horizon.  We were lucky during an operation in Maine and set up headquarters at a military airbase that had a runway near the same angle we needed.
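The numbers in that exercise can be reproduced with standard spherical-Earth geometry.  Here is a Python sketch; the city coordinates are my approximations, not values from the original exercise:

```python
import math

EARTH_RADIUS_KM = 6378.137
GEO_ORBIT_RADIUS_KM = 42164.0   # geostationary distance from Earth's center

def look_angles(obs_lat, obs_lon, sat_lon):
    """Azimuth and elevation (degrees) from an observer to a geostationary
    satellite.  Coordinates are in degrees; west longitudes are negative."""
    lat = math.radians(obs_lat)
    dlon = math.radians(obs_lon - sat_lon)
    r = EARTH_RADIUS_KM / GEO_ORBIT_RADIUS_KM
    # beta is the Earth-central angle to the sub-satellite point
    cos_beta = math.cos(lat) * math.cos(dlon)
    elevation = math.degrees(math.atan2(cos_beta - r, math.sqrt(1 - cos_beta ** 2)))
    # Azimuth measured clockwise from true north
    azimuth = (180.0 + math.degrees(math.atan2(math.tan(dlon), math.sin(lat)))) % 360.0
    return azimuth, elevation

# The locations from the exercise, aimed at Intelsat G-18 (123° West):
for city, lat, lon in [("San Francisco", 37.77, -122.42),
                       ("St. Thomas",    18.34,  -64.93),
                       ("Boston",        42.36,  -71.06)]:
    az, el = look_angles(lat, lon, -123.0)
    print(f"{city}: azimuth {az:.0f}°, elevation {el:.0f}°")
```

Running this lands within a degree or so of the figures quoted above; the small differences come from rounding the coordinates.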

Bands and frequencies

Just as two-way radios have a number of different bands with different characteristics, so do satellites.  Satellites operate at higher frequencies than two-way radios.  A number of these frequencies are shared with terrestrial services.  For example, the S band (2-4 GHz) includes satellite radio (Sirius) as well as Wi-Fi and Bluetooth signals.

Inmarsat BGANs operate on the L band (1-2 GHz).  The terminals are small, easy to point and offer decent global coverage.  The major issues with BGANs are the low data throughput (32 to 256 kbps) and the high cost.

C-Band (3.7-8 GHz) is well known for the “direct to home” TV signals using larger dishes of 2 – 3½ meters in diameter.  The downside is that C-band has power restrictions and receives interference from microwave services.

Ku-band (12-18 GHz) doesn’t have the power restrictions of C-band and is used by the DirecTV system.  The main challenge of Ku-band is its nearness to a resonant frequency of water.  Water absorbs radio waves near that frequency, reducing the strength of the signal.  This is commonly called rain fade.  If you have DirecTV, you’ve experienced this when your signal goes out during heavy rain storms.  The absorption peaks at 22.2 GHz.  For non-technical purposes, think of it this way: the subscript u represents being under this peak, and the subscript a of the Ka-band represents being at or above it.

These characteristics are important considerations depending on how the satellite service will be used.  A Ku-band service will not help you communicate during the storm, but it will have the fastest speeds before and after it.  C-band could work in the storm, but the size of the dish makes portability unlikely and temporary setups risky in high winds.  L-band will get through nearly all the time, but only at relatively slow speeds and high cost.
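Those tradeoffs fit naturally into a small lookup table.  Here is a sketch; the rain-fade labels are my coarse generalizations from the discussion above, not official specifications:

```python
# Quick-reference summary of the band tradeoffs discussed above.
BANDS = {
    "L":  {"freq_ghz": (1, 2),   "rain_fade": "minimal",
           "notes": "BGAN terminals; slow (32-256 kbps) and costly, but weatherproof"},
    "C":  {"freq_ghz": (3.7, 8), "rain_fade": "low",
           "notes": "large 2-3.5 m dishes; power limits, microwave interference"},
    "Ku": {"freq_ghz": (12, 18), "rain_fade": "significant",
           "notes": "small dishes and fast speeds, but drops out in heavy rain"},
}

def usable_in_storm(band):
    """Rough screen: can this band be counted on during heavy rain?"""
    return BANDS[band]["rain_fade"] in ("minimal", "low")

print([b for b in BANDS if usable_in_storm(b)])   # → ['L', 'C']
```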


Additional resources


Satellite technology as reliable backup?

In reading Andy Opsahi’s article Satellite Technology Provides Disaster Communications When Cell Towers Fail at http://www.emergencymgmt.com/disaster/Satellite-Technology-Provides-Disaster.html, I was at first heartened by the statement:

Emergency managers know that having foolproof disaster communications plan is nothing more than fantasy.  That’s because even the most redundant backup strategies can leave responders unable to communicate.

Unfortunately, the article, which appears biased toward the positives of satellite, misses two major drawbacks of satellite communications: the need for a clear view of the sky, and spot beam capacity.  That isn’t surprising, as both are frequently overlooked.  Andy was dead on, though, when he said it was expensive.

Continue reading Satellite technology as reliable backup?

Is rugged equipment worth the cost?

A lot of vendors assume that if you respond to disasters, you need ruggedized equipment.  They must have a picture in their heads of my colleagues heading into a disaster zone with satellite phone in hand, military-spec ruggedized laptop under the arm and BGAN in the backpack, intending to sit down in the mud and rain to work.  Truth of the matter is that the answer is simply “it depends.”  And I hear the collective groan from everyone reading this that “simply” and “it depends” should never be used together in the same sentence.

Continue reading Is rugged equipment worth the cost?