So much information about information

Information.  There is a lot of information about information.  Effective use of information is expected, yet in my experience there are always “day after experts” who will tell you it could have been done better.

This is really the overarching question about information.  Craig Fugate has noted on a number of occasions: every EOC has the weather radar or satellite image of a hurricane making landfall.  How many people in the EOC do you think can actually read that image?  Much of what you see in an EOC is eye candy, part of the “theater of disaster” put on for visitors.

High-tech EOCs and pretty graphics are useless unless they help someone make a decision. – Craig Fugate

Information sources include historical records, personal experience, first responders, local and state EOCs, media (traditional and new), remote sensing, and the general public (both inside and outside the event area).

Imagine if all smartphones had sensors (like GPS, microphones, photo, video, and text capability) that people could use to upload multimedia information to first responders.

Oh wait.  They do.

Gathering and analyzing data

The challenge is in the gathering and analyzing of that data.  That is where the information hierarchy, or “DIKW,” model comes in to help make sense of it.

  • Data is a simple, specific fact.  Alone it doesn’t do anything and doesn’t have much value.  Data needs to be built on.  Example: There are 1,000 people in a shelter.
  • Information is data processed to be relevant and meaningful.  It adds the “what” element.  Example: There are 1,000 people in a shelter and no toilets.
  • Knowledge is information combined with opinions, skills and experience.  It adds the “how to” element.  Example: 1,000 people in a shelter will need 75 toilets.
  • Instead of Wisdom, I add Action.  Knowledge is good, but putting that knowledge to action is better.  Example: We need to get 75 toilets for the 1,000 people in the shelter.

Imagine being asked three questions: “So what’s the problem?”; “What would fix it?”; “How will you do it?”  When you answer these questions, you are loosely following the DIKW model to turn data into action.
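To make the hierarchy concrete, here is a minimal sketch in Python that walks the shelter example from data to action.  The toilet ratio comes straight from the example above and is illustrative only, not an actual planning standard.

```python
# A minimal sketch of data -> information -> knowledge -> action, using the
# shelter example above.  The 75-toilets-per-1,000-people ratio comes from the
# example and is illustrative, not an actual planning standard.

import math

TOILETS_PER_PERSON = 75 / 1000   # knowledge: experience says 1,000 people need ~75 toilets

def shelter_action(population: int, toilets_on_hand: int) -> str:
    """Turn a raw headcount (data) into a recommended action."""
    needed = math.ceil(population * TOILETS_PER_PERSON)   # apply the knowledge
    shortfall = max(0, needed - toilets_on_hand)          # information: what is missing
    if shortfall == 0:
        return "No action needed: sanitation is adequate."
    return f"Action: get {shortfall} toilets for the {population} people in the shelter."

print(shelter_action(population=1000, toilets_on_hand=0))
# -> Action: get 75 toilets for the 1000 people in the shelter.
```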

Information processing steps

All data systems, including you, will follow basic steps to process data.  These steps can be very simple or increasingly complex depending on the needs.

Project management folks will recognize a critical step that is assumed before these can occur: the requirements.  What are your information needs?  What is the goal that you’re trying to achieve with this work?

  • Collection: gathering and recording information
  • Evaluation: determining confidence (credibility, reliability, validity, relevance)
  • Abstracting: editing and reducing information
  • Indexing: classifying for retrieval
  • Storage: physically storing the information
  • Dissemination: getting the right information to the right people at the right time

Many systems are good at the collection of data.  That’s usually the easy part.  Finding the right computer algorithms to handle everything from evaluation through indexing is the hard part.  Google is an example of a company that has gotten this down so well that it became the top company for internet searches.
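As a rough illustration of those six steps, here is a minimal sketch that runs a couple of made-up field reports through a toy pipeline.  The report format and confidence scores are invented for the example.

```python
# A toy walk-through of the six steps using two made-up field reports.
# The report format and the confidence scores are invented for illustration.

reports = [  # Collection: gather and record what comes in
    {"source": "first responder", "text": "Bridge on Route 9 impassable", "confidence": 0.9},
    {"source": "social media",    "text": "Bridge out near Route 9?",     "confidence": 0.4},
]

def evaluate(report):          # Evaluation: keep only reports we trust enough to act on
    return report["confidence"] >= 0.5

def abstract(report):          # Abstracting: reduce the report to its essential fact
    return report["text"]

def index(fact):               # Indexing: classify the fact so it can be retrieved later
    return {"category": "infrastructure", "fact": fact}

stored = [index(abstract(r)) for r in reports if evaluate(r)]   # Storage

for record in stored:          # Dissemination: push it to whoever needs it
    print(f"[{record['category']}] {record['fact']}")
```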

Next: Databases, data sets and metadata

 

Additional resources

Cyber-security and disasters

More and more systems are being connected to share information, and IP networks provide a very cost-effective solution.  One physical network can be used to connect many different devices.  The water company can use a computer interface to control the water pumps and valves at treatment plants and throughout the distribution system.  The natural gas and electric providers can do the same.  Hospitals connect medical devices throughout the facility to central monitoring stations.  A few people in one room can watch all the ICU patients.  Fire departments, law enforcement and EMS can use a wireless network to communicate, dispatch units, provide navigation, and track vehicle telematics to manage maintenance cycles.

Not every network needs to lead to the internet; however, a fully isolated network is rare and needs to be specifically planned when the system is being designed.  Having a physically separate system does provide the best security if all the data is being kept internal to that network.  Remember that internal-only networks are still subject to security issues from internal threats.

Any network or device that does have an internet connection is subject to external attacks through that connection.  A malicious hacker can break into the water treatment system and change the valves to contaminate drinking water.  They could open all the gates on a dam, flooding downstream communities.  They could reroute electrical paths to overload circuits or shut down other areas.  They could change the programming so dispatchers are sending the farthest unit instead of the nearest, or create false dispatch instructions.

Cyber attacks can disable systems, but they can also create real-world disasters.  First responders are trained to consider secondary devices during intentionally started emergencies.  What if that secondary device is a cyber attack, or a cyber attack precedes a real event?  During the September 2001 attacks in New York City, a secondary effect of the plane hitting the tower was the crippling of the first responders’ radio system.  Imagine if a cyber attack were coordinated with the plane’s impact.  The attackers could turn all traffic lights to green, which could cause traffic accidents at nearly every intersection.  This would snarl traffic and prevent the first responders from getting to the towers.

A quick side note on the use of the term hacker.  A hacker is anyone who hacks together a technical or electronics solution in an uncommon way.  I explain it as “MacGyver’ing” a solution.  There is no positive or negative connotation in the term used that way.  Hacker also describes a person who breaks into computer systems by bypassing security.  A more accurate description is calling them a cracker, like a safe cracker.  This type of hacker is divided into criminals (black hats) and ethical hackers (white hats).  Ethical hackers are people who test computer security by attempting to break into systems.

By now, you’re probably aware of the Anonymous hacker group.  They have been collectively getting more organized and increasing in actions that drive toward internet freedom since 2008.  Often they’re called “hacktivists” meaning they hack to protest.  There are many more malicious hackers out there with different agendas: status, economic, political, religious … any reason people might disagree could be a reason for a hacker.

Somewhere on the internet is a team of highly trained cyber ninjas that are constantly probing devices for openings.  They use a combination of attack forms including social engineering (phishing) attacks.  Automated tools probe IP addresses in a methodically efficient manner.  The brute force method is used to test common passwords on accounts across many logins.  Worms and Trojans are sent out to gather information and get behind defenses.  Any found weaknesses will be exploited.

Pew Internet reports that 79% of adults have access to the internet and two-thirds of American adults have broadband internet in their home.  The lower cost of computers and internet access has dramatically increased the number of Americans online.  The stand-alone computer connected to the internet has forced the home user into the roles of system administrator, software analyst, hardware engineer, and information security specialist.  They must be prepared to stop the dynamic onslaught of cyber ninjas, yet are only armed with the tools pre-loaded on the computer or off-the-shelf security software.

Organizations are in both a better and a worse position.  The enterprise network can afford full-time professionals to ensure the software is updated, the security measures meet the emerging threats, and professional resources exist to share information with peers.  Enterprise networks are also a larger target, especially for hackers looking to increase their online reputations.

On Disasters

During a disaster, there will be many hastily formed networks.  The nature of rushed work increases the number of errors and loopholes in technical systems.

During the Haiti Earthquake response, malware and viruses were common across the shared NGO networks.  The lack of security software on all of the laptops created major problems.  Some organizations purchased laptops and brought them on-scene without any preloaded security software.  Other organizations hadn’t used their response computers in over a year, so no recent security patches to the operating systems or updates to the anti-virus software had been applied.  USB sticks moved data from computer to computer, bypassing any network-level protections.  The spread of malware and viruses across the network caused problems and delays.

There are a number of key factors when designing a technology system that will be used in response that differ from traditional IT installations.  One of the most important considerations is a way for the system to be installed in a consistent manner by people with minimal technical skills.  Pre-configuration will ensure that the equipment is used efficiently and in the most secure manner.

 

Additional Resources

Understanding networking

As a manager, it is not your responsibility to know how to configure a router and make things work in the network.  The best way to think about networking is the “black box” theory: you really don’t care how the individual parts work, you just need to know what they are capable of.  Believe it or not, networking is really simple.

In its simplest form, a network is a few computers connected by wire to a network device that shares information among them.  A network is similar to a big post office that is sharing information packets electronically.  The computers each have a unique name (address) that helps the network devices know which information goes to which computer.

The internet is an IP-based network.  IP stands for Internet Protocol.  Easy, huh?  The Transmission Control Protocol is the way that computers break up large data chunks to send across the internet.  Stick the two together and you get the commonly referenced TCP/IP.  There are other forms of message handling, such as the User Datagram Protocol (UDP), to move information across the internet.  You don’t need to know how these work or move information.  Just know that IP is the backbone of the internet.
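If you are curious what “data traveling as TCP/IP” looks like in practice, here is a minimal sketch using Python’s standard socket library: one side listens on an IP address and port, the other connects and sends bytes, and the operating system handles packetizing and reassembly.  The address, port number, and message are arbitrary choices for the example.

```python
# A minimal sketch of bytes moving over TCP/IP on one machine: a tiny "echo"
# server on the loopback address, and a client that sends it a message.

import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))   # an IP address and port identify the endpoint
server.listen(1)

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # send whatever arrived straight back

threading.Thread(target=echo_once, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", 5000))
    client.sendall(b"status report from the EOC")     # TCP splits and reassembles as needed
    print(client.recv(1024))                          # b'status report from the EOC'

server.close()
```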

Any data that you can turn into an IP packet can travel over an IP network; that data can also travel across local networks and the internet.  When a phone converts voice to IP packets, it is called a Voice over IP (VoIP) phone, meaning that it can send your phone call over the same network as email, web browsing, and everything else.

Blah-blah over IP is nothing fancy.  That means that someone has designed a network device (or interface) that translates information from a source to an IP-packet, and back.  You’ll hear about Radio-over-IP, Video-over-IP, Computer-over-IP, and just about everything else.

Data standards are really important in this area.  When each vendor comes up with its own way of doing ____-over-IP, those vendors are unlikely to be compatible unless they use a standard.  While there are organizations that publish standards, a standard’s true usefulness is proven by whether and how people use it.

The International Telecommunications Union (ITU) has a series of standards for videoconferencing, including ITU H.320 and H.264.  When Cisco Telepresence was released, it was designed to bring a meeting-room presence to teleconferencing.  Part of the design was full-size displays that blended both conference rooms.  It was not compatible with any other video conferencing systems.  The Cisco sales rep explained to me that their product would look poor if it was used with lower-quality non-telepresence systems, so the decision was made to use a non-standard data format.  The problem with this is that it would require companies to invest in two separate video conferencing systems.  More recent advances have allowed some mixed use of video conferencing systems.

Now that we’ve talked a bit about what can go across the network, let us turn back to the network itself.

There are many different formats of networks.  A quick internet search on “network topology” will show the different forms.  Each has an advantage and a disadvantage.  For this course, the focus will be on a tree topology.  An internet connection enters a site through one point.  Switches and routers are used to split that internet connection to all the individual computers.

A demarc (short for demarcation) point is where a utility enters a building.  It is also the point that separates ownership between the utility company and the building owner.  The electrical demarc in a residential home is commonly the electric meter.  The power company will handle everything up to and including the meter.  The home owner handles everything from the meter to the power outlets.

A telephone demarc is located at the telephone network interface.  The network demarc is located at the network interface device (aka smartjack).  These can be located anywhere in a building, but I’ve found that most wireline utilities come in together.  These can be copper wire, fiber optic or some other type of cable.

The demarc is the head of the network for that site.  In a tree topology, this is where the site’s primary router would be located.  A router is a network device that moves data packets between two different networks.  Here, the router is directing the packets, only passing those that need to travel on the other network.  It is ideal for separating two networks to reduce congestion by keeping local data within the local network.  A primary router, sometimes called a site’s core router, is the one that controls the other routers and is mission-critical for the site to be connected.

Routers are the major component that give a network flexibility.  Professional (non-consumer) grade routers allow for the installation of modules, both physical and logical.  These modules connect the router to different devices.  These modules commonly allow a router to connect to a wireline (T1, T3, etc) circuit, a wireless (wifi, cell) circuit, or a different cabling (twisted pair, coax).  These modules can also be used to connect a router to a phone system, radio system, video system and so on.

Other network devices used to spread network segments out from the router include switches and hubs.  Switches can have different interfaces and be used to connect different network types.  This is handy in older buildings where you may need to keep an existing style of network and overlay it with a different type of cabling or connections.  Hubs are essentially non-intelligent splitters that just provide more ports.

The Warriors of the Net video provides an entertaining explanation of the different components.  Again, from a manager’s perspective, you do not need to get very technical with the network components.

Autonomous systems and robotics

An autonomous system is a system with a trigger that causes an action without the involvement of a person.  Two well known examples are tsunami buoys and earthquake sensors.  These continuously monitor a series of sensors.  When the sensors register a reading that exceeds a predefined threshold (the trigger), a signal is sent to a computer to generate warning alerts (the action).  These systems can range from the very simple to the extremely complex.
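As a minimal sketch of that trigger-and-action pattern, the following toy example checks sensor readings against a threshold and produces an alert.  The readings and the threshold are made up for illustration; a real warning system involves far more validation.

```python
# A minimal sketch of the trigger-and-action pattern described above.

THRESHOLD_CM = 50   # the trigger: a sea-level change big enough to warrant a warning

def check_sensor(readings_cm):
    """Return an alert message (the action) when any reading exceeds the trigger."""
    for reading in readings_cm:
        if reading > THRESHOLD_CM:
            return f"ALERT: reading of {reading} cm exceeds the {THRESHOLD_CM} cm threshold"
    return None   # nothing exceeded the trigger, so no action

print(check_sensor([3, 7, 12, 81]))
# -> ALERT: reading of 81 cm exceeds the 50 cm threshold
```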

One could argue that the backup sensors on cars are an autonomous system: the trigger is the driver shifting into reverse; the computer turns on the backup lights, activates the camera, turns on the internal screen, and activates the sensors to start beeping.  I think it is a bit of a stretch, but it does provide a simple example.

The other side of this spectrum is the Mars rovers.  Due to the time delay in communications between Earth and Mars, the rover cannot be directly controlled.  NASA gives a general command for the rover to head to a new destination.  The rover acts independently to drive over the terrain, and makes decisions to avoid obstacles.  On board the rover, the Autonomous Exploration for Gathering Increased Science (AEGIS) software, part of the Onboard Autonomous Science Investigation System (OASIS), automatically analyzes and prioritizes science targets.  This allows more work to be done without human intervention.

Somewhere between the two is the Roomba.  This autonomous vacuum moves around the house cleaning up.  It will learn the layout of a room to be more effective in future runs.  When complete, it docks to recharge.  At a set time interval, it does this again.  Now don’t laugh about the Roomba; it is a very commonly hacked robot for people interested in DIY robotics.  Microsoft Robotics Developer Studio software has built-in modules specific to controlling the Roomba.  That gives mobility and a way to control it.  Microsoft has also released modules that adapt the Kinect sensor to sense the environment.  This addition provides vastly better sensing than the traditional range-finder and light sensors.  Microsoft isn’t the only player in this market; Lego Mindstorms markets robotics as a toy for children ages eight and up.  Robotics isn’t just for engineering students in high-end tech universities.

There is enough technology in existence today to make huge leaps in the use of robotics.  The main challenge of robotics is the acceptance by the general public.

Watch the videos about the DARPA autonomous car, Urban Search and Rescue robots, BigDog robotic transport and Eythor Bender’s demonstration of human exoskeletons.  Combine these and we can envision some major transformations.

Take the computational guts from the autonomous car and put them into a fire engine.  Now the fire engine can be dispatched and navigate to a fire scene independently.  Once on-scene, sensors can detect the heat from the fires, even if they are in the walls and not visible to the human eye.  Robots from the fire engine can be sent into the structure.  Heavier-duty robots can pull hoses and start to extinguish the fire.  Other robots can search the structure, assist survivors out, and rescue those unable to escape.  Communication systems will link together all the sensors on the robots to generate a complete common operating picture that all robots use in decision making.

A similar thing can be done with an ambulance.  Focus just on the stretcher.  Imagine if the stretcher would automatically exit the ambulance and follow the medic with all the necessary equipment.  The stretcher could be equipped to go up and down steep stairs, make tight turns and remain stable.  Automating the lifting and bending done by EMS workers handling patients on stretchers would reduce the number of back injuries caused by improper lifting.  This would keep more EMS workers in better health, which reduces sick leave and workers’ compensation claims.

Robotics in search and rescue could be the one thing that saves the most lives, both among the victims and the rescuers.  A building collapses and the SAR team responds.  On the scene, the workers set up specialized radio antennas at different points around the building site.  They release dozens, if not hundreds, of spider- or snake-like robots.  Each robot has the ability to autonomously move through the rubble.  They are light enough not to disturb the rubble the way a human would.  They are numerous enough to cover the building far more quickly than humans could.  The combined sensor data of their locations and scans would quickly build a three-dimensional model of the rubble.  They are aware of each other’s locations so they don’t bunch up in one area or miss another.  Heat, sound and motion sensors could detect people.  Once this initial scan is done, the SAR team will know where the survivors are and can communicate with them through the robots.  The team will evaluate the 3D model for the best entry paths to get to and rescue the survivors.  If the situation is unstable, larger robots can be used ahead of the team to provide additional structural support to reduce the risk of collapse.  If a robot can’t communicate with the antennas outside, the robots can do mesh networking to pass information along.

My bag of holding

From time to time, I pull something out of my bag and folks wonder just what I carry in it.  So here are the contents of the bag that I carry with me nearly everywhere.  It is my home, commute, work, disaster and COOP bag.

Picture of the contents of my everyday laptop bag.

I start with a Timbuk2 messenger bag.  Unlike most messenger bags that are horizontal (wider than tall), this bag is vertical (taller than wide).  It is TSA friendly too with a separate laptop compartment.  I looked at their site and it might not be made anymore.

Top left of the table is a USB clip extender.  This handy doodad clips the USB aircard to the top of the monitor for better signal reception.

Two power supplies for the laptop.  The left one is 12v DC and the right one is 110v 90w.

The silver thing in the middle of the top row is Imodium.  Because when it is needed, it is needed right away.

Two standard micro-USB chargers.  All my USB chargeable devices are standardized on the micro USB.

Near the standard chargers are two USB blocks.  The bullet-shaped one is for 12v to USB.  The square one is 110v to USB.

The far right of the table is a couple micro USB cables, and one mini USB cable.

The next row starts at the business cards.  Note the high quality business card holder.

Pens, assorted.  One of those is really a pencil.

Nail clippers.  Also cuts cable ties, hanging threads, and anything else that needs a nip.

USB aircard.  This one happens to be a 4G Verizon card.

Surge protector.  Three 110v outlets and two USB outlets.  Handy when the hotel or airport only has space for one plug.  The short extension cord goes with this so the other wall outlet isn’t blocked.

Bluetooth mouse.  There is only so much of a touch pad that one can tolerate.  Honestly though, my wife uses that more than I do.

USB sticks.  The black one is an IronKey for sensitive data.  The other two are for file movement only.  I don’t store data on unprotected USB sticks.  Risk of theft/loss is too great.  IronKey moved all the secure USB products to Imation.

Finally, the bottom left of the picture: headsets.  Two are simple listen-only.  One has a microphone for phone calls.  They break or get lost so often that I keep stashing more in the bag.

Not shown:

Laptop

Kindle

Extra laptop battery

Note pad … I mean paper, not electrified or anything.

 

 

Getting grounded in GIS: Map making

GIS starts with mapping.  Map making is cartography.  The earliest known map is dated 6,200 BC: a Babylonian clay tablet recovered in 1930 that depicted district boundaries, hills and water features.  Mapping helped to advance human knowledge because it was a new way to capture and share information with each other and future generations.  Passing information to future generations is a basis for developing culture.

Maps of the stars are also maps.  Cave paintings with dots on the walls that represent stars have been found dating as far back as 16,500 BC.  There are other cave drawings older than the 6,200 BC tablet that represent mountains and rivers.  It is hard to determine if these older drawings were meant to depict actual geographic features or were just drawings.

I often joke with GIS professionals that they just make pretty maps.  In reality, GIS professionals take a large three-dimensional object (known as the Earth), merge it with data sets through geo-coding, and then provide it to customers in an easy to understand two-dimensional image.  Computers that run GIS are often the most powerful systems in an office because of the large data sets, complex graphics, and significant calculations to perform.

Mapping’s inherent challenge has always been to take a three dimensional object and project it in two dimensions.  A map can provide information on area, shape, direction, bearing, distance and scale.  However, every projection distorts some properties to keep others accurate.  There is no two-dimensional projection that accurately captures all properties of a three-dimensional object.  It is important to select the best projection for the information that is needed.

How hard is it to convert a 3D object to a 2D representation?  Next time you are peeling an orange, try to get the peel off in a few large sections.  Flatten the orange peel.  You will see it stretch, twist and rip as it goes flat.

When mapping smaller areas, such as neighborhoods and cities, an assumption of flatness can be made.  It depends on the risk that inaccuracy poses for the map’s purpose.  A few yards of inaccuracy may be fine for consumer-level navigation.  Far less inaccuracy can be tolerated when landing airplanes in the middle of a runway.  Property boundary disputes may need accuracy to within a few inches.

Some historically powerful island nations may prefer a Mercator projection to give the appearance of a larger land mass.  Just as with statistics, a projection’s distortion can be used to enhance or diminish a feature.

The Mercator projection is commonly used in maritime navigation because a course of constant compass bearing (a rhumb line) appears as a straight line, making it easy to plot a heading.  The other “feature” of a Mercator projection is that size and shape distortion increases from the equator to the poles, where land masses appear significantly larger than they really are.

Why are they called projections?  Imagine a bare light bulb representing the Earth.  Take a piece of paper and hold it to the light.  The features of the Earth are projected on to the paper.  Holding the paper at different angles or shapes make different projections.  Draw a square, circle and triangle on the light bulb.  Now you’ll really be able to observe the distortion of these shapes.

The Mercator projection is a cylindrical projection aligned parallel to the Earth’s axis.  Maps can be referred to by the projection, or by the property that is accurately represented.  Projection shapes include cylindrical, conic or plane (flat).  Preserved properties include direction (azimuthal), local shape, area, distance, and shortest route.  There are hybrid maps that blend properties of different projections.  While no single property is kept perfectly accurate, the result “looks” better and more accurate overall.  Again, it all goes back to the purpose of the map.

Overlay of three projections to show distortions.
Overlay of two projections to show distortions.
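To see why Mercator exaggerates high-latitude land masses, here is a small sketch of the projection’s stretch factor, which grows roughly as 1/cos(latitude).  The place names are just convenient sample latitudes.

```python
# A small sketch of Mercator's stretch factor, roughly 1/cos(latitude).

import math

def mercator_stretch(latitude_deg: float) -> float:
    """Approximate linear exaggeration at a given latitude on a Mercator map."""
    return 1 / math.cos(math.radians(latitude_deg))

for place, lat in [("Equator", 0), ("Miami", 26), ("London", 51.5), ("Reykjavik", 64)]:
    print(f"{place:9s} {mercator_stretch(lat):4.2f}x")

# Near the poles the factor grows without bound, which is why Greenland looks
# comparable to Africa on many wall maps even though it is about 1/14th the area.
```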

There are a few terms used in mapping and GIS that you need to know.

Latitude (aka parallels): These are the horizontal lines on a map, parallel to the equator.  One way to remember: you give people “latitude” to get their job done, so they can move as far to the sides as they want, but that doesn’t include a promotion or demotion.  Latitudes are named from 0° at the equator to 90° North or South at the poles.  A 1° difference in latitude is just about 69 miles.  Except for the equator, a circle of latitude is not the shortest distance between two points.

Longitude (aka meridians): These are the vertical lines on a map, running north to south.  One way to remember: the Prime Meridian.  The Prime Meridian is 0°, passing through Greenwich, England.  The meridians run from there East or West to the International Date Line at 180°.  At the equator, 1° of longitude is about 69 miles, converging to 0 miles at the poles.

Great Circle: The shortest distance between two points occurs along a great circle.  A great circle cuts a sphere into two equal halves.  A great circle is the largest circle that can be drawn on a sphere.  All meridians are great circles.  The only parallel that is a great circle is the equator.

Nautical Mile: One minute of latitude along any meridian is a nautical mile.  One nautical mile = 1.85 kilometers = 1.15 miles = 6,076 feet.  Note that meridians are used because the distance along parallels changes.  The distance along a meridian is the distance between parallels.
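Putting those figures together, here is a small sketch of how far one degree takes you at different latitudes.  It uses the approximate 69-miles-per-degree figure from above rather than a precise geodesic model.

```python
# A small sketch of "how far is a degree?" using the ~69 miles-per-degree figure.

import math

MILES_PER_DEGREE_LAT = 69.0

def miles_per_degree_lon(latitude_deg: float) -> float:
    """Distance covered by 1° of longitude shrinks with the cosine of latitude."""
    return MILES_PER_DEGREE_LAT * math.cos(math.radians(latitude_deg))

for lat in (0, 30, 45, 60, 90):
    print(f"At {lat:2d}° latitude, 1° of longitude ≈ {miles_per_degree_lon(lat):4.1f} miles")

# One minute (1/60 of a degree) of latitude is one nautical mile, about 1.15 statute miles.
```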

 

GIS continues: How far would you walk for a degree change?

Satellite Comms and Antennas

Satellite Communications

Satellites provide a valuable link during disasters since they require no local terrestrial infrastructure beyond where you are setting up.  Cell phones require cell towers within a few miles that are working and not overloaded.  Wireline services require a connection running through the disaster area to reach you.  Satellite systems do require a power source.  Depending on the size, it can be a vehicle’s 12 volt power outlet, a portable generator or a vehicle-mounted generator.

A satellite is in an orbit around the Earth.  There are many different ways to position a satellite in orbit depending on the need.  A common orbit for communication satellites is a geostationary orbit 22,236 miles above the Earth.  Precision is needed when working with satellites at that distance.  One degree off and the satellite will be missed by 388 miles.  That is like aiming to land in Washington DC and really ending up in Detroit or Boston.
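The 388-mile figure is just arc length: the one-degree error converted to radians, multiplied by the 22,236-mile distance to the satellite.  A quick check:

```python
# The pointing error is arc length: one degree in radians times the distance.

import math

altitude_miles = 22_236
error_degrees = 1
miss_miles = altitude_miles * math.radians(error_degrees)
print(f"{miss_miles:.0f} miles off target")   # ~388 miles
```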

Antennas

The antenna used makes a big difference.  Let’s start by looking at a two-way radio antenna.  Most handheld two-way radios have an omni-directional antenna.  That means it doesn’t favor any specific direction so orientation doesn’t matter as much.  This gain in flexibility is matched with a loss in “punch” or sending power.  Imagine a basic light bulb in a lamp with no shade.  It spreads light everywhere.  That’s how an omni-directional antenna works.

What if you want that light to be focused to only project sideways?  Like an all around white light on a boat.  The bulb and the lens are constructed to direct the light in a specific pattern.  This is similar to a high gain antenna.  The main punch of the radio signal is increased perpendicular to the antenna by reducing the energy projected parallel out the top and bottom of the antenna.

Now you want to project light in a single focused direction, such as a spot light or flashlight.  The bulb is housed with reflectors and other features to direct the light.  The same is true with radio antennas.  A directional antenna is also called a beam or Yagi-Uda antenna.  Most of your neighborhood roof-mount TV antennas take this form.  A series of metal rods directs the radio waves to a tighter focus than a single rod would.

Wait, a TV antenna?  I thought those were receiving only?  The neat thing about antennas is that they receive with the same characteristics as they transmit.  A highly directional antenna is more sensitive and will pick up a signal from the direction it is pointed better than an omni-directional antenna.  However, if the same signal comes from a different direction than where the directional antenna is pointed, then the omni-directional antenna will receive it better.

So why don’t we always use directional antennas?  Think back to two-way radio repeaters.  The repeater’s antenna is an example of when you want to broadcast the signal widely.  Directional antennas are good for communications between two known locations.  Omni-directional antennas are good when you don’t know where the other location is, or it keeps moving and re-aiming the antenna continually is impractical.

Going back to our analogy of light to describe radio waves, now imagine that you need a highly focused light.  A laser pointer is designed to send out a highly focused beam of light that can be seen for long distances.  The radio version of this is the satellite dish.  The transmitter bounces the signal off a parabolic reflector, which theoretically sends all the energy in the same direction in a narrow beam.  These very narrow-focus antennas are called “very small aperture terminals” (VSATs).

This series of examples is just discussing the shape of the antenna relative to the direction and focus of the radio waves.  It is possible to use practically any frequency with any shape of antenna so long as the antenna is properly tuned.  Different radio bands are naturally more efficient for communication modes when combined with certain types of antennas.

The important thing to remember here is just because you can do something doesn’t mean you want to do it.  This is where you need to rely on your radio technicians to design the most effective system using the right frequencies and modes to get the message to the final destination.

 

Additional readings

 

Getting a little more technical on satellites

You’ll probably want to start with the entry on satellite communication and antennas before this one.

Orbits

Orbits are put into four levels based on altitude.  Low Earth Orbit is about 100 miles to 1240 miles.  The short distance allows low-power devices with omni-directional antennas to be used.  The most common example is satellite phones, like Iridium.  These satellites circle the Earth in 90 minutes.  From any spot on the Earth, the satellite will only be visible overhead for 10 minutes.

Being visible isn’t referring to seeing it with your naked eye.  Visible means a direct line-of-sight view so communications can occur.  It also assumes a wide-open sky, such as being in Montana or on the open ocean.  The more clutter blocking the sky, the less the satellite is visible.  This includes hills, mountains, trees, buildings and other obstructions, or being in a low spot like a valley.  It is nearly impossible to use satellite equipment in downtown New York City at ground level due to all the buildings.

Medium Earth Orbits are 1,240 miles to under 22,236 miles.  At this altitude, the satellites orbit the earth in four to six hours.  That extends the visibility overhead to about two hours.  GPS satellites operate in this orbit.

Geosynchronous Orbits are satellites at 22,236 miles.  It takes a full day to orbit the Earth so the satellite will appear in the same spot of the sky once a day.

Geostationary Orbits are satellites at 22,236 miles directly above the equator.  Since the satellite is moving at the same rate the Earth is rotating, and in the same plane as the rotation, the satellite stays in the same spot of the sky all the time.  This is the most popular orbit to park a satellite in.

High Earth Orbits are satellites above 22,236 miles.  They are not commonly used for our purposes.
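The orbit-period numbers above follow from Kepler’s third law: for a circular orbit, the period depends only on the orbit’s radius.  Here is a rough sketch that estimates the period from altitude; the constants are rounded, so treat the output as approximate.

```python
# A rough sketch of Kepler's third law for circular orbits around Earth.

import math

MU_KM3_S2 = 398_600        # Earth's standard gravitational parameter, km^3/s^2
EARTH_RADIUS_KM = 6_371

def period_hours(altitude_miles: float) -> float:
    a_km = EARTH_RADIUS_KM + altitude_miles * 1.609   # orbit radius for a circular orbit
    return 2 * math.pi * math.sqrt(a_km**3 / MU_KM3_S2) / 3600

print(f"LEO,  300 miles up:   {period_hours(300):5.2f} hours")     # ~1.6 h, the ~90 minutes above
print(f"GEO, 22,236 miles up: {period_hours(22_236):5.2f} hours")  # ~23.9 h, about one day
```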

Footprints, beams and look angles

A satellite’s footprint is the circular area on the surface of the Earth that is visible to the satellite.  This is the potential area of coverage that the satellite can communicate with.  Areas directly under the satellite will receive a stronger signal than those on the fringes due to the increased distance and atmospheric interference.

Satellite operators want efficient use of their equipment so they use beams, or specifically focused transceivers, to cover areas within the footprint.  Imagine a satellite positioned above the equator roughly centered on the United States.  The satellite operator’s intended audience is maritime users.  They would focus the beams toward the waters of the Atlantic, Pacific, Gulf and Great Lakes; and away from inland areas knowing that there are few oceangoing freighters in Colorado.  No energy is wasted trying to fill a part of the satellite’s footprint where it will never be used.  When evaluating a satellite for use, you need to consider the beams and not the total footprint.

The more transceivers a satellite has, the more traffic it can handle simultaneously.  Satellite operators rate their satellites by the total cumulative traffic they can handle simultaneously across the entire satellite.  The other factor when evaluating service is how much traffic a specific beam can handle.  In normal daily use, it is hard to overload a single spot beam as the resources are geographically dispersed.  A catastrophic disaster will bring many of these resources to a single geographic location, all trying to use the same beam.  That is when the beam will be overloaded.

Outages and overloads on satellite services are common during major hurricanes such as Katrina, Rita, Gustav and Ike.  This is most common on shared satellite services.  Consider the satellite user density that occurs with the convergence of local, state and Federal responders; media and observers; utility companies restoring service; private companies COOPing; and NGOs, CBOs and FBOs responding as well.  Many of these rely on some form of satellite service.  Paying for dedicated satellite airtime is quite costly especially when they only use it occasionally.  Now that a disaster has occurred, they all want to use it at the same time.  In many ways, a satellite in orbit is similar to a cell tower: they are designed to maximize revenue efficiently for normal use, and extreme circumstances quickly exceed the designed capacities.

Finding a satellite in the sky is done through look angles.  These measurements are unique based on the observer’s location.  With geostationary satellites, the look angles will remain constant so long as the observer’s location remains constant.  A look angle is made up of three parts: the azimuth, elevation and polarization.  The azimuth is the compass direction (0-360°).  The elevation is how high to look up (0-90°).  Polarization is rotating the transmitter to align the radio waves with the satellite.

Here’s an exercise.  Imagine that we are using Intelsat’s G-18 satellite located at 123° West.  This is a geostationary satellite, so we know that it will be above the equator and 22,236 miles up.  123° West is near the California coast.  If we are in San Francisco, the azimuth would be 180° and the elevation 46°.  The higher the elevation, the easier it is to clear trees and other obstacles.  Move to St. Thomas, USVI; the azimuth is 258° and the elevation is 22°.  St. Thomas is an island with hilly peaks in the middle, so a satellite shot is unlikely from the NE side of the island due to the low look angle.  Change our location again to Boston; the azimuth becomes 242° and the elevation drops to 19°.  The same situation occurs in Maine, where the elevation is very close to the horizon.  We were lucky during an operation in Maine and set up headquarters at a military airbase that had a runway near the same angle we needed.
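For the curious, those look angles can be approximated with a bit of spherical trigonometry.  This is a simplified sketch (spherical Earth, northern-hemisphere observers, no atmospheric refraction), not a replacement for a proper pointing calculator, and the city coordinates are rounded.

```python
# A simplified geostationary look-angle sketch; results land close to the figures above.

import math

def look_angles(obs_lat, obs_lon, sat_lon):
    """Approximate (azimuth, elevation) in degrees for a geostationary satellite."""
    phi = math.radians(obs_lat)
    dlon = math.radians(sat_lon - obs_lon)        # satellite longitude minus observer longitude
    r_ratio = 6_378 / 42_164                      # Earth radius / geostationary orbit radius
    cos_beta = math.cos(phi) * math.cos(dlon)     # central angle to the sub-satellite point
    elevation = math.degrees(math.atan2(cos_beta - r_ratio, math.sqrt(1 - cos_beta**2)))
    alpha = math.degrees(math.atan2(math.tan(abs(dlon)), math.sin(abs(phi))))
    azimuth = 180 + alpha if dlon < 0 else 180 - alpha   # satellite west or east of observer
    return azimuth, elevation

# Intelsat G-18 at 123° West (western longitudes are negative here)
for city, lat, lon in [("San Francisco", 37.8, -122.4), ("Boston", 42.4, -71.1)]:
    az, el = look_angles(lat, lon, -123.0)
    print(f"{city}: azimuth {az:.0f}°, elevation {el:.0f}°")
```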

Bands and frequencies

Just as two-way radios have a number of different bands with different characteristics, so do satellites.  Satellites operate at higher frequencies than two-way radios.  A number of these frequencies are shared with terrestrial services.  For example, the S band (2-4 GHz) includes satellite radio (Sirius) as well as Wifi and Bluetooth signals.

Inmarsat BGANs operate on the L-band (1-2 GHz).  These are small, easy to point, and have decent global coverage.  The major issue with BGANs is the low data throughput (32 to 256 kbps) and high cost.

C-Band (3.7-8 GHz) is well known for the “direct to home” TV signals using larger dishes of 2 – 3½ meters in diameter.  The downside is that C-band has power restrictions and receives interference from microwave services.

Ku-Band (12-18 GHz) doesn’t have the power restrictions of the C-band and is used by the DirecTV system.  The main challenge of Ku-band is its nearness to the resonant frequency of water.  This means that water absorbs the radio waves, reducing the strength of the signal.  This is commonly called rain fade.  If you have DirecTV, you’ve experienced this when your signal goes out during heavy rain storms.  The absorption peaks at 22.2 GHz.  For non-technical purposes, think of it this way: the subscript u represents being under this peak, and the subscript a of the Ka-band represents being at or above this peak.

These characteristics are important considerations depending on how the satellite service will be used.  A Ku-band service will not help you communicate during the storm, but it will have the fastest speeds before and after the storm.  The C-band could work in the storm, but the size of the dish makes portability unlikely and temporary setups risky in high winds.  L-band will get through nearly all the time, but only at relatively slow speeds and high cost.

 

Additional resources

 

Radio types and bands

Radio Types

The simplest radio is the analog radio that transmits and receives on the same frequency.  All radios build from this model.  Imagine two people standing apart, each with a simple radio and antenna.  When one talks, the other listens; and vice versa.

Radio operators wanted to get more distance from their equipment, so repeaters came into service.  A repeater can rebroadcast (or repeat) a transmission from a higher tower, at a higher power and over a longer range.  The radio operator transmits on frequency “A” while the repeater listens on frequency “A”.  The repeater rebroadcasts the transmission on frequency “B”.  Other radio operators tune to frequency “B” to hear the broadcast.  Each radio will transmit on one frequency while receiving on a second.

Analog radios transmit the operator’s voice directly by modulating either the frequency or amplitude depending on the mode.  Digital radios encode the operator’s voice into a binary pattern.  The binary pattern modulates the radio signal.  This allows a digital radio to receive the binary pattern and convert it back to voice more clearly than an analog radio.  Interference to the digital radio signal is less likely to influence the quality of the signal.  Digital radio signals can also carry non-voice information.  These include radio handset identification, unit numbers and location information.

In the past, public safety departments would acquire many different frequencies to cover all their projected needs.  They might set aside frequencies for major incident coordination, training, and secondary activities.  Unfortunately, those were also seldom-used, reserved for events that occurred only a few times a year.  This led to wasted resources maintaining the frequencies and the additional equipment.  A trunked radio system separates the concept of a radio channel from a specific frequency.  One frequency is designated as the control channel while all the other frequencies are open.  All radios monitor the control frequency to get digital control signals from the central coordinating system.  A user would never listen to this frequency as it is all digital control signals for the radios to use.  A user may dial a channel called “dispatch”.  The radio checks to see which frequency is currently assigned to the dispatch channel, and then tunes to that frequency for the radio operator.  Meanwhile, the radio will monitor the control channel in the event the dispatch channel changes frequency.  If a radio is set to a channel that has no traffic, there will not be a frequency assigned.  A trunked radio system may have only 20 radio frequencies to serve 100 channels.  The ratio of frequencies to channels really depends on how often each channel is expected to be used.
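Here is a minimal sketch of the trunking idea: a small pool of frequencies serving a larger set of named channels, with a frequency assigned only while a channel has traffic.  The channel names and pool size are invented for illustration; a real controller also queues and prioritizes requests when no frequency is free.

```python
# A toy model of trunking: two voice frequencies serving three named channels.

pool = ["F1", "F2"]          # available voice frequencies
assignments = {}             # channel name -> frequency currently in use

def key_up(channel):
    """The controller grants a frequency when someone starts talking on a channel."""
    if channel not in assignments:
        assignments[channel] = pool.pop(0)   # a real system would queue or deny if empty
    return assignments[channel]

def release(channel):
    """The frequency goes back into the pool when the channel goes idle."""
    pool.append(assignments.pop(channel))

print(key_up("dispatch"))     # F1
print(key_up("tactical-2"))   # F2
release("dispatch")           # dispatch goes quiet; F1 is freed
print(key_up("training"))     # F1 again, reused for a different channel
```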

Radio Bands

The frequency spectrum is divided into many bands, or areas of use.  Frequency ranges can be assigned to any number of uses, such as: maritime, aeronautical, amateur, broadcast, fixed or mobile stations, land mobile, satellite, public safety and private/business.  The NTIA frequency allocation chart shows all of these bands color coded.  Some bands have multiple uses yet they’re always very similar, such as mobile satellite and fixed satellite.

The two way radio bands most used in emergency management are private land mobile (which includes public safety and business), amateur and to a lesser extent FRS, GMRS, and CB.

Satellite is another form of radio with a highly focused antenna.  We’ll talk more about satellite frequencies in that section.

 

Additional resources

 

Cellular communications

Cell phones are practically everywhere in the US.  83% of American adults own some kind of cell phone (Pew Internet, http://pewinternet.org/Reports/2011/Cell-Phones.aspx).  These are useful in emergency situations, and 40% of American adults have used them during an emergency.

Most cell phones are low power at 0.5 watts with an internal antenna.  However, the characteristics of the frequencies used and other advances allow a single cell site to have a maximum range of 30 to 35 miles in optimal conditions with low user load.  In urban areas, maximum range doesn’t matter as much; cell phone density (how many phones per square mile) and building penetration are what influence how many cell sites are needed.

A factoid is that a cell tower is not in the center of one cell, but instead on the edge of three cells.  Cell towers are easily identified by the long narrow vertical antennas mounted to a triangular frame so they point in three distinct directions.

Cell sites can also overlap.  A large area may be served by a macrocell.  A high density area within the macrocell may be served by a microcell.  This could be a major interstate intersection, a shopping mall, or a stadium: any place where a large number of cell phone users will gather and use their phones.  Individual buildings can install a femtocell, which is a small cellular base station that connects the cellular devices in a building to the cell network through an antenna on the roof or an internet connection.  This is especially useful where buildings are constructed with energy efficient features that block radio waves, or where important sections of the building are underground.  Energy-efficiency and heat-blocking films applied to windows reduce the radio signal passing through the windows.  It is not uncommon to have a great cellular signal outside a building that drops to barely usable inside.

During disasters and other unique events, cellular companies bring in specialized units to restore or augment existing service.  Two common units are COWs (Cell on Wheels) and COLTs (Cell on Light Truck).  Cell service was bolstered on the National Mall during the last Presidential Inauguration.  The service providers knew that people would be making calls, and taking pictures and videos to upload, during the swearing-in ceremony.  This could have overloaded the existing cellular infrastructure that is designed around normal Mall traffic.

A subtle, yet important, shift from the cellular providers is the placement of branded Wifi hot spots in urban areas.  These Wifi hot spots, available at no charge to their own customers, shift load from the cellular network to the wired broadband networks.  Phones from the major providers come preconfigured to prioritize moving data across the provider’s Wifi networks instead of the cellular network when available.  It is a way to load balance the overall system transparently to the users.

Faux G

Cellular systems can carry data as well as voice.  The International Telecommunication Union Radiocommunication Sector (ITU-R) is responsible for the cellular standards, and the ITU defines what can be called 4G.  Technically, the standard is International Mobile Telecommunications-Advanced (IMT-A), but it is commonly marketed as 4G or LTE-Advanced.  IMT-A dictates minimum data transfer speeds of 100 Mbit/s while in motion and up to 1 Gbit/s while stationary.

You may not yet have experienced these speeds even if your device is labeled as 4G, yet many systems today tout 4G.  In late 2010, the ITU-R gave in to cellular vendors’ requests and allowed them to use the 4G name if the current system was substantially better than third-generation systems and was being built toward the 4G standard.  As a result, companies went from 3G to 4G overnight because of shifts in the marketing department, despite no major changes in the technology.

It is important to take note of the possibility of 4G.  A T1 circuit is 1½ Mbit/s.  The minimum 4G standard of 100 Mbit/s is 66 times larger.  Take a look at the graphic posted on my blog, Explaining Bandwidth, at http://keith.robertory.com/?p=560 for a better understanding of this.  A cell phone running true 4G will have more bandwidth than an entire site serviced by a T1.  We are right on the verge of a major cellular service shift.  When setting up a site during a disaster, it is common to use one cellular data card (aka aircard) per computer.  With these faster speeds, we can use one cellular data card as the head of the site’s network.

My team has already successfully set up a network in a disaster with one 4G aircard providing connectivity for 30 computers.  Granted, it was rare that there were users on all 30 computers simultaneously surfing the net and streaming large files.  But that’s the point during disasters, and really even day to day.  It isn’t about providing maximum bandwidth to each user all the time.  Instead, focus on load balancing to provide enough bandwidth to meet the combined average need ~90% of the time.  It is ok for the system to be a little slower during peak demand times.  Set the users’ expectations correctly, and your team will get through it.
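As a back-of-the-envelope check on that setup, here is a quick sketch of the sharing math using the T1 and 4G figures above.  The number of computers matches the example; the fraction of users active at any one moment is an assumption.

```python
# Back-of-the-envelope sharing math; the active_fraction value is a guess.

uplink_mbps = 100            # minimum true-4G rate while in motion
t1_mbps = 1.5
computers = 30
active_fraction = 0.3        # assume roughly a third of users are moving data at any moment

print(f"4G vs T1: {uplink_mbps / t1_mbps:.1f}x the bandwidth")
active_users = computers * active_fraction
print(f"Per active user: {uplink_mbps / active_users:.1f} Mbit/s")   # ~11 Mbit/s each
```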

A cellular connection could be used to back up a wireline circuit.  Advanced routers can handle multiple uplink connections with prioritization and failover settings.  This will provide redundancy.  It is better than two wireline circuits backing each other up when the backhoe cuts through the utility lines outside the building.  Redundancy is nice.  Diverse redundancy is better.

Your users in a disaster response will be on the computer only part of the time, with the rest of their time filled with other activities.  If a disaster responder travels to a location and spends the entire time behind a computer, then the question should be asked: could that person just stay in the office or at home to complete the same work?

 

Additional resources