A few images I use when describing satellite footprints and beams.
Two images to help provide a gauge of orbital heights compared to the Earth’s size.
Just a few graphics on atmospheric opacity lifted from the internet to help explain why some waves get through the atmosphere while others don’t.
I stopped wearing a watch when I got into the habit of checking my cell phone or looking at a wall clock for the time. I've also stopped wearing accessories (bracelets, rings, earrings and so on). Really, at this point in my life, I've even stopped wearing ties. The Jawbone UP24 didn't stay on my wrist either, as it got in the way when I typed.
My big debate when the Apple Watch was announced was whether I would wear it. Dropping $400 on something I may or may not use is a tough call. I already tried Google Glass, and it didn't stick.
About two years ago, I wrote a blog post about usability. This video adds to that, including my thoughts on BYOD (Bring Your Own Device) and its impact on disaster technology. Regardless of how the future rolls out, advances in technology should not make things more complex for the users. In fact, the additional computing power needs to be used to make work easier for the users.
Think about it for just a moment. Who looks at your Facebook history? There are only two types of people who look back at what you've posted on Facebook: advertisers and stalkers. The human-to-human interaction on social media is about the now. It is not really about last month, let alone last year.
The recent changes in Instagram almost made me delete my account. I probably would have if it wasn’t for a lesson I learned with FourSquare a few weeks ago. Deleting and erasing a social networking account is usually a fairly permanent decision. All your history, links, scores and whatever are gone. That can be a good thing. Or not.
I was an early adopter of Netflix and watched/rated quite a few movies (seriously, hundreds). The system was really good at finding new movies to recommend. When I cut back my expenses, I deleted my Netflix account. Fast forward a number of years to when I had children. Now Netflix was great because I could stream shows on my phone for the kids in a restaurant so they wouldn't bug other patrons. When I signed up for Netflix the second time, all the ratings from the first time were still there. Now I'm getting recommendations for movies like Dora the Destroyer.
FourSquare was a nifty little service that turned location check-ins into a game. I did this for a while and amassed a large number of badges. Then I considered what I was getting out of FourSquare. All this data was pushed in, but I didn't get much out of it. Naturally, I said "Badges? We don't need no stinkin' badges." I deleted my FourSquare account. A few weeks ago, during the response to Hurricane Sandy, I was dropped into a location-reporting discussion. I hopped on a few social location check-in services, including FourSquare. FourSquare hooked me again. Now I'm missing all the old badges and connections I had on FourSquare.
Instagram changed their terms of service. A huge shockwave spread across social networks. But instead of deleting my Instagram account as a knee-jerk reaction, I stopped. Would I ever come back to Instagram? What if they adjusted their terms of service again? What is the impact now that Facebook owns them? Could someone take my screen name and pretend to be me?
I decided to keep my Instagram account but in an unused state. After using an app to download all my images, I’ve deleted all my photos off the account except one or two. For security reasons, I’ll “park” the account with an obscure password kept in my password vault.
Where to put all the images? I was debating between G+ and Flickr. I opted to go with Flickr primarily because it seemed less tied into other social accounts. It also had tools to allow bulk management of the images. The advantage to G+ would be managing the images on my phone without another app installed. We’ll see how it goes.
Risk managers study causes of tragedies to identify control measures in order to prevent future tragedies. “There are no new ways to get in trouble, but many new ways to stay out of trouble.” — Gordon Graham
Nearly every After Action Report (AAR) that I've read has cited a breakdown in communications. The right information didn't get to the right place at the right time. After hearing Gordon Graham at the IAEM convention, I recognized that the failures stretch back beyond just communications. Gordon sets forth ten families of risk that can all be figured out ahead of an incident and used to prevent or mitigate it. These categories of risk make sense to me and seemed to resonate with the rest of the audience too.
Here are a few common areas of breakdowns:
Standards: Did building codes exist? Were they the right codes? Were they enforced? Were system backups and COOP testing done according to the standard?
Predict: Did the models provide accurate information? Were public warnings based on these models?
External influences: How were the media, the public and social media managed? Did they add positively or negatively to the response?
Command and politics: Did the government structure help or hurt? Was the Incident Command System used? Was situational awareness maintained? Was information shared effectively?
Tactical: How was information shared to and from the first responders and front line workers? Did these workers suffer from information overload?
“Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.” — George Santayana
I added that in since few people actually know the source or quote it accurately. Experience is a great teacher. Most importantly, remembering the past helps shape the future in the right direction.
Below is a list of significant disasters that altered the direction of Emergency Management. Think about what should be remembered from each of these incidents, and then how these events would have unfolded with today's technology, including the internet and social media.
Seveso, Italy (1976). An industrial accident at a small chemical manufacturing plant. It resulted in the highest known exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in a residential population. The local community was unaware of the risk. It was a week before the public was notified of the release and another week before evacuations began.
Bhopal Methyl Isocyanate Release (1984). An industrial accident that released 40 tonnes of MIC. There was no public warning. The exact mixture of the gas was not shared, so first responders did not know how to treat the public.
Chernobyl Nuclear Disaster (1986). An explosion at the plant led to radioactive contamination of the surrounding area. Large parts of Europe and even North America were contaminated. The Communist regime hid the initial information and did not share it until another country detected the fallout.
Hurricane Hugo (1989). At the time, this was the costliest hurricane disaster. Insufficient damage assessment led to wrong resource allocation. Survivors in rural communities were not located or reached for many days. Much of the response depended on manual systems.
Loma Prieta (1989). A M6.9 earthquake that injured around 3,800 people in 15 seconds. Extensive damage also occurred in San Francisco's Marina District, where many expensive homes built on filled ground collapsed and/or caught fire. Besides that, major roads and bridges were damaged. The initial response focused on areas covered by the media. Responding agencies had incompatible software and couldn't share information.
Exxon Valdez (1989). The American oil tanker Exxon Valdez struck Bligh Reef, causing a major oil spill. The tanker did not turn rapidly enough at one point, causing the collision with the reef. The spill released between 41,000 and 132,000 cubic meters of oil, polluting 1,900 km of coastline. Mobilization of the response was slow due to "paper resources" that never existed in reality. The computer systems in various agencies were incompatible, and there was no baseline data for comparison.
Hurricane Andrew (1992). Andrew was the first named storm and only major hurricane of the otherwise inactive 1992 Atlantic hurricane season. It was the last and third most powerful of the three Category 5 hurricanes to make landfall in the United States during the 20th century, after the Labor Day Hurricane of 1935 and Hurricane Camille in 1969. The initial response was slowed by poor damage assessment and incompatible systems.
Northridge Earthquake (1994). This M6.7 earthquake lasted 20 seconds. Major damage occurred to 11 area hospitals. The damage left FEMA unable to assess needs prior to distributing assistance. Seventy-two deaths were attributed to the earthquake, with over 9,000 injured. In addition, the earthquake caused an estimated $20 billion in damage, making it one of the costliest natural disasters in U.S. history.
Izmit, Turkey Earthquake (1999). This M7.6 earthquake struck in the overnight hours and lasted 37 seconds. It killed around 17,000 people and left half a million homeless. The mayor did not receive a damage report until 34 hours after the earthquake. Some 70 percent of buildings in Turkey are unlicensed, meaning they were never approved under the building code. In this case, the governmental unit that established the codes was separate from the unit that enforced them, and the politics between the two caused the codes to go unenforced.
Sept 11 attacks (2001). The numerous intelligence failures and response challenges during these attacks are well documented.
Florida hurricanes (2004). The season was notable as one of the deadliest and costliest Atlantic hurricane seasons on record, with at least 3,132 deaths and roughly $50 billion (2004 US dollars) in damage. The most notable storms of the season were the five named storms that made landfall in the U.S. state of Florida, three of them with at least 115 mph (185 km/h) sustained winds: Tropical Storm Bonnie and Hurricanes Charley, Frances, Ivan, and Jeanne. This is the only time in recorded history that four hurricanes have affected Florida in a single season.
Indian Ocean Tsunami (2004). With a magnitude of between 9.1 and 9.3, it is the second largest earthquake ever recorded on a seismograph. This earthquake had the longest duration of faulting ever observed, between 8.3 and 10 minutes. It caused the entire planet to vibrate as much as 1 cm (0.4 inches) and triggered other earthquakes as far away as Alaska. There were no warning systems in the Indian Ocean, a gap compounded by the inability to communicate with the population at risk.
Hurricanes Katrina and Rita (2005). At least 1,836 people lost their lives in Hurricane Katrina and the subsequent floods, making it the deadliest U.S. hurricane since the 1928 Okeechobee hurricane. There were many evacuation failures due to inadequate consideration of the demographics. Massive communication failures occurred, with no alternatives considered.
Standards are a common language for discussing and sharing data; they can be formally approved or ad hoc. A standard is defined by the people who use it. That is key. In the end, it doesn't matter whether the standard is approved by a governing body or not. What matters is that the people who use it agree to it. When used properly, standards save time and money and ensure quality and completeness.
In a meeting about missing-persons data standards, it was stated that if the Red Cross, Facebook and Google agreed on a standard to share data, then everyone else would follow. Not because the three organizations are a governing committee, but because they would be the three largest players in the space.
Data standards make it possible for you to share data within and between organizations. They make it possible to compare different sets of data for improved analysis. They form the basis of data infrastructure (framework for collecting, storing and retrieving data).
Here are a few examples of data standards:
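One real-world example is the Common Alerting Protocol (CAP), an OASIS XML standard for public warnings. As an illustrative sketch only (the identifier, sender, and event values below are made up), a minimal CAP-style message could be assembled like this:

```python
import xml.etree.ElementTree as ET

# Build a minimal CAP-style alert. Element names follow the CAP 1.2
# standard; the values are hypothetical, for demonstration only.
def build_alert(identifier, sender, event, headline):
    alert = ET.Element("alert", xmlns="urn:oasis:names:tc:emergency:cap:1.2")
    ET.SubElement(alert, "identifier").text = identifier
    ET.SubElement(alert, "sender").text = sender
    ET.SubElement(alert, "msgType").text = "Alert"
    info = ET.SubElement(alert, "info")
    ET.SubElement(info, "event").text = event
    ET.SubElement(info, "headline").text = headline
    return ET.tostring(alert, encoding="unicode")

xml_text = build_alert("EX-001", "em@example.gov", "Flood",
                       "River flooding expected")
print(xml_text)
```

Because every producer and consumer agrees on the same element names, any agency can parse a message like this without a custom import for each partner.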
Data sets are packages of information that you can use before, during or after a disaster. It is important in planning to determine who has what data, if you can get access to it, and if it is compatible with your systems.
The following are common data sets that provide baseline data, or information that is available prior to any events occurring. These are useful for planning purposes and exercises.
Other data sets are only available during a response. Some data sets are specific to the incident. These are usually dynamic and depend heavily on solid damage assessment and data exchange agreements.
Keep in mind that not everyone uses data for the same purpose. Many organizations do a damage assessment, and all of this data is called "damage assessment," but it is not useful across organizations due to differing needs and standards. Some examples include:
Most of the data sets include a geographic location, such as an address, road, or other positioning information. This helps transform the data set from just existing in a database to being analyzed through a GIS tool. More on that in the GIS section.
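As a small illustration of why those location fields matter, even before reaching a full GIS tool you can answer basic spatial questions in code. This sketch uses the standard haversine formula; the coordinates are approximate and purely for demonstration:

```python
from math import radians, sin, cos, asin, sqrt

# Great-circle distance in kilometers between two lat/lon points,
# using the haversine formula and a mean Earth radius of 6371 km.
def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# e.g. rough distance from New Orleans to Baton Rouge
# (hypothetical planning question: how far is the backup shelter?)
d = haversine_km(29.95, -90.07, 30.45, -91.19)
print(round(d), "km")
```

Once every record in a data set carries coordinates, the same arithmetic scales up to "which shelters are within 20 km of the damaged area?" and similar questions.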
A database is a location that stores data. There can be simple databases, and very complex databases. Remember that we’re looking at this from the perspective of an emergency manager. You should be familiar with the terms and some other basic information, but leave the complex database creation to the SMEs.
Here are a few terms that are used in database discussions:
Quick Tip: never buy or accept a database from someone without a properly documented data dictionary and schema. Having that will save you hassle in the future when you need support or to change it.
A database can either store all the data in a single table or spread it across multiple tables. Large amounts of data are best handled in a multi-table database, but this also creates problems when trying to share the data, since all the tables must be linked back together. This shouldn't be a problem in a properly designed and documented database.
A single-table database is like keeping data in Microsoft Excel: it is simple to create all the rows and columns on one sheet. Lots of data points, and duplicate data points, may be better organized in a multi-table database. Key fields are used to link records across many tables. Tables can have a one-to-one (1:1) relationship. For example, if one table contained your personal information and another table contained your transcript, that would be a one-to-one relationship. Tables can also have a one-to-many (1:many) relationship. A table of course descriptions may link by class name to everyone's individual grades: one course taken by many people. The course description could be updated once without needing to touch the record of every person who took it. If you want to dive deeper, start at http://en.wikipedia.org/wiki/Database_model.
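The course example above can be sketched with SQLite, which ships with Python. The table and field names here are hypothetical; the point is the key field (course_id) linking one course row to many grade rows:

```python
import sqlite3

# One-to-many relationship: one course description, many grade records,
# linked by the key field course_id.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE courses (course_id TEXT PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE grades (person TEXT, course_id TEXT, grade TEXT)")
cur.execute("INSERT INTO courses VALUES ('EM101', 'Intro to Emergency Mgmt')")
cur.executemany("INSERT INTO grades VALUES (?, ?, ?)",
                [("Alice", "EM101", "A"), ("Bob", "EM101", "B")])

# Update the course title once; every linked grade record sees the change.
cur.execute("UPDATE courses SET title = 'Intro to Emergency Management' "
            "WHERE course_id = 'EM101'")

rows = cur.execute(
    "SELECT g.person, c.title, g.grade "
    "FROM grades g JOIN courses c ON g.course_id = c.course_id").fetchall()
print(rows)
```

In a single-table layout, that title would have been copied into every student's row, and the update would have had to touch each one.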
Metadata is a way of providing information about data, or anything else really. Metadata makes data retrieval and understanding much easier. It also makes data gathering more complicated and difficult, since more work is required on the data-gathering side. In my experience, everyone agrees that good metadata is good.
Like everything else, garbage in = garbage out.
http://www.clientdatastandard.org/dcds/schema/1.0 is an example of capturing data and metadata. Each field is described in terms of its use and contents.
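A field-level description like that can itself be put to work: once the metadata declares what each field should contain, incoming records can be checked automatically. A minimal sketch (the field names and rules here are hypothetical, not from any particular schema):

```python
# A tiny data dictionary: metadata describing each field of a record.
metadata = {
    "shelter_name": {"type": str, "description": "Common name of the shelter"},
    "capacity":     {"type": int, "description": "Maximum overnight occupants"},
}

def validate(record, meta):
    """Return the fields whose values don't match the declared type."""
    return [field for field, rules in meta.items()
            if not isinstance(record.get(field), rules["type"])]

good = {"shelter_name": "Central High", "capacity": 250}
bad = {"shelter_name": "Central High", "capacity": "lots"}
print(validate(good, metadata))
print(validate(bad, metadata))
```

This is the practical payoff of the extra work on the gathering side: bad records are caught at intake instead of during a response.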
Another way to look at metadata: the nutrition facts label is metadata for the banana, and the banana is the context for the information listed in the nutrition facts.