Why free stuff isn’t free

I have heard too many times from people in disaster response: “If we can just get the product donated, then we can do…”  If a person or organization is willing to do a program only if it is all provided for free, they are simply stating that the program is not important enough to budget for.  That attitude minimizes the value of the program and makes me wonder if it was important enough in the first place.  They miss the point of in-kind donations.  An in-kind donation is when someone gives you something at no financial cost.  But don’t think it is free.  Free stuff is never free.  Everything in a supply and demand economy has a cost.  There are financial, time, and resource costs associated with everything.

Let’s look at a fictitious non-profit group, Acme.  Acme has a mission to bring internet access to disaster survivors.  One of the tools they use is a widget, and hundreds of widgets are used each year.  Each widget costs $100 and is produced by Ajax.  There are two ways to get widgets: Acme can buy the widgets from Ajax using donated money, or Acme can ask Ajax to donate the equipment.  Procuring a widget meets Acme’s need regardless of how the widget is procured.

Acme’s fundraisers are tasked with raising the funds necessary to cover the organization’s annual budget.  As money is brought into the organization, it is applied to the annual budget.  The money goes to offset the general (or core) expenses including facilities, salaries, program maintenance, daily operations … and the purchase of Ajax widgets.  In general, donors like to see where their money goes so they know they are making a difference.  That is what makes fundraising such a hard task: convincing the donor to give money and trust Acme to do the right thing without being able to show them a specific thing that their money did.  There is another concept called a “directed donation,” where funds are raised for a specific goal.  Directed donations are most commonly seen in capital improvement projects.  I’m leaving directed donations out of this discussion.

Donors are not restricted to providing cash.  They can provide goods and services; this is an in-kind donation (IKD).  In-kind donations are unique because they must match Acme’s needs with what the donor has to offer.  (Receiving product that isn’t needed creates waste in the costs to ship, receive, store and dispose of it.)  When Acme receives an in-kind donation, it offsets expenses that would otherwise be spent to get the products.  For our example of widgets, this is declared as income on Acme’s taxes and as a donation on Ajax’s taxes.  When Ajax decides to donate the widgets to Acme, Ajax is providing products of a certain value in lieu of a cash donation of the same value.

The end result of any of these actions is the same: Acme has widgets.  It doesn’t really matter whether the fundraisers directly courted Ajax for the widgets or had a third-party donor provide cash to buy the widgets.  The result is budget-neutral: the right amount of cash or products came in to match the same amount of expenses for the product procurement.

Here’s why free stuff isn’t free.  At the start of the year, Acme set forth a financial budget based on expected donations (IKD or cash) and expenses.  The cash value of the widgets that Ajax donated gets applied to the budget and reduces the cash that needs to be raised that year to buy widgets.  Ajax’s donation doesn’t free up Acme’s budgeted amounts to be applied elsewhere; the donation met the business need of procuring widgets per the budget.  The budget is just a financial tool to manage incoming donations and outgoing expenses, regardless of whether the donations show up as cash or IKD.  A budget is very different from an account balance of real money in the bank.  The hope is that the budget, actual expenses and cash in the bank match up over the fiscal period.
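
To make the budget-neutral point concrete, here is a small sketch (my own illustration, not Acme’s actual books) where the widget line item is 300 widgets at $100 each.  A cash gift toward widgets and an in-kind donation of widgets reduce the remaining amount to raise in exactly the same way.

    # Hypothetical numbers for illustration only.
    widget_budget = 300 * 100            # 300 widgets at $100 each = $30,000

    cash_donation_for_widgets = 10_000   # a third-party donor gives cash
    ikd_widgets_donated = 150            # Ajax donates 150 widgets in kind
    ikd_value = ikd_widgets_donated * 100

    still_to_raise = widget_budget - cash_donation_for_widgets - ikd_value
    print(f"Left to raise for widgets: ${still_to_raise:,}")   # $5,000

    # Either form of donation offsets the same line item; neither frees up
    # budgeted money to be spent elsewhere.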

In-kind donations often come with additional strings that are not part of a cash procurement.  The donations are usually large enough that the donor wants publicity, which helps shape the public impression of the donor.  Here, Ajax wants to be able to publicize that it donated to Acme, which helps create the public impression that Ajax is a good corporate citizen.  Acme and Ajax producing a joint press release to promote the relationship doesn’t take too much time.  But imagine if Ajax’s expectation is for Acme to take a photo and publish a story every time a widget is used.  The cost in Acme’s resources to meet that expectation could exceed the cost of just buying the widgets with cash.

So the next time you hear that a project will only be done if product is given for free, ask whether the product really needs to be free or just needs to be budget-neutral for the organization.

Historic Information Breakdowns

Risk managers study causes of tragedies to identify control measures in order to prevent future tragedies.  “There are no new ways to get in trouble, but many new ways to stay out of trouble.” — Gordon Graham

Nearly every After Action Report (AAR) that I’ve read has cited a breakdown in communications.  The right information didn’t get to the right place at the right time.  After hearing Gordon Graham at the IAEM convention, I recognized that the failures stretch back beyond just communications.  Gordon sets forth 10 families of risk that can all be figured out ahead of an incident and used to prevent or mitigate the incident.  These categories of risk make sense to me and seemed to resonate with the rest of the audience too.

Here are a few common areas of breakdowns:

Standards: Did building codes exist?  Were they the right codes?  Were they enforced?  Were system backups and COOP testing done according to the standard?

Predict: Did the models provide accurate information?  Were public warnings based on these models?

External influences: How were the media, the public and social media managed?  Did they add positively or negatively to the response?

Command and politics: Did the government structure help or hurt?  Was the Incident Command System used?  Was situational awareness complete?  Was information shared effectively?

Tactical: How was information shared to and from the first responders and front line workers?  Did these workers suffer from information overload?

History

“Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.”  — George Santayana

I include the full quote since few people actually know the source or quote it accurately.  Experience is a great teacher.  Most importantly, remembering the past helps shape the future in the right direction.

Below is a list of significant disasters that altered the direction of Emergency Management.  Think about what should be remembered from each of these incidents, and then how these events would have unfolded with today’s technology – including the internet and social media.

Seveso, Italy (1976).  An industrial accident at a small chemical manufacturing plant.  It resulted in the highest known exposure of a residential population to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD).  The local community was unaware of the risk.  It was a week before public notification of the release and another week before evacuations.

Bhopal Methyl Isocyanate Release (1984).  An industrial accident that released 40 tonnes of MIC.  There was no public warning.  The exact mixture of the gas was not shared, so the first responders did not know how to treat the public.

Chernobyl Nuclear Disaster (1986).  An explosion at the plant and subsequent radioactive contamination of the surrounding geographic area.  Large parts of Europe and even North America were contaminated.  The Communist regime hid the initial information and did not share it until another country detected the radiation.

Hurricane Hugo (1989).  At the time, this was the costliest hurricane disaster.  An insufficient damage assessment led to incorrect resource allocation.  Survivors in rural communities were not located and responded to for many days.  Much of the response depended on manual systems.

Loma Prieta (1989).  An M6.9 earthquake that injured around 3,800 people in 15 seconds.  Extensive damage occurred in San Francisco’s Marina District, where many expensive homes built on filled ground collapsed and/or caught fire.  Major roads and bridges were also damaged.  The initial response focused on areas covered by the media.  Responding agencies had incompatible software and couldn’t share information.

Exxon Valdez (1989).  The American oil tanker Exxon Valdez struck Bligh Reef, causing a major oil spill.  The tanker did not turn rapidly enough at one point, causing the collision with the reef.  The spill was between 41,000 and 132,000 cubic meters of oil, polluting 1,900 km of coastline.  Mobilization of the response was slow due to “paper resources” that never existed in reality.  The computer systems in various agencies were incompatible and there was no baseline data for comparison.

Hurricane Andrew (1992).  Andrew was the first named storm and only major hurricane of the otherwise inactive 1992 Atlantic hurricane season.  It was the most recent and third most powerful of the three Category 5 hurricanes to make landfall in the United States during the 20th century, after the Labor Day Hurricane of 1935 and Hurricane Camille in 1969.  The initial response was slowed by poor damage assessment and incompatible systems.

Northridge Earthquake (1994).  This M6.7 earthquake lasted 20 seconds.  Major damage occurred to 11 area hospitals.  The extent of the damage left FEMA unable to assess it before distributing assistance.  Seventy-two deaths were attributed to the earthquake, with over 9,000 injured.  In addition, the earthquake caused an estimated $20 billion in damage, making it one of the costliest natural disasters in U.S. history.

Izmit, Turkey Earthquake (1999).  This M7.6 earthquake struck in the overnight hours and lasted 37 seconds.  It killed around 17,000 people and left half a million homeless.  The mayor did not receive a damage report until 34 hours after the earthquake.  Some 70 percent of buildings in Turkey are unlicensed, meaning they never received building code approval.  In this case, the governmental unit that established the codes was separate from the unit that enforced them, and the politics between the two meant the codes were not enforced.

Sept 11 attacks (2001).  The numerous intelligence failures and response challenges during these attacks are well documented.

Florida hurricanes (2004).  The season was notable as one of the deadliest and costliest Atlantic hurricane seasons on record, with at least 3,132 deaths and roughly $50 billion (2004 US dollars) in damage.  The most notable storms of the season were the five named storms that made landfall in the U.S. state of Florida, three of them with at least 115 mph (185 km/h) sustained winds: Tropical Storm Bonnie and Hurricanes Charley, Frances, Ivan, and Jeanne.  This is the only time in recorded history that four hurricanes affected Florida in a single season.

Indian Ocean Tsunami (2004).  With a magnitude between 9.1 and 9.3, it is the second largest earthquake ever recorded on a seismograph.  The earthquake had the longest duration of faulting ever observed, between 8.3 and 10 minutes.  It caused the entire planet to vibrate as much as 1 cm (0.4 inches) and triggered other earthquakes as far away as Alaska.  There was no warning system in the Indian Ocean, compounded by an inability to communicate with the populations at risk.

Hurricanes Katrina and Rita (2005).  At least 1,836 people lost their lives in the hurricane itself and in the subsequent floods, making Katrina the deadliest U.S. hurricane since the 1928 Okeechobee hurricane.  There were many evacuation failures due to inadequate consideration of the demographics.  Massive communication failures occurred, with no alternatives considered.

 


Cyber-security and disasters

More and more systems are being connected to share information, and IP networks provide a very cost-effective solution.  One physical network can be used to connect many different devices.  The water company can use a computer interface to control the water pumps and valves at treatment plants and throughout the distribution system.  The natural gas and electric providers can do the same.  Hospitals connect medical devices throughout the facility to central monitoring stations so a few people in one room can watch all the ICU patients.  Fire departments, law enforcement and EMS can use a wireless network to communicate, dispatch units, provide navigation, and track vehicle telematics to manage maintenance cycles.

Not all networks need to lead to the internet, but fully isolated networks are rare and need to be specifically planned that way when the system is being designed.  A physically separate system does provide the best security if all the data is kept internal to that network.  Remember that internal-only networks are still subject to security issues from internal threats.

Any network or device that does have an internet connection is subject to external attacks through that connection.  A malicious hacker could break into the water treatment system and change the valves to contaminate drinking water.  They could open all the gates on a dam, flooding downstream communities.  They could reroute electrical paths to overload circuits or shut down other areas.  They could change the programming so dispatchers send the farthest unit instead of the nearest, or create false dispatch instructions.

Cyber attacks can disable systems, but they can also create real-world disasters.  First responders are trained to consider secondary devices during intentionally started emergencies.  What if that secondary device is a cyber attack, or a cyber attack precedes a real event?  During the September 2001 attacks in New York City, a secondary effect of the plane hitting the tower was the crippling of the first responders’ radio system.  Imagine if a cyber attack had been coordinated with the plane’s impact.  The attackers could turn all traffic lights to green, causing accidents at nearly every intersection.  This would snarl traffic and prevent the first responders from getting to the towers.

A side note on the use of the term hacker: a hacker is anyone who hacks together a technical or electronics solution in an uncommon way.  I explain it as “MacGyver’ing” a solution.  There is no positive or negative connotation in the term used that way.  Hacker also describes a person who breaks into computer systems by bypassing security.  A more accurate description for that person is a cracker, like a safe cracker.  This type of hacker is divided into criminals (black hats) and ethical hackers (white hats).  Ethical hackers are people who test computer security by attempting to break into systems.

By now, you’re probably aware of the Anonymous hacker group.  Since 2008 they have collectively become more organized and increasingly active in actions that push for internet freedom.  Often they’re called “hacktivists,” meaning they hack to protest.  There are many more malicious hackers out there with different agendas: status, economic, political, religious … any reason people might disagree could be a reason for a hacker.

Somewhere on the internet is a team of highly trained cyber ninjas constantly probing devices for openings.  They use a combination of attack forms, including social engineering (phishing) attacks.  Automated tools probe IP addresses in a methodically efficient manner.  Brute force methods test common passwords across many accounts and logins.  Worms and Trojans are sent out to gather information and get behind defenses.  Any weaknesses found will be exploited.

Pew Internet reports that 79% of adults have access to the internet and two-thirds of American adults have broadband internet in their home.  The lower cost of computers and internet access has dramatically increased the number of Americans online.  The stand-alone computer connected to the internet forces the home user into the roles of system administrator, software analyst, hardware engineer, and information security specialist.  They must be prepared to stop the dynamic onslaught of cyber ninjas, yet they are armed only with the tools pre-loaded on the computer or off-the-shelf security software.

Organizations are in both a better and a worse position.  An enterprise network can afford full-time professionals to keep the software updated, ensure the security measures meet emerging threats, and share information with peers.  But enterprise networks are also a larger target, especially for hackers looking to build their online reputation.

On Disasters

During a disaster, there will be many hastily formed networks.  The nature of rushed work increases the number of errors and loopholes in technical systems.

During the Haiti Earthquake response, malware and viruses were common across the shared NGO networks.  The lack of security software on the laptops created major problems.  Some organizations purchased laptops and brought them on-scene without any preloaded security software.  Other organizations hadn’t used their response computers in over a year, so the operating systems were missing recent security patches and the anti-virus software was out of date.  USB sticks moved data from computer to computer, bypassing any network-level protections.  The spread of malware and viruses across the network caused problems and delays.

There are a number of key factors in designing a technology system for disaster response that differ from traditional IT installations.  One of the most important considerations is a way for the system to be installed consistently by people with minimal technical skills.  Pre-configuration helps ensure that the equipment is used efficiently and in the most secure manner.
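
As one hedged illustration of what pre-configuration can mean in practice, a deployment kit could include a simple script that flags laptops whose protections have gone stale before they ship to the field.  The check names and the 30-day threshold below are my assumptions for the sketch, not a standard.

    # Sketch of a pre-deployment readiness check (illustrative thresholds only).
    from datetime import date

    MAX_AGE_DAYS = 30   # assumed freshness requirement for patches and definitions

    def is_stale(last_updated: date, today: date) -> bool:
        """True if a protection component hasn't been updated recently enough."""
        return (today - last_updated).days > MAX_AGE_DAYS

    # Hypothetical inventory record for one response laptop.
    laptop = {
        "os_patches": date(2011, 1, 5),
        "antivirus_definitions": date(2010, 6, 1),
    }

    today = date.today()
    for component, last_updated in laptop.items():
        status = "UPDATE BEFORE SHIPPING" if is_stale(last_updated, today) else "ok"
        print(f"{component}: {status}")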

 


Understanding networking

As a manager, it is not your responsibility to know how to configure a router and make things work on the network.  The best way to think about networking is the “black box” theory: you really don’t care how the individual parts work; you need to know what they are capable of.  Believe it or not, networking is really simple.

In its simplest form, a network is a few computers connected by wires to a network device that shares information among them.  A network is similar to a big post office that is sharing information packets electronically.  Each computer has a unique name that helps the network devices know which information goes to which computer.

The internet is an IP-based network.  IP stands for Internet Protocol.  Easy, huh?  The Transmission Control Protocol is the way computers break up large chunks of data to send across the internet.  Stick the two together and you get the commonly referenced TCP/IP.  There are other forms of message handling—such as the User Datagram Protocol (UDP)—to move information across the internet.  You don’t need to know how these work or move information.  Just know that IP is the backbone of the internet.
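
For readers who want to see the difference rather than just hear the names, here is a minimal sketch (my example, with a made-up local address and port, and assuming something is listening for the TCP connection) of an application handing the same message to TCP and to UDP:

    # TCP vs UDP in a few lines; host and port are hypothetical test values.
    import socket

    HOST, PORT = "127.0.0.1", 9000

    # TCP: establish a connection, then stream bytes; the protocol stack
    # handles ordering and retransmission.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
        tcp_sock.connect((HOST, PORT))          # fails if nothing is listening
        tcp_sock.sendall(b"status report via TCP")

    # UDP: no connection; each datagram is an independent IP packet that
    # may arrive out of order or not at all.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
        udp_sock.sendto(b"status report via UDP", (HOST, PORT))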

Any data that you can turn into an IP packet can travel over an IP network; that data can travel across local networks and the internet.  When a phone converts voice into IP packets, it is called a Voice over IP (VoIP) phone, meaning it can send your phone call over the same network as email, web browsing, and everything else.

Blah-blah over IP is nothing fancy.  That means that someone has designed a network device (or interface) that translates information from a source to an IP-packet, and back.  You’ll hear about Radio-over-IP, Video-over-IP, Computer-over-IP, and just about everything else.

Data standards are really important in this area.  When each vendor comes up with its own way of doing ____-over-IP, the vendors are unlikely to be compatible unless they use a standard.  While there are organizations that publish standards, a standard’s true usefulness is proven by whether and how people use it.

The International Telecommunications Union (ITU) has a series of standards for videoconferencing, including ITU H.320 and H.264.  When Cisco Telepresence was released, it was designed to bring a meeting-room presence to teleconferencing.  Part of the design was full-size displays that blended both conference rooms.  It was not compatible with any other video conferencing systems.  The Cisco sales rep explained to me that their product would look poor if it was used with lower-quality non-telepresence systems, so the decision was made to use a non-standard data format.  The problem with this is that it would require companies to invest in two separate video conferencing systems.  More recent advances have allowed some mixed use of video conferencing systems.

Now that we’ve talked a bit about what can go across the network, let us turn back to the network itself.

There are many different network layouts.  A quick internet search on “network topology” will show the different forms.  Each has advantages and disadvantages.  For this course, the focus will be on a tree topology: an internet connection enters a site through one point, and switches and routers split that connection out to all the individual computers.

A demarc (short for demarcation) point is where a utility enters a building.  It is also the point that separates ownership between the utility company and the building owner.  The electrical demarc in a residential home is commonly the electric meter.  The power company will handle everything up to and including the meter.  The home owner handles everything from the meter to the power outlets.

A telephone demarc is located at the telephone network interface.  The network demarc is located at the network interface device (aka smartjack).  These can be located anywhere in a building, but I’ve found that most wireline utilities come in together.  These can be copper wire, fiber optic or some other type of cable.

The demarc is the head of the network for that site.  In a tree topology, this is where the site’s primary router would be located.  A router is a network device that moves data packets between two different networks.  Here, the router is directing the packets, only passing those that need to travel on the other network.  It is ideal for separating two networks to reduce congestion by keeping local data within the local network.  A primary router, sometimes called a site’s core router, is the one that controls the other routers and is mission-critical for the site to be connected.
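
A tiny sketch of that routing decision (my illustration; the subnet and addresses are hypothetical) shows how a router can tell local traffic from traffic that needs to go out the uplink:

    # Keep local data local; forward everything else toward the internet.
    from ipaddress import ip_address, ip_network

    SITE_LAN = ip_network("192.168.10.0/24")   # assumed local network at the site

    def route(destination: str) -> str:
        """Return where a packet for this destination should go."""
        if ip_address(destination) in SITE_LAN:
            return "stay on the local network"
        return "forward out the uplink toward the internet"

    print(route("192.168.10.42"))   # a local laptop or printer -> stays inside
    print(route("8.8.8.8"))         # an outside address -> goes out the uplink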

Routers are the major component that gives a network flexibility.  Professional (non-consumer) grade routers allow for the installation of modules, both physical and logical.  These modules connect the router to different devices.  They commonly allow a router to connect to a wireline circuit (T1, T3, etc.), a wireless circuit (wifi, cellular), or different cabling (twisted pair, coax).  These modules can also connect a router to a phone system, radio system, video system and so on.

Other network devices used to spread network segments out from the router include switches and hubs.  Switches can have different interfaces and be used to connect different network types.  This is handy in older buildings where you may need to use an existing network and overlay it with a different type of cabling or connections.  Hubs are essentially non-intelligent splitters that just provide more ports.

The Warriors of the Net video provides an entertaining explanation of the different components.  Again, from a manager’s perspective, you do not need to get very technical with the network components.

Cellular communications

Cell phones are practically everywhere in the US.  83% of American adults own some kind of cell phone (Pew Internet, http://pewinternet.org/Reports/2011/Cell-Phones.aspx).  These are useful in emergency situations, and 40% of American adults have used a cell phone during an emergency.

Most cell phones are low power at 0.5 watts with an internal antenna.  However, the characteristics of the frequencies used and other advances allow a single cell site to have a maximum range of 30 to 35 miles in optimal conditions with low user load.  In urban areas, maximum range doesn’t matter much; cell phone density (how many phones per square mile) and building penetration are what determine how many cell sites are needed.

A useful factoid: a cell tower is not in the center of one cell, but instead on the shared edge of three cells.  Cell towers are easily identified by the long narrow vertical antennas mounted to a triangular frame so they point in three distinct directions.

Cell sites can also overlap.  A large area may be served by a macrocell.  A high-density area within the macrocell may be served by a microcell.  This could be a major interstate intersection, a shopping mall, or a stadium — any place where a large number of cell phone users will gather and use their phones.  Individual buildings can install a femtocell, a small cellular base station that connects the cellular devices in a building to the cell network through an antenna on the roof or an internet connection.  This is especially useful where buildings are constructed with energy-efficient features that block radio waves, or where important sections of the building are underground.  Energy-efficiency and heat-blocking films applied to windows reduce the radio signal passing through them.  It is not uncommon to have a great cellular signal outside a building that drops to barely usable inside.

During disasters and other unique events, cellular companies bring in specialized units to restore or augment existing service.  Two common units are COWs (Cell on Wheels) and COLTs (Cell on Light Truck).  Cell service was bolstered on the National Mall during the last Presidential Inauguration.  The service providers knew that people would be making calls, and taking pictures and videos to upload, during the swearing-in ceremony.  This could have overloaded the existing cellular infrastructure, which is designed around normal Mall traffic.

A subtle, yet important, shift from the cellular providers is the placement of branded Wifi hot spots in urban areas.  These Wifi hot spots, available at no charge to the provider’s own customers, shift load from the cellular network to the wired broadband networks.  Phones from the major providers come preconfigured to prioritize moving data across the provider’s Wifi network instead of the cellular network when available.  It is a way to load balance the overall system transparently to the users.

Faux G

Cellular systems can carry data as well as voice.  The International Telecommunication Union Radiocommunication Sector (ITU-R) is responsible for the cellular standards, and the ITU defines what can be called 4G.  Technically, the standard is International Mobile Telecommunications-Advanced (IMT-A), but it is commonly marketed as 4G or LTE-Advanced.  IMT-A dictates minimum data transfer speeds of 100 Mbit/s while in motion and up to 1 Gbit/s while stationary.

You may not have experienced these speeds even if your device is labeled 4G, yet many systems today tout 4G.  In late 2010, the ITU-R gave in to cellular vendors’ requests and allowed them to use the 4G name if the current system was substantially better than third-generation systems and was being built toward the 4G standard.  As a result, companies went from 3G to 4G overnight because of a shift in the marketing department, despite no major changes in the technology.

It is important to take note of the possibility of 4G.  A T1 circuit is 1½ Mbit/s.  The minimum 4G standard of 100 Mbit/s is roughly 66 times larger.  Take a look at the graphic posted on my blog, Explaining Bandwidth, at http://keith.robertory.com/?p=560 for a better understanding of this.  A cell phone running true 4G will have more bandwidth than an entire site serviced by a T1.  We are right on the verge of a major cellular service shift.  When setting up a site during a disaster, it is common to use one cellular data card (aka aircard) per computer.  With these faster speeds, we can use one cellular data card to be the head of the site’s network.
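
For the arithmetic behind that comparison (my quick check, using the standard full T1 rate of 1.544 Mbit/s, which gives closer to 65x than the rounded 66x above):

    # Back-of-envelope comparison using standard figures.
    T1_MBPS = 1.544            # a full T1 circuit
    FOUR_G_MIN_MBPS = 100      # IMT-Advanced minimum while in motion

    print(f"4G minimum is about {FOUR_G_MIN_MBPS / T1_MBPS:.0f}x a T1")   # ~65x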

My team has already successfully set up a network in a disaster with one 4G aircard providing connectivity for 30 computers.  Granted, it was rare that users were on all 30 computers simultaneously surfing the net and streaming large files.  But that’s the point during disasters — and really even day to day.  It isn’t about providing maximum bandwidth to each user all the time.  Instead, focus on load balancing to provide enough bandwidth to meet the combined average need roughly 90% of the time.  It is ok for the system to be a little slower during peak demand times.  Set the users’ expectations correctly, and your team will get through it.
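
Here is a back-of-envelope sizing sketch in that spirit; every figure in it (concurrency, per-user demand) is an assumption of mine for illustration, not data from the deployment described above.

    # Rough sizing for a shared uplink at a disaster site.
    computers = 30
    concurrency = 0.4                 # assume ~40% of machines active at any moment
    avg_mbps_per_active_user = 1.5    # assumed average demand (email, web, sync)

    needed_mbps = computers * concurrency * avg_mbps_per_active_user
    print(f"Estimated shared need: {needed_mbps:.0f} Mbit/s")   # ~18 Mbit/s

    # A single aircard delivering a few tens of Mbit/s covers the average need,
    # even though it could not give every user maximum bandwidth at once.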

A cellular connection could be used to back up a wireline circuit.  Advanced routers can handle multiple uplink connections with prioritization and failover settings.  This will provide redundancy.  It is better than two wireline circuits backing each other up when the backhoe cuts through the utility lines outside the building.  Redundancy is nice.  Diverse redundancy is better.
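The failover behavior itself is simple to picture.  A minimal sketch of the logic (not any vendor’s configuration; the interface names and the health check are my assumptions) looks like this:

    # Prefer the wireline uplink; fall back to cellular when its health check fails.
    import subprocess

    UPLINKS = ["wireline0", "cellular0"]   # hypothetical interfaces, in priority order

    def uplink_is_healthy(interface: str) -> bool:
        """Ping a well-known address out a specific interface (Linux 'ping -I')."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", "-I", interface, "8.8.8.8"],
            capture_output=True,
        )
        return result.returncode == 0

    active = next((i for i in UPLINKS if uplink_is_healthy(i)), None)
    print(f"Routing traffic over: {active or 'no healthy uplink'}")
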

Your users in a disaster response will be on the computer only part of the time, with the rest of their time filled with other activities.  If a disaster responder travels to a location and spends the entire time behind a computer, then the question should be asked: could that person just stay in the office or at home to complete the same work?

 


Satellite 2011

I’m here at Satellite 2011 in Washington, DC.  Like many conferences, this one has new things worth seeing and trying to figure out.  Here are a few of the things that I’ve seen so far.  Follow the action on the Twitter hashtag #Satellite2011.

One vendor had a neat concept that could become very useful.  The auto-acquire VSAT unit is mounted in a shippable container.  A national organization can maintain the VSAT in a single warehouse and use an overnight shipping company or airline freight company to get it on-scene within 24 hours.  The VSAT is set up on the luggage rack of any rented SUV … and possibly just inserted in the bed of a pickup, depending on look angles.  The dish can be left up while driving.  They claim that a connection can be established at a “quick halt.”  The big advantage to this is removing the need to maintain a vehicle long-term.  Shift the vehicle to a rented one and only pay for use.  It would also work wonders on island operations where vehicle-mounted systems can’t (easily) be sent.

Picture of SUV with shippable case VSAT on roof.

Here is what really caught my eye: a flat panel antenna for a Ku-band satellite that is only 2 feet on the long side.  The vendor is just the manufacturer and provides it to integrators that build the form factor around it.  They said that, depending on the BUC, the panel can do 1-3 Mbps speeds.  The device is made from plastic and poured copper to keep the weight and cost down.  With this device at the core, I can have a near-BGAN-sized device that is easily portable.  Add a 25-watt BUC to have transmit and receive capabilities that exceed my 1.2m dishes with 5-watt BUCs.  The higher start-up cost for the smaller form factor could be offset by lower shipping and deployment costs over the life of the device.  Unfortunately, I haven’t seen this build-out yet, although I’m told it is here.

Now I need to call out the problem with many standard-size booths here.  Folks still do not know how to set up a booth that invites attendees to stop by.  For what this convention costs the exhibitors in staff time and money, I’m amazed how many booths are staffed by people talking to other staff and have put up barriers to conversation.
When the people in a booth are talking to each other, attendees don’t want to interrupt.  Tables, signs, and display cases are set up to divide the booth space from the walkway.  I don’t want to talk over a barrier unless I’m really interested in what you have to show me.
On the good side, I’m seeing more and more exhibitors understanding the need to double and triple the carpet padding.  Happy feet don’t leave quickly.

EMSE 6310.10 – Information Technology in Crisis and Emergency Management

Here is my work-in-progress syllabus for the upcoming course that I’m teaching at the George Washington University.  There are still some revisions that I plan on making.  If you were taking this course, what would you want to hear about?


Can home computers be protected?

I’ve recently been thinking about the concept that home computers make every end-user an administrator responsible for the building, maintenance, and security of their own system.  It also pits the inexperienced home user against the creators of spam, worms, viruses and other malware – who are generally very intelligent and experienced.  Does the average home computer really stand a chance?
