The new Commercial Mobile Alert Service (CMAS) allows emergency communications to be sent directly to the cell phones and other mobile devices of people in threatened areas. CMAS is a partnership between FEMA, the Federal Communications Commission (FCC) and wireless carriers to enhance public safety.
Conceived in the WARN (Warning, Alert and Response Network) Act of 2006, CMAS lets officials at all levels of government (federal, state and local) send three types of textlike alerts—presidential alerts, AMBER alerts and alerts of imminent threats to life and safety—to targeted recipients through participating wireless providers. It ensures, the FCC says, that “emergency alerts will not get stuck in highly congested user areas, which can happen with standard mobile voice and texting services.”
That’s potentially pretty useful, but only half the battle. Alerts like those from CMAS can get people’s attention—but often don’t prompt the actions people need to take to stay safe. Instead they can trigger a kind of virtual “milling,” in which people delay acting while they verify a threat’s credibility and seek more information before deciding what to do.
“There is a series of sense-making activities that occur as people move toward taking action,” explains Jeannette Sutton, PhD, a prominent disaster sociologist and senior researcher at the University of Colorado-Colorado Springs’ Trauma, Health & Hazards Center. “First people have to receive a warning. Then they have to understand what’s in the message. Then they have to trust the source—that it’s credible information and they should act on it. Then they actually have to personalize it. And in the process of all this, they’re seeking confirmation. They’re constantly trying to determine whether it’s true, it’s credible and they should be taking action.”
Why could this be a problem for CMAS? Because of the messages’ length: Currently they’re capped at 90 characters, with no links to further information. That leaves precious little room to address all those recipient needs.
In fact, CMAS messages may heighten confusion if people don’t know what they are. Because CMAS is a new system, there’s been relatively little public education about it. But most of the major wireless carriers are participating in the program, and their customers will get the alerts unless they opt out. That means, one day out of the blue, their phones may start to vibrate or buzz or flash, then provide only a terse warning from an unrecognized source. “People may,” says Sutton, “be at a total loss to be able to actually act.”
Providing a key link or two could help with that, as could some public education beforehand. Think of AMBER alerts: Everyone knows what they are, both because of the publicity that accompanied their development (and is reinforced every time one is issued) and because they stemmed from the high-profile abduction and murder of a child. The period right after a disaster, of course, is the best time to educate people for future ones, so America was particularly receptive to such a mechanism when it was developed.
There are other best practices for warning people, including conveying consistent information across channels and choosing a credible spokesperson. That’s generally someone authoritative and nonpolitical (firefighters get top marks for this). Ideally any message should specify multiple levels of sending authority—e.g., “This is coming from the sheriff’s office, as well as your local EMS, hospitals and the CDC”—to enhance its credibility. And it should include enough of the right information to compress that curve of understanding/trusting/personalizing/verifying. That may, ultimately, require more than 90 characters.