Disaster Recovery & Archiving: Best Outcomes in the Worst Cases

 

The saying “hope for the best and plan for the worst” is more than good advice. When hospital servers crash or vital equipment and patient data are underwater, a well-thought-out disaster recovery plan with redundant archiving is crucial to weathering the crisis and getting systems back up and running as quickly as possible. Now a HIPAA requirement, disaster recovery planning enables hospitals to determine their risks and plan for the worst.
 

How to weather a 1,000-year storm

Back in 2001, Tropical Storm Allison barreled through the Gulf of Mexico, killing 50 people. Hit particularly hard was the state of Texas, where 20 inches of rain fell in 12 hours, causing massive flooding. The Texas Medical Center (TMC) campus in Houston, a 6,500-bed, 13-hospital network that employs 93,500 and sees 6 million patients per year, suffered $2 billion in damages.

Most of the 49 buildings on the 695-acre campus were forced to close due to the flooding, and more than 1,000 patients were evacuated. Primary and backup power supplies failed and hospital access was lost at several facilities, including the Memorial Hermann Hospital trauma center and Ben Taub General.

The basement of Baylor College of Medicine flooded, destroying 25 years of research data, 90,000 research animals and 60,000 tumor samples, a loss of $495 million. Memorial Hermann Hospital evacuated 540 patients, lost $60 million in cardiac care equipment and incurred $433 million worth of damage. It took the facility almost 18 months to rebuild. Similarly, the Methodist Hospital shut down for five weeks and was forced to discharge 400 patients.

TMC’s data center was not damaged, but because the facility lost both primary and backup power, it remained dark for several days, says Matt A. Fink, vice president of IT operations at the Methodist Hospital System.

But all was not lost. TMC had a contract with SunGard for archiving and disaster recovery services, which helped immensely in the storm’s aftermath, says Fink. Data were stored off-site, and the SunGard services allowed TMC off-site access to its pharmacy system, so the hospital could get to pertinent patient data even before power was restored at the facility. “We could have restored additional patient systems through the SunGard services, but with no patients at any of the hospitals, we instead focused our efforts on restoring power to the local data center,” he says.

Meanwhile, thousands of paper medical records kept in the basement of the hospital were soaked, but were “painstakingly salvaged through a special drying process to retain legibility, and then scanned into a document imaging system,” Fink says.

Because of the disaster, the facility has since moved to digital storage for all of its medical records, and the campus plans to construct a new off-site data center and to keep testing its disaster recovery capabilities regularly, he says.

“We have considered lessons learned with Tropical Storm Allison in the development of the technology and redundancy for the new data center,” says Fink.
 

One step ahead of Katrina

When Hurricane Katrina ravaged a 100-mile-wide swath of Louisiana, Mississippi and parts of Alabama in 2005, Diagnostic Imaging Services (DIS), a four-location metro New Orleans outpatient radiology imaging center, was one step ahead of the catastrophe.

Under its umbrella of care, DIS performs an estimated 100,000 to 120,000 digital mammography, MRI, CT, DEXA, ultrasound, nuclear medicine, and digital radiography and fluoroscopy exams per year. To handle this hefty image load, DIS installed the GE Healthcare Centricity PACS in 2003, along with the vendor’s application service provider (ASP) model to facilitate off-site image data archiving.

Although Katrina pounded the region and flooding damaged two DIS facilities, the data backup system ensured that no patient data or records were lost in the storm.

The PACS allows radiology images to be stored either on-site or off-site. “We keep about 60 to 90 days’ worth of local storage on-site for retrieval, and everything else that we do is also stored off-site,” says Kathy Rabalais, director of clinic services/IS at DIS. “We have huge pipelines connecting us back and forth so if we need to fetch previous studies from several years ago, we can get them almost immediately.”
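The arrangement Rabalais describes is essentially a two-tier archive: a small on-site cache of recent studies backed by a complete off-site copy reachable over a fast link. The sketch below illustrates that retrieval logic in Python; the paths, file naming and function are hypothetical stand-ins, not the actual GE Centricity implementation.

```python
# Minimal sketch of tiered image retrieval: recent studies come from the
# on-site cache, older priors are pulled from the off-site archive.
# All names and paths here are illustrative assumptions.
import shutil
from pathlib import Path

LOCAL_CACHE = Path("/pacs/local_cache")        # roughly 60-90 days of recent studies
OFFSITE_ARCHIVE = Path("/mnt/offsite_archive") # complete long-term copy at the remote site

def fetch_study(study_id: str) -> Path:
    """Return a local path to the study, pulling it from the off-site
    archive if it has already aged out of the local cache."""
    local_copy = LOCAL_CACHE / f"{study_id}.dcm"
    if local_copy.exists():
        return local_copy  # recent study, served directly from on-site storage

    # Older prior: retrieve it over the wide-area link from the archive.
    archived_copy = OFFSITE_ARCHIVE / f"{study_id}.dcm"
    if not archived_copy.exists():
        raise FileNotFoundError(f"Study {study_id} not found in either tier")
    shutil.copy(archived_copy, local_copy)
    return local_copy
```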

DIS was evacuated for three weeks following the storm. When DIS staff was given the all-clear to head back into one of the facilities, “we pulled a small group of staff together, set up a PACS station and had radiologists and medical records clerks release patient information for the first couple of weeks after the storm,” she says. “If you were a patient who had a mammogram the day before Katrina, and you had a positive finding, we could quickly get that result to a physician in Houston, if that happened to be where you went.”

Two DIS facilities did not reopen until the end of 2005, while the fourth wasn’t operational until February 2006. But even before DIS was fully back up and running, referring physicians were able to access imaging reports online using the GE Centricity Web Portal, Rabalais adds.
 

IT phone home

“On Nov. 13, 2002, at 1:45 p.m., Beth Israel Deaconess Medical Center [BIDMC] went from the hospital of 2002 to the hospital of 1972,” says John D. Halamka, MD, MS, CIO of BIDMC and the Harvard Medical School.

The entire BIDMC system crashed, freezing all patient information, including prescriptions, lab tests, patient records and Medicare bills, and taking BIDMC’s clinical systems offline for four days. No patients were harmed, but clinicians were left without decision support or any electronic means of delivering care, says Halamka.

The network was flooded with so much data that nothing could get through, causing a complete network collapse that Halamka compares to “Napster-like internal attacks.” And while no patient data were lost, “the network was so congested that nothing could flow,” he says. The hospital networks and systems were shut down deliberately to avoid damaging any of the data.

“It would have been bad if we had missing data, or worse, inaccurate data,” says Halamka. “We avoided losing data, but the hospital was paralyzed … we couldn’t enter an order or view a lab,” he says. “If you are a total digital facility, now you are running around with a lot of paper—this was a real challenge.”

BIDMC replaced some hardware and improved the network after the outage, but Halamka says, “the key point here was that this outage completely changed our processes.”

For example, “we did not have a robust, tested downtime plan for a total network collapse,” he says, nor did the hospital have a relationship with its network vendor, Cisco. After the outage, BIDMC designed new processes for transferring lab results, orders and other patient data, and fortified the vendor relationship, making Cisco a part of the infrastructure plan.

In addition, a multidisciplinary team of IS department staff, vendor engineers and IS leadership meets weekly as the “Change Control Board” to discuss, review and analyze any changes made within the BIDMC IT systems, says Halamka. “You could have the world’s best hardware, but if staff are changing it left and right, without good communication, the system is still going to go down.”

After the system failure in 2002, Halamka says, it became apparent that keeping both copies of patient data in one physical location was unsatisfactory. BIDMC invested $10 million to build a geographically separate data center so patient data could be archived off-site as well as in-house, mitigating data loss in case of a fire, flood or another system failure. “If one whole data center goes down or disappears, we have an exact duplicate so that no data are lost,” Halamka says.
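The duplicate-data-center approach Halamka describes boils down to writing every record to two geographically separate sites and treating the write as durable only when both copies exist. A minimal sketch of that idea follows; the site paths and storage call are entirely hypothetical and stand in for whatever replication mechanism BIDMC actually uses.

```python
# Hedged sketch of dual-site archiving: a record counts as safely stored
# only once both the in-house and the remote data center hold a copy.
# Site paths and the JSON-file storage are illustrative assumptions.
import json
from pathlib import Path

SITES = [Path("/datacenter_local"), Path("/datacenter_remote")]

def store_record(record_id: str, record: dict) -> None:
    """Write the record to every site; an exception from either write
    tells the caller the data are not yet fully protected."""
    payload = json.dumps(record)
    for site in SITES:
        # In practice this would be a network write to the remote site.
        (site / f"{record_id}.json").write_text(payload)
    # Reaching this point means an exact duplicate exists at both sites.
```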

“So much of what we do in IT these days is about the people, and not so much about the technology, so that is the big lesson—by creating good processes and communicating with people, you can avoid dire consequences,” he says.
