IT DR planning helps Frost Museum of Science weather Irma
The Frost Museum of Science in Miami braced for Hurricane Irma with solidified DR planning and a hardened storage and data center infrastructure, and it suffered minimal damage from the storm.
When you open a large public facility right on the water in Miami, a solid disaster recovery setup is essential for the IT team. Hurricane Irma's assault on Florida in September 2017 made that clear to the Phillip and Patricia Frost Museum of Science team.
The expected Category 5 hurricane moving in on Florida had the new Frost Museum square in its sights. Irma turned out to be less threatening to Miami than feared, and the then-4-month-old building suffered no major damage. Still, the museum's vice president of technology said he felt prepared for the worst with his IT DR planning.
When preparing to open the museum at a 250,000-square-foot site on the Miami waterfront, technology chief Brooks Weisblat installed a new Dell EMC SAN in a fully redundant data center and set up a colocation site in Atlanta as part of the museum's disaster recovery plan. The downgraded Category 4 hurricane dumped water into the building, but it did no serious damage and caused no downtime.
The new Frost Science Museum building features three diesel generators and redundant power, including 20 minutes of backup power in the battery room that should provide enough juice until the backup generators come online. While much of southern Florida lost power during Irma, the museum did not.
"We're sitting right on the water. It was supposed to be a major hurricane coming straight through Miami. But six hours before hitting, it veered off, so it wasn't a direct hit," Weisblat said. "We have two weather stations on the building, and we recorded hurricane-force winds of 90 to 95 miles per hour. It could have been 190 mile-per-hour winds, and that would have been a different story."
Advance warning of the hurricane prompted the museum's team to bolster its IT DR planning.
"The hurricane moved us to get all of our backups in order," Weisblat said. "Opening the building was intensive. We had backups internally, but we didn't have off-site backups yet. It pushed us to get a colocated data center in Atlanta when the hurricane warnings came about a week before. At least we had a lot of advance notice for this one. Except for some water here and there, the museum did well."
The Frost Science Museum raised $330 million in funding to build the new center in downtown Miami, closing its Coconut Grove site in August 2015. Museum organizers said they hoped to attract 750,000 visitors in the first year at the new site. From its May opening through Oct. 31, more than 525,000 people visited the museum.
Shifting to an all-flash SAN
When moving, Frost Science installed a dual-controller Dell EMC SC9000 -- formerly Compellent -- all-flash array, with 112 TB of capacity connected to 10 Dell EMC PowerEdge servers virtualized with VMware. As part of its IT DR planning, the museum uses Veeam Software to back up virtual machines to a Dell PowerEdge R530 server, with 40 TB of hard disk drive storage on site, and it replicates those backups to another PowerEdge server in the Atlanta location.
Brooks Weisblat, vice president of technology, Frost Science Museum
"If something happens at this site, we're able to launch a limited number of VMs to power finance, ticketing and reporting," Weisblat said. "We can control those servers out of Atlanta if we're unable to get into the building."
Before opening the new building, Weisblat's team migrated all VMs between the old and new sites. The process took three weeks. "We had to take down services, copy them to drives a few miles away, then bring those into the new environment and do an import into a new VM cluster," he said.
The data center sits on the third floor of the new building, 60 feet above sea level. It takes up 16 full cabinets, plus eight racks for networking, Weisblat said.
Frost Science Museum had no SAN in the old building. Its IT ran on 23 servers. Weisblat said he migrated the stand-alone servers into the VMware cluster on the Compellent array before moving. "That way, when the new system came online, it would be easy to move those servers over as files, and we would not have to do migrations into VMware in the new building during the crush time for our opening," he said.
The Dell EMC SAN runs all critical applications, including the customer relationship management system, exhibit content management, property management system software, the museum website, online ticketing and building security management systems. The security system controls electricity, lights, solar power, centralized antivirus deployments and network access control. "Everything is powered off this one system," Weisblat said.
The SAN has two Brocade -- now Broadcom -- Fibre Channel switches for redundancy. "We can unplug hosts; everything keeps running," Weisblat said. "We can unplug one of the storage arrays, and everything keeps running. The top-of-rack 10-gig [Extreme Avaya Ethernet] switches are also fully redundant. We can lose one of those."
He said that since the new array was installed, one solid-state drive has failed. "The SSD sent us an alert, and Dell had parts to us in two hours. Before I knew something was wrong, they contacted me."
Whether it's a failed SSD or an impending hurricane, early alerts and IT DR planning certainly help when dealing with disasters.