Dreamforce 2019 just kicked off in San Francisco. What an event and a great boost to the local economy: lots of people, celebrity speakers, lots of vendors, and many locations around the Moscone Center to accommodate the many sessions and events. A forest theme was selected this year – a forest in the middle of the urban forest – with cute characters including a bear…the same bear I am going to poke in this blog entry.
Salesforce is a formidable player, a trailblazer (check the little guy’s shirt!) that made SaaS what it is today. It’s a great success by any measure.
SaaS is an interesting space that we study closely here at ESG, and we see a big disconnect when it comes to backup and recovery, along with many challenges with Salesforce specifically. 33% of respondents to our cloud data strategies survey believe that SaaS-based applications don’t need to be backed up, and 37% rely solely on the SaaS vendor, believing the vendor is responsible for protecting the organization’s SaaS-resident application data.
People are clearly confused: Data is always your responsibility; backup is your responsibility; ensuring the compliance of your data is your responsibility; and ensuring service levels that meet your business requirements, especially mission-critical ones, is your responsibility.
Much work remains to be done in terms of education and solutions for enterprise-class backup and recovery of Salesforce environments. For SaaS in general, our research shows that service outages/unavailability causing data destruction or corruption lead the causes of data loss, followed by accidental deletion and the inability to recover all lost data with the current backup mechanism. Scary, right? For Salesforce in particular, respondents report identifying issues with their current backup/recovery SLAs and mechanisms, experiencing service interruptions, and hitting governor limits when backing up or restoring, to name a few challenges.
Salesforce intends to be and has become a de facto mission-critical system in many organizations around the world. Yet, in my opinion, and based on the research we have conducted, it’s not necessarily meeting the SLAs that one would expect from a mission-critical app. It’s not mission-critical ready…yet…on its own.
The only comparison point we have is on-premises environments, in which you can reach 99.99% or better availability and architect for zero or near-zero data loss. Look at Oracle, for example. If we set aside security (a key aspect, of course) to focus only on data backup/recovery and application availability, we’re not there yet. As a matter of fact, it’s hard, if not impossible, to find a clear statement on service availability from Salesforce. Maybe that’s what the bear is trying to figure out here:
Service availability is probably in the area of 99.7%, based on what some architects have told us (anecdotally). For organizations that can tolerate virtually no downtime because their “life” depends on being up and running 99.99% of the time, it just won’t work! It doesn’t sound like many hours in a year (99.7% works out to roughly 26 hours of downtime annually, versus under an hour at 99.99%) – and many smaller businesses don’t really care – but every minute counts for some customers. One more thing we learned is that there is maintenance time that interrupts write access to the service – that’s degraded availability – something to bear (pun intended) in mind.
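To put those percentages in perspective, here is a minimal sketch of the downtime math. It assumes a 365-day year and steady-state availability; the figures are illustrative back-of-the-envelope numbers, not published Salesforce SLA commitments.

```python
# Rough downtime math for a given availability percentage.
# Assumes a 365-day year; real SLAs often exclude planned maintenance windows.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Return the hours of downtime per year implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.7, 99.9, 99.99):
    print(f"{pct}% availability = ~{annual_downtime_hours(pct):.1f} hours of downtime per year")

# 99.7%  -> ~26.3 hours/year
# 99.9%  -> ~8.8 hours/year
# 99.99% -> ~0.9 hours/year (about 53 minutes)
```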
Salesforce has to (astutely) rely on an ecosystem of partners to alleviate the current limitations in this area. It’s a good thing.
We talked to Odaseva, a company we track closely, and they have released a new service that delivers exactly what I believe Fortune 500 or Fortune 1000 companies that want to run Salesforce as a mission-critical service need: they call it Ultra High Availability. The name is a bit emphatic, but this solution seems to address the issue. Other vendors I spoke with don’t seem to have anything like it…
We’re out of the woods!!