
How Salesforce Einstein machine learning makes AI practical

As it evolves, Salesforce's Einstein AI serves the needs of a growing range of organizations and takes the guesswork out of why its models make the decisions they do.

Shubha Nabar is the director of data science at Salesforce Einstein. In this Q&A, she discusses how her team is working to make Einstein AI better at serving the needs of businesses of all sizes.

What is your role within the Salesforce Einstein group?

Shubha Nabar: Think of Salesforce as a platform for building business applications, with sales, service and marketing applications. There's also a rich ecosystem of app developers who build custom applications on the platform. What we're trying to do is make it easy for these app developers to embed Einstein machine learning into their applications. So, I provide platform APIs that make this possible.

What is done on a typical day to improve Salesforce Einstein?

Nabar: That involves working with engineers and data scientists on the product roadmap, planning what we want to build in the next release and helping design the technical architecture for how to develop these platform capabilities. In particular, I'm very close to the automated Einstein machine learning side of the stack, where we build libraries that automatically create the models for your data and use case. I also talk to customers -- these could be internal or external developers -- to get a better understanding of their needs.

How is Salesforce Einstein's approach to AI different from that of some competitors?


Nabar: I think everyone has made great progress in advancing machine learning and democratizing access to it -- there are a lot of cloud-based machine learning platforms today. But one of the really challenging things about using them is that you still need strong engineering, data science and DevOps teams. You need data engineers who can get your data there. You need machine learning Ph.D.s who know what cross-validation is. You need DevOps teams to keep your processes up and running. One of Salesforce's big advantages is that, with customer data and business applications all available on the platform, Salesforce can offer more low-code, tightly integrated experiences, without a business needing an army of engineers and data scientists to apply machine learning to its data.

How does Einstein make it easier to work with dirty data -- data that isn't accurate?


Nabar: Two things. One is doing some of this data cleansing in an automated way. For instance, when we see that one feature is highly correlated with another, we can automatically detect it and, if necessary, remove it. Or, if a particular column doesn't have much variance, we can throw it out. Second, we can surface insights about the models we build, which we call explainability. This could inform the user of things such as, 'These were the most interesting factors in your models.' Those insights can surface surprising results that prompt the end user to dig deeper into their data.
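For illustration, the correlation and variance checks Nabar describes might look something like the following sketch in pandas. The thresholds and column names here are hypothetical, not Einstein's actual values.

```python
# A minimal sketch of automated data cleansing: drop near-constant columns
# and one of each pair of highly correlated features. Thresholds (0.95, 1e-4)
# are illustrative assumptions.
import numpy as np
import pandas as pd

def drop_redundant_features(df, corr_threshold=0.95, var_threshold=1e-4):
    numeric = df.select_dtypes(include=[np.number])

    # 1. Drop columns with almost no variance -- they carry no signal.
    low_variance = [c for c in numeric.columns if numeric[c].var() < var_threshold]
    df = df.drop(columns=low_variance)
    numeric = numeric.drop(columns=low_variance)

    # 2. For each pair of highly correlated columns, keep one and drop the other.
    corr = numeric.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    correlated = [c for c in upper.columns if (upper[c] > corr_threshold).any()]
    return df.drop(columns=correlated)

# Hypothetical columns: 'deal_size_usd' just duplicates 'deal_size_k'.
deals = pd.DataFrame({
    "deal_size_k": [10, 25, 40, 55],
    "deal_size_usd": [10_000, 25_000, 40_000, 55_000],  # perfectly correlated
    "region_code": [1, 1, 1, 1],                        # zero variance
    "num_meetings": [2, 5, 3, 8],
})
print(drop_redundant_features(deals).columns.tolist())  # ['deal_size_k', 'num_meetings']
```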

What are the different challenges in doing explainability well?

Nabar: We provide two kinds of explainability. We provide explainability at a global level, but we also provide insights at individual prediction levels. The way we do that is by using local interpretation techniques, such as LOCO [leave-one-covariate-out] and LIME [local interpretable model-agnostic explanations]. These are techniques that let you perturb the data and see how variations affect the prediction.
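To make the perturb-and-compare idea concrete, here is a simplified, LOCO-flavored sketch on a toy scikit-learn model. True LOCO retrains the model without each covariate, and LIME fits a local surrogate model; this sketch only neutralizes one feature at a time and measures how the predicted probability moves.

```python
# A simplified, LOCO-flavored local explanation: for one prediction, replace
# each feature with its training mean and measure the probability shift.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # toy training data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant
model = LogisticRegression().fit(X, y)

def local_attributions(model, X_train, x):
    """Change in P(y=1) when each feature is neutralized to its training mean."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = {}
    for j in range(x.size):
        x_pert = x.copy()
        x_pert[j] = X_train[:, j].mean()
        scores[j] = base - model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    return scores

print(local_attributions(model, X, X[0]))  # feature 2 should contribute ~0
```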

What's a good way for users to understand what a feature is in the context of their day-to-day work?

Nabar: There are questions such as, 'Was this someone who attended a marketing webinar?' or 'Is it someone who downloaded a white paper from the website?' These are all features that we would use to make the predictions. The relevant features change depending on whether you're a marketing person or a salesperson. If you're a salesperson, the question might be, 'Would having a meeting with the CEO make you more likely to close this deal?' A number might be returned, predicting the likelihood of closing the deal. When you ask why, it would tell you what informed the prediction and the follow-up steps that would increase the probability of closing the deal. We can make predictions and recommendations on the next best action for a user to take, and you can take that action within Salesforce.
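A hypothetical sketch of that flow: binary engagement features feed a close-probability model, and switching on each untaken action shows which next step would raise the probability most. The feature names and model here are illustrative, not Einstein's.

```python
# Toy next-best-action flow: predict close probability from engagement
# features, then test the uplift from each action not yet taken.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["attended_webinar", "downloaded_whitepaper", "met_with_ceo"]
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 3)).astype(float)  # toy deal history
y = (rng.random(300) < 0.2 + 0.2 * X[:, 0] + 0.4 * X[:, 2]).astype(int)
model = LogisticRegression().fit(X, y)

deal = np.array([1.0, 0.0, 0.0])                     # attended webinar only
base = model.predict_proba(deal.reshape(1, -1))[0, 1]
print(f"likelihood of closing: {base:.0%}")

for j, name in enumerate(features):
    if deal[j] == 0:                                 # untaken actions only
        what_if = deal.copy()
        what_if[j] = 1.0
        uplift = model.predict_proba(what_if.reshape(1, -1))[0, 1] - base
        print(f"next step '{name}': {uplift:+.0%}")
```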

What can smaller customers -- those who might not be so tech- and data-savvy, or might not even know what data they have -- do to get better results with predictions?

Nabar: Our customer success team works closely with customers to understand their use cases and data to see if Einstein is actually a fit. We also have a tool called the Einstein Readiness Assessor that can be run quickly on their data to see whether they have enough data to use Einstein. The assessor tool doesn't make recommendations; it just tells users whether Einstein is a good fit.
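Salesforce hasn't published the assessor's internals, but a readiness check of this kind might boil down to a few quick heuristics, such as whether there are enough labeled rows and whether the label isn't one-sided. The function name, columns and thresholds below are assumptions.

```python
# A hypothetical readiness heuristic, not the Einstein Readiness Assessor.
import pandas as pd

def enough_data_for_predictions(df: pd.DataFrame, label: str,
                                min_rows: int = 400,
                                min_minority_share: float = 0.05) -> bool:
    labeled = df.dropna(subset=[label])
    if len(labeled) < min_rows:
        return False                      # too few labeled examples
    counts = labeled[label].value_counts(normalize=True)
    if len(counts) < 2:
        return False                      # only one outcome -- nothing to learn
    return counts.min() >= min_minority_share  # label can't be too one-sided

# Usage (hypothetical table and column):
# enough_data_for_predictions(opportunities, label="closed_won")
```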

What are your thoughts on the ethics of the way you see people using AI, and what dilemmas might your customers run into?

Nabar: It's very important that they understand what their values are and what they consider fair vs. unfair outcomes, and that they ensure the data fed into machines doesn't contain the kinds of biases they want to avoid in real life. New algorithmic techniques to detect and eliminate this kind of bias are in the works, specifically to reflect the diversity of whatever population an organization is catering to. Organizations also need an early warning system for how the AI is behaving: If they are automating decisions, are those decisions aligned with their values?

What are the opportunities for using AI to identify and address biases that already exist in enterprises?

Nabar: There is a huge opportunity, and there are two aspects to this. First, with the emerging research on algorithmic techniques for detecting and eliminating bias, it is possible to codify a certain set of values in your algorithm to ensure fairness and consistency. A 2011 study by professor Shai Danziger found that legal rulings made after lunch breaks were much more lenient than rulings made before. There's an inherent inconsistency in how humans function and decide that you don't see in machines. If you can train a machine and give it the right data reflecting your values, there's hope that you can build a system that's consistently fair. The second aspect is explainability, involving the interpretation techniques that help explain why a decision was made.
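One concrete form such a bias-detection technique can take is a demographic parity check, which compares positive-prediction rates across groups, as in this minimal sketch. The data is made up, and what gap counts as too large would be the organization's call.

```python
# Demographic parity check: how far apart are positive-prediction rates
# across groups? A large gap is an early warning worth investigating.
import pandas as pd

def demographic_parity_gap(predictions: pd.Series, group: pd.Series) -> float:
    """Max difference in positive-prediction rate between any two groups."""
    rates = predictions.groupby(group).mean()
    return rates.max() - rates.min()

preds = pd.Series([1, 0, 1, 1, 0, 0, 1, 0])
groups = pd.Series(["a", "a", "a", "a", "b", "b", "b", "b"])
print(f"parity gap: {demographic_parity_gap(preds, groups):.2f}")  # 0.75 vs 0.25 -> 0.50
```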

That's another big differentiating factor, because you can ask a human, 'Hey, why did you decide this way this time, and that other way another time?' and never get a reliable answer. But, with a machine, you can always look into the algorithm and see exactly why a decision was made.
