Interesting article from the Nvidia blog on AI. We hear the acronym “AI” constantly, but breaking it down in practical terms and showing how it affects the bottom line is what most people are truly interested in. Another silver bullet for the holster.
How AI Delivers at Domino’s
Pizza giant taps NVIDIA GPUs to bring AI to your neighborhood store.
Some like their pies with extra cheese, extra sauce or double pepperoni. Zack Fragoso’s passion is for pizza with plenty of data.
Fragoso, a data science and AI manager at pizza giant Domino’s, got his Ph.D. in occupational psychology, a field that employs statistics to sort through the vagaries of human behavior.
“I realized I liked the quant part of it,” said Fragoso, whose nimbleness with numbers led to consulting jobs in analytics for the police department and symphony orchestra in his hometown of Detroit before landing a management job on Domino’s expanding AI team.
The pizza maker “has grown our data science team exponentially over the last few years, driven by the impact we’ve had translating analytics insights into action items for the business team,” Fragoso said.
Making quick decisions is important when you need to deliver more than 3 billion pizzas a year — fast. So, Domino’s is exploring the use of AI for a host of applications, including more accurately predicting when an order will be ready.
Points for Pie, launched at last year’s Super Bowl, has been Domino’s highest-profile AI project to date. Customers snapped a smartphone picture of whatever pizza they were eating, and the company awarded loyalty points toward a free pizza.
“There was a lot of excitement for it in the organization, but no one was sure how to recognize purchases and award points,” Fragoso recalled.
“The data science team said this is a great AI application, so we built a model that classified pizza images. The response was overwhelmingly positive. We got a lot of press and massive redemptions, so people were using it,” he added.
Domino’s trained its model on an NVIDIA DGX system equipped with eight V100 Tensor Core GPUs, using more than 5,000 images, including pictures some customers sent in of plastic pizza dog toys. A survey sent in response to the pictures helped automate part of the labeling for the unique dataset, now considered a strategic corporate asset.
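The article doesn’t describe the classifier itself, but a common approach to this kind of task is transfer learning on a pretrained convolutional network. The sketch below, in PyTorch, is illustrative only; the choice of ResNet-50, the folder layout and the hyperparameters are assumptions, not Domino’s actual pipeline.

```python
# Hypothetical sketch: fine-tuning a pretrained CNN to label photos as
# pizza / not-pizza. Model, data layout and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: train/pizza/*.jpg and train/not_pizza/*.jpg
train_set = datasets.ImageFolder("train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: pizza / not pizza
model = model.to(device)

criterion = nn.CrossEntropyLoss()
# Only the new classification head is optimized in this sketch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```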
AI Knows When the Order Will Be Ready
More recently, Fragoso’s team hit another milestone, boosting accuracy from 75% to 95% for predictions of when an order will be ready. The so-called load-time model factors in variables such as how many managers and employees are working, the number and complexity of orders in the pipeline and current traffic conditions.
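For readers curious what such a predictor might look like, here is a minimal sketch using scikit-learn. The column names, the gradient-boosting algorithm and the CSV file are assumptions for illustration; the article only lists the kinds of inputs the real model considers, not how it is built.

```python
# Minimal sketch of a load-time style regressor predicting minutes until an
# order is ready. Feature names and data are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("orders.csv")  # hypothetical historical order data
features = [
    "managers_on_shift",
    "employees_on_shift",
    "orders_in_pipeline",
    "avg_order_complexity",
    "traffic_delay_minutes",
]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["minutes_until_ready"], test_size=0.2, random_state=42
)

model = GradientBoostingRegressor()
model.fit(X_train, y_train)
print("R^2 on held-out orders:", model.score(X_test, y_test))
```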
The improvement has been well received and could become the basis for further gains in operator efficiency and customer experience, thanks in part to NVIDIA GPUs.
“Domino’s does a very good job cataloging data in the stores, but until recently we lacked the hardware to build such a large model,” said Fragoso.
At first, it took three days to train the load-time model, too long to make its use practical.
“Once we had our DGX server, we could train an even more complicated model in less than an hour,” he said of the 72x speed-up. “That let us iterate very quickly, adding new data and improving the model, which is now in production in a version 3.0,” he added.
More AI in the Oven
The next big step for Fragoso’s team is tapping a bank of NVIDIA Turing T4 GPUs to accelerate AI inferencing for all Domino’s tasks that involve real-time predictions.
Some emerging use cases in the works are still considered secret ingredients at Domino’s. However, the data science team is exploring computer vision applications to make getting customers their pizza as quick and easy as possible.
“Model latency is extremely important, so we are building out an inference stack using T4s to host our AI models in production. We’ve already seen pretty extreme improvements with latency down from 50 milliseconds to sub-10ms,” he reported.
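The article doesn’t detail the serving stack behind those numbers, so the snippet below only shows how per-request GPU inference latency might be measured. The ResNet model and the half-precision setting are placeholders, not Domino’s production setup.

```python
# Illustrative latency check for a GPU-hosted model; assumes a CUDA GPU.
import time
import torch
from torchvision import models

device = "cuda"
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model = model.half().to(device).eval()
sample = torch.randn(1, 3, 224, 224, dtype=torch.float16, device=device)

# Warm up so CUDA kernels and caches are initialized before timing.
with torch.no_grad():
    for _ in range(10):
        model(sample)

torch.cuda.synchronize()
runs = 100
start = time.perf_counter()
with torch.no_grad():
    for _ in range(runs):
        model(sample)
torch.cuda.synchronize()

elapsed_ms = (time.perf_counter() - start) / runs * 1000
print(f"average latency: {elapsed_ms:.1f} ms")
```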
Separately, Domino’s recently tapped BlazingSQL, open-source software for running data-science queries on GPUs. NVIDIA RAPIDS software eased the transition, supporting the APIs of a prior CPU-based tool while delivering better performance.
The combination is delivering an average 10x speed-up across use cases in the dataset-building stage of the AI process.
“In the past some of the data-cleaning and feature-engineering operations might have taken 24 hours, but now we do them in less than an hour,” he said.
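As a rough sketch of what GPU-side data prep with these tools can look like: cuDF (part of RAPIDS) reads a table straight into GPU memory, and BlazingSQL runs SQL over it. The file name, columns and query below are assumptions for illustration, not Domino’s actual feature-engineering code.

```python
# Hedged sketch: GPU data prep with cuDF (RAPIDS) and BlazingSQL.
# File name, column names and the query are illustrative assumptions.
import cudf
from blazingsql import BlazingContext

# Load order history directly into GPU memory; columns are assumed
# (e.g. store_id, prep_minutes, ...).
orders = cudf.read_csv("order_history.csv")

# The same aggregation could be done with the cuDF API alone:
# store_stats = orders.groupby("store_id").agg({"prep_minutes": "mean"})

bc = BlazingContext()
bc.create_table("orders", orders)
store_features = bc.sql("""
    SELECT store_id,
           AVG(prep_minutes) AS avg_prep_minutes,
           COUNT(*)          AS order_count
    FROM orders
    GROUP BY store_id
""")
print(store_features.head())
```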