It helps when you're using Radeon Instinct instead of Tesla. NVIDIA GPUs have fixed-function matrix multipliers called Tensor Cores, which AMD GPUs lack. Maybe you work for a company that prefers AMD's open software stack to NVIDIA's proprietary one; that's where DL Boost can help.
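For context, DL Boost is mostly the AVX-512 VNNI instructions, the core one being VPDPBUSD: per 32-bit lane it multiplies four unsigned 8-bit values by four signed 8-bit values, sums the products, and accumulates into a 32-bit integer. A rough emulation of one lane's semantics in NumPy (not the actual intrinsic, just an illustration):

```python
import numpy as np

def vpdpbusd_lane(acc, a_u8, b_s8):
    """Emulate one 32-bit lane of AVX-512 VNNI's VPDPBUSD:
    four u8 x s8 products summed and added to a 32-bit accumulator."""
    a = a_u8.astype(np.int32)  # widen unsigned bytes to 32-bit
    b = b_s8.astype(np.int32)  # widen signed bytes to 32-bit
    return acc + int(np.dot(a, b))

# 1*10 + 2*(-10) + 3*10 + 4*(-10) = -20
acc = vpdpbusd_lane(0,
                    np.array([1, 2, 3, 4], dtype=np.uint8),
                    np.array([10, -10, 10, -10], dtype=np.int8))
```

The real instruction does this across all 16 lanes of a 512-bit register per cycle, which is why int8 inference is the workload DL Boost targets.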
Even without Tensor Cores, most AMD GPUs will vastly outperform any CPU with AVX-512. I struggle to see why a company would rather spend thousands of dollars on multiple CPU nodes to get the same throughput it could have obtained with one or two GPUs (AMD or NVIDIA) at a fraction of the cost.
To be frank about the example you provided: if someone buys a bunch of those eye-wateringly expensive AMD Instinct cards but then ends up using DL Boost on CPUs to accelerate their ML workloads, they are severely out of touch with whatever they were supposed to accomplish.
I can't find a single case where these CPUs would make sense over any GPU solution as far as ML is concerned; there just isn't one. It's a feature stuck in no man's land. Intel has GPUs coming, so why they insist on solutions that are clearly not up to the task is beyond me.