It's not about intelligence; it's about the ramifications of the lack thereof.
My concern with these pseudo-AIs is not that they get things wrong. It's that they don't know when they get things wrong, or in what way they are wrong: they are consistently, terrifyingly, confidently wrong. And far too many human beings have an unfortunate propensity to believe someone who says something confidently over someone who says "this is what I know and what I've inferred". Hence the anti-science movement, and populist leaders.
But these "AI"s will get better; over time, they'll be trained to be wrong less and less of the time. And as a consequence, we as a species will start to become dependent on them (this is human nature). Eventually - inevitably - one of these "AI"s will be in a position to make a decision that affects human lives, and due to its inherently flawed design it will choose an option that is completely and spectacularly and obviously wrong, and people will die. And the worst part? Nobody will be able to explain why that "AI" made the decision it did, because it's a black box, and therefore they won't be able to guarantee it can't make the same mistake.
People who are dependent on that "AI" - and by that point that may well be a significant part of society - will as a result likely have an existential crisis similar to the one you'd have if you woke up, went outside, and found the sky green instead of blue.
Conversely, a true artificial intelligence, with the ability to reason, could come to the same wrong decision... but being intelligent, it would understand that the decision would have a negative impact, and would likely avoid it.
Even if this AI did choose to proceed with that decision, it would be able to tell you why.
And finally, it could be taught why that decision was incorrect, with the guarantee that it would never make the same wrong decision again.
Humans as a species are almost certainly going to move to a society based on AIs. Do we want it to be based on AIs that we can trust to explain their mistakes, or on those that we can't? I say the former, which is why I believe that the current crop of pseudo-AIs, which are nothing more than improved ML models, are not only dishonest - they also have the potential to be incredibly, unimaginably harmful.