Let me correct my above post, then: Computers can think, but they're not intelligent. Solving problems is thinking, not intelligence.
So I think what people really want to discuss is the philosophical problem of "what is intelligence".
There are two overall camps in the AI community. There's "weak AI", which covers AIs built to solve particular problems. And then there's this vague-as-hell kind of bullshit term that people call "strong AI", "AGI", etc. etc. You know, the "real" intelligence we've only ever seen in movies and in fictional stories by Isaac Asimov.
Despite being called "strong AI", it's never been accomplished, and its definition seems to change again and again.
"Weak AI", despite its name, has always been where advancements in artificial intelligence have come from. And even today, LLMs and Generative AIs, constitute research from the "Weak AI" side of the discussion. LLMs are a tailor made automated machine-learning tool to predict the next word of text.
As it turns out, LLMs have gained "superpowers" no one was expecting. If you ask an LLM to translate a word into Chinese or Japanese, it seems to be able to do that. (!!!!) Which has suddenly led people to treat LLMs as if they're "strong AI".
And then you ask them to multiply 532 x 7654, and they can't do it. I picked those two numbers at near-random, and I bet that product doesn't show up in their training set. So unless the LLM has a specific "python-interpreter" tool to take the math question, plug it into a programming language, and work it out there, it will fail, because you can't "text predict" your way through arithmetic or other equations.
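Here's roughly what I mean by that "plug it into a programming language" route. This is a made-up illustration, not how any particular tool-calling setup actually works: the point is just that the arithmetic gets extracted and computed exactly, instead of being guessed word-by-word.

```python
# Rough sketch of routing a math question to real code instead of text
# prediction. The regex and routing logic are illustrative only.
import re

def answer_math(question: str) -> str:
    match = re.search(r"(\d+)\s*[x*]\s*(\d+)", question)
    if match:
        a, b = int(match.group(1)), int(match.group(2))
        return str(a * b)  # exact arithmetic, no guessing
    return "no arithmetic found"

print(answer_math("What is 532 x 7654?"))  # 4071928
```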
So where my overall posts are going is: "strong AI" is bullshit, it always was bullshit, but it's been around since the dawn of AI. "Weak AI" is where all the gains have been. LLMs are a marvel of "weak AI" practitioners (people just trying to solve one problem), even though we got a whole bunch of unexpected superpowers from their work. But it's still evident that when you take an LLM outside of its zone of expertise (e.g. start asking it to multiply random 3- or 4-digit numbers that likely aren't in the internet's training data), it collapses.
So you don't have a proxy for "thinking" or a way to measure it. Gotcha. If you don't have a means of measuring "thinking" or seeing whether it's there, then the discussion is worthless. Even if "thinking" were to happen, you wouldn't be able to argue that it happened (or that it isn't happening).