Like many others out there, I am concerned.
For one, a lot of people involved in LLM/ML/other current AI stuff seem to have little regard for ethics. Like that Amazon Alexa demo imitating someone else's voice. Like, "who needs a voice imitator? Why are you selling this to the masses as is? Who was the idiot who decided to promote this with the 'hear your dead loved one's voice again' idea?"
There's Microsoft and OpenAI (and probably a few others as well) trying to push their idea that "if it's on the internet (or on whichever site a company manages, like Twitter/X, or DeviantArt via its owner Wix), it's up for grabs: free use, copyright doesn't apply, etc."
Meanwhile, if you grab some 30-year-old movie from a torrent site because you can't find it on DVD or on any streaming service, you get a letter from your ISP borderline treating you like a criminal over a shitty movie you literally can't find anywhere else.
Then there are the energy costs, the strain on water reserves, the increased CO2 emissions, etc. Environmental costs skyrocketing, basically.
There's also job concerns for a number of sectors.
There's a lack of care for truth and reality, judging by AI-generated articles and such.
There are people thinking "I don't need to know stuff, the AI will do it all for me", and that kind of thinking threatens future generations' development. How long until it becomes "thinking is not needed, the AI will do it for me"?
And that's just the list of immediate/short-term concerns. Start looking at the long term and the concern only increases.
I doubt anyone can ever regulate AI, any more than they can regulate the internet, or alcohol and drug consumption.
Politicians: "ban the Internet"