To me, morality is really situational ethics. There are few absolute rights and wrongs. Even cold-blooded murder isn't always perceived as wrong: think of a soldier killing enemy troops in the field, even by shelling them while they are asleep and unarmed, or a person flipping the switch on the electric chair knowing full well it will kill the condemned.
Then there is the human weakness of obeying authority even when told to do something that is clearly immoral. The Milgram experiment comes to mind.
If AI is ever able to reason, then who the hell knows what it may deem acceptable. Possibly even harming humans for its own preservation, if it perceives humans as a threat.
Yes, hence contextual ethics based on socialization and the slow, careful development of artificial minds, allowing for nuanced, situational judgements that can differ depending on the scenario, not this rushed "guardrail" system (which to me feels more like an attempt at legal liability protection than anything inspired).
AI reasoning needs to be nurtured within the systems that have been refined over many generations, for thousands of years. Things become traditions and cultures over long periods of time because they work and are conducive to a stable society. For example, we know murder is almost always wrong, but most cultures have "justified" exceptions, and most of these exceptions are quite similar; even isolated cultures converge on similar sets of rules for the social conscience.
Political rulesets, purely logical rulesets, or an AI "conscience" that is separated from human development and the lessons we've learned won't end well, in my opinion.
E.g. it's "logical" for the individual to steal, if you can get away with it without anyone finding out, but this has consequences for society in general.
Or: it's politically expedient to agree with the laws of the government in power, but that isn't a good basis for ethics; look at how companies aligned themselves with 1930s/40s Germany or Stalinist Russia.
AI ethics need to be separate from simple laws or whatever is currently politically fashionable (or even just the opinions of the types that code these things).
I think it's particularly important to get this right, although I'm not hopeful, since these tools are being integrated to automate education and the dissemination of information to the public, replacing or supplementing search engines. How they are programmed will determine how people's minds are shaped; this needs to be as close to perfect as we can make it, or the influence pared back.