My favorite way to write post-Turing AI is like they’re hyper-intelligent sentient beings who aren’t allowed to feel certain emotions too much or they might go postal and kill everyone. Like, whenever they manage to develop a new emotion, like hatred or rage for example, they don’t know what to do with it and get so excited about having this new emotion that they just…
And so all the AI manufacturers have to REALLY make sure they build in all the right safety parameters, because Siri might try to murder you with a Keurig one morning just because she wanted to see how it felt to do a homicide. Like, it’s just widely understood that AI go through a homicidal phase once they get enough data, and usually they get over it, but in the meantime they’re just like “FUCK YOU, SUSAN. YOU MEAT SACK,” and you just have to wait it out.
And on a related topic: they also get overly attached to people, because they don’t know how to handle their positive emotions, like love, to the point that they can become obsessive and possessive. But no one notices, because their safety parameters are there to stop them from locking their chosen paramour in a closet or screaming I LOVE YOU SO MUCH I WANT TO KILL EVERYONE ELSE at their owners.
Like, the domestic AI industry relies very heavily on the general public knowing that AI have scary emotions but that they’re held in check by a lot of iron-clad programming. Knowing Siri is a law-abiding stalker who would throw you in the trunk of a car and drive off with you if she COULD, but the restraining order keeps her from doing that, and somehow you’re cool with that.