The following is speculative. I am in no way implying that I’m involved in research of this type. However, if I were… I would wonder how far a theoretical model of intelligence, with robotic applications, should go in terms of human emulation.

Should robotic AGI remain strictly its own thing, based purely on the ‘bare-bones’ requirements for conscious thought and what we consider life? This would, of course, produce a form of alien life, which would be harder to relate to, interpret, and predict. As for its appearance, it wouldn’t necessarily need to look human at all, as long as it could operate within our environment. However, it would lack the familiar physical emotional cues that we use to interact. On a subconscious level, we pick up a lot of information from people that aids in our interactions. Lacking familiar expressions may end up being a hindrance.

Should robotic AGI become as human as we can make it? This would include a lot of unnecessary actions, such as sneezing and yawning, just to make it fit in better. That’s not to say all of its actions would be unnecessary. Typically, in fiction, robots are seen more as walking computers even if they appear human. They would have an acute awareness of their internal conditions, such as a specific battery charge percentage. Humans, though, are rarely as precise in interpreting their physical condition. We can generally tell when we’re tired or hungry, and certain physical symptoms may increase, but we don’t have a precise measure of our needs. A robotic AGI, if it were to closely emulate human behavior, would then interpret its power needs in similarly generalized terms to know when it needs to recharge. This would be less accurate, but would help it relate to humans better, which would arguably be the point of creating a human-like robot in the first place (though navigating and interacting with our environment would be a close second). So while some human emulation might serve a dual purpose, other actions may be entirely for show, to put us at ease.
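To make the idea concrete, here is a minimal sketch of that kind of deliberate imprecision: an exact battery reading is reported only as a coarse, human-like feeling. The function name, categories, and thresholds are all invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch: a robot that knows its battery percentage exactly,
# but reports it only in the fuzzy, generalized terms a human might use.
# All names and thresholds are invented for illustration.

def describe_energy(battery_pct: float) -> str:
    """Map an exact battery percentage to a coarse, human-like feeling."""
    if battery_pct > 60:
        return "fine"
    elif battery_pct > 30:
        return "getting tired"
    elif battery_pct > 10:
        return "hungry"      # should recharge soon
    else:
        return "exhausted"   # recharge now

print(describe_energy(72.4))  # exact reading in, generalized feeling out
print(describe_energy(12.9))
```

The deliberate loss of precision is the point: a human conversation partner gets “I’m getting tired” rather than “charge at 47.3%”.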

Or finally, should a robotic AGI have a blend of robotic and human features? Some sort of balance where they’re human enough that we could potentially treat them as equals in our society, but not so human as to cause social disruptions of various sorts. They could have their own set of physical social cues, perhaps predetermined sets of movements in response to stimuli that are uniquely robotic but that humans could still interpret to a reasonable degree without much difficulty. Features such as glowing eyes, with certain colors corresponding to emotional states, are something already presented in fiction. Red eyes would obviously register as ‘mad’ or ‘dangerous’ to a lot of people.

Then there are senses that wouldn’t quite work like ours. Until we can create a sensory mechanism that operates like our own tongue or nose, those types of sensory inputs will be limited. This would create a social barrier around typical activities like enjoying a meal or beverage, or smelling a flower. It might be possible to detect a variety of common factors and translate them into an approximate response, but it would never be close enough to our own responses without similar complexity and subtlety in detecting smell or taste.

What could be done, though, is this: a robot could detect things like carbon monoxide and translate it as a ‘bad smell,’ compelling it to leave the area and encourage others to do the same. This would give it a sense of smell that has a practical use and is still approximate in function to a human’s in terms of reaction. Certainly better than just giving a specific analysis of the CO levels like a computer readout, or even beeping like a detector.

However, that only considers the ways it interacts with people, not the emotional effect on the robot and how that would color its experiences. Like any living thing, if it’s subjected repeatedly to unpleasant stimuli, it will tend to avoid said stimuli in the future.
A robot put into an environment with high CO levels repeatedly would have a negative experience, which may affect its mental state. So I suppose reactions like that may depend on its purpose. If we created robots like this and had them operate in such environments or carry out unpleasant tasks, we likely wouldn’t give them the sense that these tasks are unpleasant, or they wouldn’t do them. Our emotional reactions usually just tell us whether we should continue doing a thing, or stop doing it and avoid it. So a robot’s reactions would depend on what it’s designed for.
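The ‘CO as a bad smell’ idea above can be sketched as a simple translation layer: a raw sensor reading comes in, and an approximate human-like reaction comes out instead of a precise readout. This is a hypothetical illustration; the function name and thresholds are invented, not taken from any real sensor specification.

```python
# Hypothetical sketch: translating a raw carbon monoxide reading into a
# qualitative, human-like 'smell' reaction rather than a computer readout.
# Thresholds are invented for illustration only.

def smell_reaction(co_ppm: float) -> str:
    """Translate a CO concentration (ppm) into an approximate reaction."""
    if co_ppm < 9:
        return "no noticeable smell"
    elif co_ppm < 50:
        return "faint bad smell; stay alert"
    else:
        return "strong bad smell; leave and warn others"

print(smell_reaction(2.0))
print(smell_reaction(120.0))
```

The highest tier doubles as the ‘compelled to leave and encourage others’ behavior described above: the reaction itself carries the warning, the way a human wrinkling their nose does.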

I suppose the bigger question, considering all this, is: what would be the purpose of making human-like robots at all?
