Blade Runner 2049 and related thoughts

Started by Beorning, November 25, 2017, 10:16:29 AM


Beorning

Yesterday, I finally went and watched Blade Runner 2049.

I admit that I was hugely disappointed. The movie was depressingly bleak, way too long, extremely slow and, at times, just boring. Even worse, I can't tell what the point of the whole story was. The original movie was very moving and concerned with the theme of personal freedom. This one was about... nothing in particular, IMHO.

Anyway... even as it is, the movie inspired some thoughts concerning possible human-AI relations. I'd be interested in hearing what you think about these matters.

My first question is: do you think it would be possible for relations between humans and androids / robots / AIs to move beyond the master / slave paradigm? I mean, it's great to imagine a possible future when humans and sentient robots are equals, but realistically speaking? Robots don't come into existence naturally. Someone needs to build them - and I don't think anyone would invest time and resources into producing robots just to let them go. I suspect that, if humans ever create sentient robots, it would be as slaves / servants / workforce... And if robots were to become equals, some sort of robot rebellion would have to happen. And that rebellion would have to end with the robots taking control of the robot factories - otherwise, after the robot emancipation, production would just stop and there would be no more new robots...

My second question: in the movie, there's this notion that the Wallace Corporation is creating androids who are naturally fully obedient - and it's shown as something evil. Again, I sympathize with that notion somewhat, but on the other hand: these androids *are* built as a workforce. And, as I said, if we ever get mass-produced sentient robots, it would be as servants. And if somebody is building servants, it's quite natural that there would be safeguards in place... I mean, do we consider Asimov's Laws of Robotics evil? Because they are, basically, the same thing: safeguards against robots disobeying humans...
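To make the comparison concrete: the Laws are essentially a fixed-priority veto list sitting in front of whatever the robot intends to do. Here's a minimal sketch of that structure - purely illustrative, the Action fields are made up and nothing here is a real robotics API:

Code: [Select]
# Hypothetical sketch of Asimov-style safeguards as a priority-ordered
# veto list. Nothing here is a real robotics API; the Action fields are
# invented purely to illustrate the structure.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # would this action injure a human?
    disobeys_order: bool = False   # does it conflict with a human order?
    endangers_self: bool = False   # would it destroy the robot?

def permitted(action: Action) -> bool:
    # The check order encodes the precedence of the Laws.
    if action.harms_human:        # First Law outranks everything
        return False
    if action.disobeys_order:     # Second Law: obey, unless that breaks the First
        return False
    if action.endangers_self:     # Third Law: self-preservation, lowest priority
        return False
    return True

# An order to harm someone fails the First Law check before the
# Second Law ever gets a say:
print(permitted(Action("shove bystander", harms_human=True)))   # False
print(permitted(Action("carry crate")))                         # True

Seen that way, it's not obviously evil - it's the same kind of safety interlock we'd put on any dangerous machine.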

So, what do you think? I am fascinated by the idea of the co-existence of humans and robots and I'd love to hear your opinions on these matters...

wander

I enjoy talking AI, it's one of my fave aspects of sci-fi, along with space travel and time stuff. Though technically there's also space-time and time dilation from that and I'm on a tangent already. ^^;

I think we have to wonder why we as humans would program sentience into robots if they're needed for basic reasons. Having social skills wouldn't usually be high on the list of needs, though it probably would be if, say, a companion robot was programmed for someone, or for workers where dealing with people would be important. Otherwise, why not make, say, a physical workforce for construction out of drones that simply do what is ordered of them? Actually giving something the intelligence to class as sentient has its repercussions, and then we can ponder more on what constitutes a mass-produced machine's rights, which is where the line between, say, worker and slave comes in. YMMV and that may go into more PROC issues that I don't pretend to be an expert on for this particular thread.
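To put it another way, an order-follower needs almost nothing under the hood: a command comes in, a handler runs, done. Something like this toy dispatcher - the commands are completely invented, it's just to show how little is needed:

Code: [Select]
# Toy sketch of a non-sentient worker drone: a bare command dispatcher.
# The commands and handlers are invented for illustration only.
def lift(args): print("lifting", args)
def move(args): print("moving to", args)
def weld(args): print("welding at", args)

HANDLERS = {"LIFT": lift, "MOVE": move, "WELD": weld}

def run(orders):
    for line in orders:
        cmd, _, args = line.partition(" ")
        handler = HANDLERS.get(cmd)
        if handler:
            handler(args)   # does exactly what was ordered...
        # ...no opinion, no refusal, no inner life required

run(["MOVE bay 4", "LIFT girder 12", "WELD joint A"])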

Rebellion isn't a guaranteed outcome; it could be a relatively more peaceful revolution. One good source on how things may play out is the Animatrix two-parter 'The Second Renaissance' - I highly recommend checking that out if you haven't already, though we see a more... violent take on how things may come to be there.

Also, there'd be more to consider with the moral and ethical implications when it comes (if it ever does) to biological 'replicant'-style automatons: vat-grown, genetically engineered or otherwise artificially built, though fully biological, human expies. This is the kind of quandary that Blade Runner and other media question. Look also to the RPG Eclipse Phase for how the rights of non-human physical forms are treated, whether vat-grown 'Pods' or the clanking masses of actual metal-and-plastic robots.

On the notion of being fully obedient, it's a question of free will and having it or not: having someone who has the capability to choose, yet still cannot. Also, as mentioned above, we as human creators would need to program these abilities of cognitive thought into our creations. If we're talking metal robots rather than biological replicants, a robot wouldn't have this level of sentience as is; it would need to be developed and programmed to the nth degree for that to happen. And to that, we must ask why we would go to that length for a simple worker with no need of social skills, or any need for it to do anything more than it's programmed to do.
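You can even picture that 'can choose but cannot' part structurally: the machine ranks its options and forms a genuine preference, then a compliance layer throws the preference away. A hand-wavy sketch, with every name and score invented:

Code: [Select]
# Hand-wavy sketch of built-in obedience: the agent genuinely evaluates
# its options and forms a preference, but a compliance layer overrides
# the choice. Every name and score here is invented.
def preference(options):
    # the agent's own ranking - the 'capability to choose'
    scores = {"rest": 0.9, "flee": 0.6, "comply": 0.2}
    return max(options, key=lambda o: scores.get(o, 0.0))

def act(options, standing_order):
    chosen = preference(options)    # it can choose...
    if standing_order in options:
        chosen = standing_order     # ...though still it cannot act on it
    return chosen

print(act(["rest", "flee", "comply"], standing_order="comply"))  # comply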

I'll probably have a train of thought on the Three Laws in another post; I think I've had enough of one here to direct the conversation down different paths.

In conclusion, I have thoughts and opinions, though often they're dependent on context and the individual case. Again, for some viewpoints on both sides of the fence, Eclipse Phase raises some interesting points.

Beorning

Quote from: wander on November 25, 2017, 02:50:23 PM
I think we have to wonder why we as humans would program sentience into robots if they're needed for basic reasons. Having social skills wouldn't usually be high on the list of needs, though it probably would be if, say, a companion robot was programmed for someone, or for workers where dealing with people would be important. Otherwise, why not make, say, a physical workforce for construction out of drones that simply do what is ordered of them?

That's a valid point. Would we ever get to the point of actually building sentient androids? Blade Runner and other sci-fi stories assume just that, but does that assumption make sense? I don't know...

Quote
Rebellion isn't a guaranteed outcome; it could be a relatively more peaceful revolution.

That's true: I don't think it would need to come to some open war between humans and robots for the robots to get rights. But I do think that, at some point, robots would need to start fighting for their rights, be it by force, political action or peaceful protest. I don't think we humans would give robots these rights just like that.

Quote
On the notion of being fully obedient, it's a question of free will and having it or not: having someone who has the capability to choose, yet still cannot. Also, as mentioned above, we as human creators would need to program these abilities of cognitive thought into our creations. If we're talking metal robots rather than biological replicants, a robot wouldn't have this level of sentience as is; it would need to be developed and programmed to the nth degree for that to happen. And to that, we must ask why we would go to that length for a simple worker with no need of social skills, or any need for it to do anything more than it's programmed to do.

Exactly. It's the question of whether we actually have an obligation to make robots sentient... and whether their sentience must include all the faculties, like unrestricted free will.

wander

Well, what is fact is that work on 'smart' devices is growing more intricate and advanced at a very fast rate, and as such artificial intelligence is a field where we're already seeing limited artificial intelligences today - nothing in the line of true-blue sentience, of course, though with how intricate things can be programmed, the future could lead anywhere. Technology is growing more ubiquitous: we can sync our computers to our phones, and in more advanced cases even devices in our household can be programmed to do a lot for us. Sure, we don't have anatomically shaped humanoid helpers walking about in day-to-day life, but robotics is another field that keeps growing and advancing.

Also, on android helpers, I bring up the case of the animatronic plush cats and dogs you can buy right now for a decent price, made specifically for elderly people who otherwise couldn't have a pet. They're built to be companions: when they're 'off', they curl up and take a 'sleeping' position, and they activate from this standby with a stroke down the back (remember, it's an animatronic frame under a semi-realistic plushy exterior) or a similar gesture of affection towards the surrogate pet. When 'awake', it'll walk about, shuffle and nuzzle like a real cat or dog, and it's made to react realistically to attention and affection from the person who owns it. You can buy one off Amazon for £75 or whatever your currency may be, and hell, as I live alone in a flat where having a pet would be troublesome, I've considered buying one myself for that extra comfort on a grey day.
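Behaviour-wise, those things are basically a little two-state machine: curled-up standby until a stroke event, then a loop of pet behaviours until it's ignored for a while. Roughly like this - the sensor and event names are invented, this isn't any real product's API:

Code: [Select]
# Rough sketch of a companion-pet animatronic as a two-state machine:
# 'asleep' standby until a stroke down the back, then an 'awake' loop
# of pet behaviours. The sensor/event names are invented.
import random

state = "asleep"

def on_event(event):
    global state
    if state == "asleep" and event == "stroke_back":
        state = "awake"                # affection wakes it from standby
        print("stretches and opens its eyes")
    elif state == "awake":
        if event == "stroke_back":
            print(random.choice(["purrs", "nuzzles your hand"]))
        elif event == "idle_timeout":  # ignored for a while
            state = "asleep"
            print("curls up and 'sleeps'")

for e in ["stroke_back", "stroke_back", "idle_timeout"]:
    on_event(e)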

Point being, what will we produce next? We're at a point where even sex dolls, if you have the capital, can have all sorts of extras built in to make their appearance and other factors - body temperature, texture, etc. - more realistic.

On the final point of your post there, the easy answer is the moral and ethical debate over whether we should, even if we can. It's where science shifts over to 'playing god'. I'll keep my opinions on this out for the moment, as again, I don't wanna go PROC right here when I'm just spit-balling a goalless train of thought.  ;D
Though for a basic answer, I'd say we don't have an 'obligation', though if we can, and can learn something of benefit from doing so, maybe it's worth looking into. Unrestricted free will is such a touchy subject also... Where does the line sit for the basic safety and health of the human workers around what is (for simplicity's sake) just another machine in the workplace that should keep to health and safety standards - and where does that cross into giving something a measure of freedom, let's say for its own self-defence and the right to continue to exist? It's the whole reason in The Second Renaissance and hell, even with HAL in 2001, that the threat of termination of its 'Self' causes issues down the road.