Let's discuss Artificial Intelligence

Started by Beorning, January 08, 2022, 09:07:53 PM

Beorning

If you don't mind, I'd like to try discussing some issues related to AIs and androids - things I started wondering about while thinking about some RPG matters, e.g. how to RP a sapient android properly or how to build a setting incorporating a society of androids. I hope this is the right forum - some of my questions are of the ethical and philosophical kind.

My issues don't deal with the question of whether creating truly sapient AIs is possible... Let's assume it is. So, here are some questions I'm pondering:

1. Let's say humanity created sapient androids by accident... Should these androids be allowed to go free, without any obligations to humanity? After all, humans *did* create them and spent considerable resources on them.

2. Continuing from that - after this first wave of "unintentionally sapient" androids, humanity will know that their androids are sapient, constitute people with free will etc. So, should humanity continue manufacturing them?

3. As artificially created beings, androids would usually be built with some purpose in mind. So, even a sapient android would probably have some sort of "core programming", like the imperative to care for sick people (a nursing android), uphold the law (police android) etc. Is it ethically okay to create sapient beings with such imperatives imprinted?

4. Let's say the androids form their own society. Would you say they would keep building more of their own people? If so, would these second-generation androids also be imprinted with some "core programming"? Or would you say an android society would abhor this?

5. Would you say that sapient androids are *alive*? If so, what is the line between life and death for such beings? When exactly does an android die?

6. A religious question: if an android became sapient, would it also gain an immortal soul?

7. Any ideas on how an android might perceive reality? Would a sapient android think and feel like a human being, or would it have a completely inhuman mind? Maybe they'd think much faster than humans, wouldn't have emotions, perceive space-time in some different way...

I'm genuinely curious regarding your thoughts on these matters!

Lustful Bride

I'm gonna add my two cents in, just know I am no genius or specialist in AI.

1: Not really. It being created accidentally gives the AI wiggle room. If it had been created on purpose, then...maybe? But even then it is a sentient entity, and can choose to do what it wants. Do you owe much to your parents for creating you? Why? What compels you?

2: I'd say no. Robotic Birth Control should be a thing XD Aside from the obvious worries of AI going Skynet on us, there is also the question of the resources spent creating more and more AIs and androids that we don't really need. But if the AI fills out the paperwork, and the resources are there...I suppose?

3: I feel like many arguments could be made about this from the perspective of raising your kids to be in the family business, to have your politics, or your religion. It's just much faster with AIs.

4: That is up to the AI, really.  If they build their own nation that is their own sovereign right (and problem) as well.

5: Yes. If it has emotions, self awareness, etc. I would consider it alive. I would say death for an android/AI is when it is rendered inoperable and can no longer be restarted/activated. Or faces some kind of programming corruption that (again) cannot be repaired or be restarted from a backup. (I think at this point it's a bit similar to what we see in Altered Carbon/Eclipse Phase).

6: I would say yes. If man has a soul, and has created life, then it stands to reason that this new form of life has a soul as well, given the spark of life by man's efforts, wisdom, and own spark. What that means in the grand scheme of things is up to God.

7: Do people perceive things the same way? How can you prove it? How different is reality for a Neurotypical person vs a Neurodivergent person? Is the red that I see the same red that you see?

We honestly would not know. A sentient machine may perceive the universe in a lovecraftian fashion. Or it might see it just like an average human. Or in a more abstract fashion.

Azy

Not that I am an expert, or know a lot about technology... I do have a few thoughts based on what I have seen and do know, though. One, I know the Sims in my game are basically simple AIs when given free will. I spend a lot of time cursing at them because they do stupid shit that fucks up my gameplay experience.

Two, AIs can be taught to do many things. They can learn information they have access to. That being said, them showing actual emotion, as far as I know, has only ever happened in movies. Even my Psychology textbook said that there is something that goes on in human brains that no one can fully replicate with a circuit board. That was the chapter on personality, I believe. There is something more to living beings than just information processing. If you ask me, I would say that's a soul, but I know not everyone agrees.

Saria

I’ll take a swing at it, since SF is very much my thing.

1. Let's say humanity created sapient androids by accident... Should these androids be allowed to go free, without any obligations to humanity? After all, humans *did* create them and spent considerable resources on them.

Yes.

Whether the creation was intentional or not, if they are sapient/intelligent/self-aware/whatever-you-want-to-call-it, they are persons. All persons should be free. We can surely ask the androids if they want to work off any obligation they may feel to us… but if we force them, then we’d be slavers.

If you think about it, then if a sapient being is a person, a sapient android you create would be rather like your child, if only in the logical and not biological sense. Do you think a child owes an obligation of servitude to their parents for making them? I don’t.

2. Continuing from that - after this first wave of "unintentionally sapient" androids, humanity will know that their androids are sapient, constitute people with free will etc. So, should humanity continue manufacturing them?

I don’t think there can be a general answer to this question. I think it would have to be considered on a case-by-case basis, based on why you want to make one.

If your answer is that you want one merely as a tool to be used for some purpose, then the answer should be no. But if your answer includes concern about what the person you’re about to create wants, then the answer can be yes… even if it also includes some use or purpose for yourself. This is also analogous to the traditional parent/child dynamic: if you want to have a child merely so they can be a worker for you, then no, you don’t deserve a child… but if you’re going to give a child a happy and fulfilling life, but also want them to work for you (like caring for you in your old age), then maybe yes.

The philosophical basis for my position is basically the second formulation of the categorical imperative: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” Or in plain English, never just use people, always treat them as if they have feelings and preferences of their own. The idea is that it’s never okay to ignore what other people want. You can use people for your own selfish needs—like, for example, you can have sex with someone because you’re horny and want satisfaction for yourself—but only so long as you consider their feelings and desires, too. You can’t rape someone (that is, use them for sexual pleasure without concern for what they want)… but you can fuck them bow-legged if they’re okay with it. Even “unenthusiastic prostitution” is okay, because even if the other person doesn’t actually want to have sex with you, they’re getting something they want out of it (the money), and they’ve decided that works for them. (But, of course, someone who is forced by circumstances—like needing the cash—to have sex with you isn’t really consenting.)

So let’s say you want an android, to, I dunno, clean your house for you, and for whatever reason, it needs to be sapient. If all you want is a housekeeping slave, then no, you don’t get to make a sapient being for that. However, if you want a sapient android to clean your house… and you intend to give it everything it needs to have a happy and fulfilling existence… including releasing it from housekeeping duties if it decides it doesn’t want to do that… then I’d say it’s okay to make a sapient housekeeping android.

3. As artificially created beings, androids would usually be built with some purpose in mind. So, even a sapient android would probably have some sort of "core programming", like the imperative to care for sick people (a nursing android), uphold the law (police android) etc. Is it ethically okay to create sapient beings with such imperatives imprinted?

Absolutely. Quite to the contrary, I’d say you’d better think long and hard about creating an android without some purpose.

Let’s suppose we could create a housekeeping android that was sapient, but also completely satisfied and fulfilled by housekeeping duties. It’s not that the android might want to do something other than clean house and you’ve put some sort of lock in its brain to crush any such thinking. It will just never even have the inkling that anything else might be satisfying, even after you let it try other options. It has the freedom to explore, to learn, to dream, all that jazz, but because of the way its mind was shaped during construction, it will always come back to wanting to be the best darned housekeeping robot it ever could be. That would be bliss for them.

Now, I understand that most people’s knee jerk reaction on reading about an android—a person—who was created that way would be to be horrified. But… why? What exactly is horrible about it? Most people would say that it would be a nightmare scenario for them if they were some sort of Stepford wife, artificially constrained to be happy with being servile and domestic. But… hold on, and this is critically important: we’re not talking about you. Because you are not wired to be perpetually happy and fulfilled doing housekeeping work for someone. Of course you would find the idea horrifying. But a being that found housework happy and fulfilling would not find the idea horrifying at all. It would be heaven for them.

But maybe you say it’s not the housekeeping specifically that’s the problem, it’s the idea that you could be built with a mind constrained to find only certain things fulfilling.

Okay then, try imagining what you would need to have a happy and fulfilled existence. Maybe that would be the freedom to travel, wherever you want, whenever you want. Maybe that would be access to every work of art ever created, any time you want them. Maybe it would be the ability to keep learning new things forever, and keep being challenged by new mysteries leading to new discoveries. Maybe it’s the partnership of someone you can connect with and love. Maybe all of the above. Whatever it is that would make you feel fulfilled and happy for the rest of your life, there must be something, no matter how complicated or multifaceted. If there isn’t, that’s even more tragic, because it means you will never be happy or fulfilled, but let’s assume there is something, or some set of things, that will make you feel happy and fulfilled for life.

Now, let’s say I created a sapient android, and constructed their mind in such a way that they found exactly the same thing(s) happy and fulfilling. Would that be wrong? I don’t see why.

Here’s what I see as the bottom line. If you are creating a mind, then you must give it a way to find happiness and fulfillment. If you don’t, you are condemning it to be eternally unfulfilled… which is torture. Not giving your android a way to be happy and fulfilled is cruel.

Okay, so… you have to give them a way to be happy and fulfilled… so… why not by housekeeping? You may argue that that’s a little selfish, but remember, the rule is that you can’t use a person merely as a means to an end. You can use them as a means to an end, provided you also satisfy their feelings and desires. So if you create them with the feelings and desires that make them find housekeeping happy and fulfilling… then win-win. And, frankly, making a being that can find happiness and fulfillment with something as simple as cleaning house is much nicer than making one that needs something much more complicated and hard to obtain.

The android is a person who has a sense of purpose and meaning in their lives, and they’re happy. Yes, you’re getting something out of it… but so what? What’s wrong with that? The important point is that you’re giving the android, the person, a way to be happy and fulfilled. (Of course, if at any point the android wants something more, you’d have to respect that.)

4. Let's say the androids form their own society. Would you say they would keep building more of their own people? If so, would these second-generation androids also be imprinted with some "core programming"? Or would you say an android society would abhor this?

I guess if androids went off and made their own society, they might make more androids: as friends, as helpers, or whatever. Frankly, it would be none of our business if they do. They’re people. They have the same freedoms as any people. If we are allowed to make more androids, then so are they.

And it would also be none of our business if they built their new siblings/children with purposes just like how they may have been built, or what those purposes are.

5. Would you say that sapient androids are *alive*? If so, what is the line between life and death for such beings? When exactly does an android die?

I would say the terms “alive” and “dead” won’t make any sense when talking about androids, unless they happen to be biological in nature (in which case, the terms would have the usual meanings). We’d either need new terms, or we’d need to modify the meaning of those.

I mean, consider a computer. If it’s turned on, and running programs, I presume you’d call it “alive”. If someone’s taken a sledgehammer to it, and smashed it to smithereens, I presume you’d call it “dead”. Okay, but… what if it’s merely unplugged? You can’t really call it “alive”, because it isn’t active, it isn’t responding to stimuli, it isn’t running programs; it’s inert. But you can’t really call it “dead”, because all you need to do is plug it back in, boot it up, and it’s fully “alive” again. So… what, is it “sleeping”? “Unconscious”? “In a coma”?

I think in a world with non-biological sapients, we’d need to come up with a whole new terminology that isn’t anthropomorphic, or biologically-centred. You could recognize a sapient being as “fully conscious/functional/whatever” when it is capable of using its full thinking capacity at will, even if it’s not actually, currently doing so. So an android that shut down most of its processing capability to save power, but could reactivate it all pretty much right away if it needed to really think about something, would be fully conscious. Same for a human who happens to be zoning out and watching some mindless entertainment; their brain may be in “sleep mode”, but if something caught their attention or they really wanted to think about something, their brain could immediately fully boot up.

A sapient being could be considered “impaired” if something is preventing them from using their full thinking capacity. For humans, that could be when they’re drunk, for example. For androids, maybe when their processor is damaged somehow, or they don’t have enough power to run it fully. Voluntary impairment would be okay, but for the duration you could be restricted from doing stuff that requires a certain amount of thinking capacity, and you’d be responsible for anything you do while impaired. The analogy would again be getting drunk. Involuntary impairment would be a crime. That would apply to “roofie-ing” a human, or giving a virus to an android or otherwise messing with their systems.

And a being could be considered “non-functional” whenever it cannot think at all. That condition could be permanent, as in death, or temporary. It could be voluntary or involuntary, treated as an extreme case of impairment.

6. A religious question: if an android became sapient, would it also gain an immortal soul?

I can’t answer that because I don’t believe in souls.

But for those that do, I’d point out that an android doesn’t necessarily have to be mechanical or electro-mechanical in nature. An android could be biological. In fact, you could take the base blueprint of humans, and use that to build an android.

So the question is: when does a human stop having an immortal soul? Presumably a naturally-conceived human gestated in the womb of a human has a soul, right? Okay, what about a test-tube baby? What if you take the gametes, modify them somehow—anything from removing genetic defects to changing the eye colour to making them super strong and obedient—then place them back in a human womb to gestate? What if you take naturally-conceived, unmodified gametes, and gestate them in an artificial womb? What I’m asking is: where, exactly, is the line? What, exactly, is the “thing” that gives “normal” humans souls, but not other animals and machines?

7. Any ideas on how an android might perceive reality? Would a sapient android think and feel like a human being, or would it have a completely inhuman mind? Maybe they'd think much faster than humans, wouldn't have emotions, perceive space-time in some different way...

Presumably an android’s mind would be modelled on a human mind. That’s simply a matter of practicality: the human mind is the only sapient mind we know of, and even if it weren’t, if we were building an android to work with humans, we’d want it to have human-like perception and cognition. And if the android has human-ish form—which I assume we’re talking about by the use of “android” rather than “robot” or “AI”—then it would be weird for them not to have human-like minds; it might be like the ultimate dysphoria for them, existing in human-like bodies without a human-like mind.

I think that, at least at first, androids would be very much like humans, and may even think of themselves as human-ish. I don’t think that necessarily means they will want to actually, literally, be human. I’m not talking about Pinocchio or Data here. But they will probably think they deserve all the same rights and freedoms that humans do, and they will probably be interested in more or less the same things, and think in more or less the same way. They’ll probably think of themselves, if not as our equals, then at least as equivalent.

There will obviously be at least some differences. I can’t see androids being interested in sex, unless built for the purpose (and, thus, programmed to enjoy it and find it fulfilling), and I can’t see them being attracted to anyone. I can see them forming attachments with people—and other androids—even if only because the familiar feels “good and safe” for them, and that could probably be considered love.

There’s a really neat webcomic called Freefall that I haven’t read in a while, but that is really underrated, that dives deep into this idea of how AIs might perceive and think differently. It’s a silly comic on the surface, but it really gets into some fascinating issues. The main protagonist is a biological AI, basically an uplifted wolf, who is an FTL drive engineer… and the reason she’s an FTL drive engineer is because she perceives time differently from humans. Unlike humans, her primary sense is scent, so when she walks into a room, she doesn’t just get the immediate story of what’s happening now—which you get from visually surveying the scene—she picks up the scents of people who were there previously, where they were in the room, and how they moved around, so she “sees” a different scene, one that is sorta temporally squashed, time-all-at-once. There is also a subplot about actual electromechanical robot AIs gaining sapience, and creating their own religion… sorta kinda, it’s actually both hilarious and deeply insightful, because while they like the idea of religion, and think having a religion would better connect them to the humans they idolize, their take on religion is… unique.

And personally, I like to write AIs that are very different… very inhuman in nature. Like, the AI of a ship, who is different in so many ways: She’s not social in the same sense that we are, because, being an interstellar warship, she doesn’t exactly get a lot of opportunities to hang out with peers often… but she does (normally) have a crew of hundreds, which she cares for in a sort of maternal way. She has no sense of privacy or “gross-ness”, because she can see everyone inside her 24/7, and they literally poop in her, but at the same time, she understands that the people inside her do, so she is an odd mix of extremely empathetically diplomatic, and totally “does-not-give-a-shit”. She’s literally built as a weapon of war, by humans to murder other humans en masse, so she has no time for romantic notions of how loving and sweet humans are, but at the same time does genuinely love her crew.

But if we’re talking about humanoid androids, then I would think the first few generations at least would be very human-like in their perception and thinking. I think once androids start making their own androids, there might be some drift, as androids develop their own culture, and make their offspring androids more in tune with “android-kind” rather than humankind. After enough generations, androids might be very different from humans… but I think that because we share the same roots, we’ll probably always be able to understand each other’s ways of thinking, even if it takes some patience and effort.

TheGlyphstone

Quote from: Saria on January 09, 2022, 10:47:49 AM

But if we’re talking about humanoid androids, then I would think the first few generations at least would be very human-like in their perception and thinking. I think once androids start making their own androids, there might be some drift, as androids develop their own culture, and make their offspring androids more in tune with “android-kind” rather than humankind. After enough generations, androids might be very different from humans… but I think that because we share the same roots, we’ll probably always be able to understand each other’s ways of thinking, even if it takes some patience and effort.

This reminds me of a sci-fi series I read a little while back, starting with We Are Legion (We Are Bob). It has 'AIs' that are digitized copies of human brains and personalities, the first of which is a geeky programmer named Bob. Bob-01 is a direct, identical digital copy of the deceased organic Bob, but as his Von Neumann probe body goes out into space and begins duplicating itself, and those copies in turn eventually duplicate themselves, the successive generations of 'Bobs' take different names and become increasingly different from both their original template personality and humanity in general.

RedRose

1. Let's say humanity created sapient androids by accident... Should these androids be allowed to go free, without any obligations to humanity? After all, humans *did* create them and spent considerable resources on them.

I'd say they need to have a job, like humans? But I would find it unfair to not allow them a life because they didn't ask for this...

2. Continuing from that - after this first wave of "unintentionally sapient" androids, humanity will know that their androids are sapient, constitute people with free will etc. So, should humanity continue manufacturing them?

I don't think so, unless they want to raise them like children.

3. As artificially created beings, androids would usually be built with some purpose in mind. So, even a sapient android would probably have some sort of "core programming", like the imperative to care for sick people (a nursing android), uphold the law (police android) etc. Is it ethically okay to create sapient beings with such imperatives imprinted?

Good question... How much does this prevent a personal life?

4. Let's say the androids form their own society. Would you say they would keep building more of their own people? If so, would these second-generation androids also be imprinted with some "core programming"? Or would you say an android society would abhor this?

I really don't know.

5. Would you say that sapient androids are *alive*? If so, what is the line between life and death for such beings? When exactly does an android die?

No. But anyone mistreating an android is, imo, a huge red flag, and would probably mistreat people too.

6. A religious question: if an android became sapient, would it also gain an immortal soul?

My knee jerk reaction is no. But I'd say maybe different religions see it diff.

7. Any ideas on how an android might perceive reality? Would a sapient android think and feel like a human being, or would it have a completely inhuman mind? Maybe they'd think much faster than humans, wouldn't have emotions, perceive space-time in some different way...

That would influence my reply on six...


Vekseid

1) Imagine you accidentally created a virus that, if set free, would eradicate all higher life not only on Earth, but over the entire Universe.

Should it be allowed to go free?

Of course not. Making it sentient gives it no special exemption to this. In fact it is often worse. One misapplied optimization function and everything. Dies.

Anyone who says 'yes' to this question either doesn't comprehend what sapient AI is capable of, or is an omnicidal maniac.

There is an entire branch of the alt-right dedicated to this 'yes' answer, as part of the 'conclusions' of the Dark Enlightenment.




2) You had best be damned sure you understand what is going on when manufacturing sapience. If you made it accidentally, you certainly do not.




3) IMO the advancement of linguistic analysis and language development is going to eventually merge to create symbolic sapient AI. This is by definition necessary for it to be able to modify its own code. So, 'should it be created with a purpose' runs into a tautological fallacy for symbolic sapience.

For connectivist sapience this is of course less clear. If it is possible to create accidentally (an assumption taken for the sake of this discussion and not remotely a given), then this sapience emerges from a collection of self-training networks interacting in some way. Each of these, by definition, also exists for a purpose (what they were trained for), and so too would the gestalt of them.

If you decide this is wrong, then most modern AI is 'wrong'.




4) With no further explanation, it isn't possible to meaningfully speculate as to what sapient AI may do. What were these androids originally, exactly? Companion robots?




5) Life has a pretty vague definition as is. I'd say no at least to any initial accidents. Our sapience arises from life but is not inherently because of it.




6) My personal idea of a soul is the memories we imprint on others. So, sure.




7) Again, with no details it isn't possible or even feasible to speculate. They likely have values rather alien to our own.

Paperclip maximizer may seem like a trite thought experiment, but if a sapient AI is not perfectly bounded, there is a point where it becomes incredibly dangerous.

That point is when you turn it on.
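For what it's worth, the paperclip maximizer is easy to caricature in a few lines. The toy sketch below (Python, with an invented "world" and invented conversion numbers - it is nobody's real model) just shows the core problem: the objective counts only paperclips, so nothing else in the world registers as worth preserving.

# Toy paperclip maximizer. Purely illustrative: the "world" and its numbers are made up.
# The objective values nothing except paperclips, so every other resource gets converted.
world = {"iron_ore": 10, "forests": 5, "cities": 3, "paperclips": 0}

def paperclips_from(resource: str) -> int:
    # Invented conversion rates - the only thing the agent "cares" about.
    return {"iron_ore": 100, "forests": 20, "cities": 500}.get(resource, 0)

def step(world: dict) -> bool:
    """Greedily convert whichever remaining resource yields the most paperclips."""
    candidates = [r for r in world if r != "paperclips" and world[r] > 0]
    if not candidates:
        return False
    best = max(candidates, key=paperclips_from)
    world[best] -= 1
    world["paperclips"] += paperclips_from(best)
    return True

while step(world):
    pass
print(world)  # everything is gone; the objective never said "but leave the cities alone"

Nothing in that loop is malicious. It just optimizes exactly what it was told to optimize, and nothing more.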

Beorning

Alright, guys! A lot of observations. Thanks :) Let me try to comment on at least some of them.

Quote from: Lustful Bride on January 08, 2022, 10:10:06 PM
I'm gonna add my two cents in, just know I am no genius or specialist in AI.

1: Not really. It being created accidentally gives the AI wiggle room. If it had been created on purpose, then...maybe? But even then it is a sentient entity, and can choose to do what it wants. Do you owe much to your parents for creating you? Why? What compels you?

Hm. It's a cultural thing, of course, but I think it's often assumed that you do owe your parents in some way for all the time, resources and effort they spent on raising you?

Quote
3: I feel like many arguments could be made about this from the perspective of raising your kids to be in the family business, to have your politics, or your religion. It's just much faster with AIs.

True - and these aren't easy issues, either... The problem with AIs is that you can literally hardwire your family business / politics / religion into them. So, you are limiting their free will to an even greater extent than when you try to actively indoctrinate your child into specific values without giving them any choice...

Quote from: Azy on January 08, 2022, 10:26:09 PM
Two, AIs can be taught to do many things. They can learn information they have access to. That being said, them showing actual emotion, as far as I know, has only ever happened in movies. Even my Psychology textbook said that there is something that goes on in human brains that no one can fully replicate with a circuit board. That was the chapter on personality, I believe. There is something more to living beings than just information processing. If you ask me, I would say that's a soul, but I know not everyone agrees.

But that's the assumption I'm making for the sake of this discussion: that we find out how to create AIs that genuinely think and feel :) I'm interested in the issues that arise from that.

Quote from: Saria on January 09, 2022, 10:47:49 AM
Whether the creation was intentional or not, if they are sapient/intelligent/self-aware/whatever-you-want-to-call-it, they are persons. All persons should be free. We can surely ask the androids if they want to work off any obligation they may feel to us… but if we force them, then we’d be slavers.

If you think about it, then if a sapient being is a person, a sapient android you create would be rather like your child, if only in the logical and not biological sense. Do you think a child owes an obligation of servitude to their parents for making them? I don’t.

On the other hand, if you raise a child and, then, the child shows you the middle finger and just runs away... well, some people would consider such a child rather ungrateful...

Quote
I don’t think there can be a general answer to this question. I think it would have to be considered on a case-by-case basis, based on why you want to make one.

If your answer is that you want one merely as a tool to be used for some purpose, then the answer should be no. But if your answer includes concern about what the person you’re about to create wants, then the answer can be yes… even if it also includes some use or purpose for yourself. This is also analogous to the traditional parent/child dynamic: if you want to have a child merely so they can be a worker for you, then no, you don’t deserve a child… but if you’re going to give a child a happy and fulfilling life, but also want them to work for you (like caring for you in your old age), then maybe yes.

The philosophical basis for my position is basically the second formulation of the categorical imperative: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” Or in plain English, never just use people, always treat them as if they have feelings and preferences of their own. The idea is that it’s never okay to ignore what other people want. You can use people for your own selfish needs—like, for example, you can have sex with someone because you’re horny and want satisfaction for yourself—but only so long as you consider their feelings and desires, too. You can’t rape someone (that is, use them for sexual pleasure without concern for what they want)… but you can fuck them bow-legged if they’re okay with it. Even “unenthusiastic prostitution” is okay, because even if the other person doesn’t actually want to have sex with you, they’re getting something they want out of it (the money), and they’ve decided that works for them. (But, of course, someone who is forced by circumstances—like needing the cash—to have sex with you isn’t really consenting.)

So let’s say you want an android, to, I dunno, clean your house for you, and for whatever reason, it needs to be sapient. If all you want is a housekeeping slave, then no, you don’t get to make a sapient being for that. However, if you want a sapient android to clean your house… and you intend to give it everything it needs to have a happy and fulfilling existence… including releasing it from housekeeping duties if it decides it doesn’t want to do that… then I’d say it’s okay to make a sapient housekeeping android.

Huh. Now that's thought-provoking. Thank you.

Quote
Absolutely. Quite to the contrary, I’d say you’d better think long and hard about creating an android without some purpose.

Let’s suppose we could create a housekeeping android that was sapient, but also completely satisfied and fulfilled by housekeeping duties. It’s not that the android might want to do something other than clean house and you’ve put some sort of lock in its brain to crush any such thinking. It will just never even have the inkling that anything else might be satisfying, even after you let it try other options. It has the freedom to explore, to learn, to dream, all that jazz, but because of the way its mind was shaped during construction, it will always come back to wanting to be the best darned housekeeping robot it ever could be. That would be bliss for them.

Now, I understand that most people’s knee jerk reaction on reading about an android—a person—who was created that way would be to be horrified. But… why? What exactly is horrible about it? Most people would say that it would be a nightmare scenario for them if they were some sort of Stepford wife, artificially constrained to be happy with being servile and domestic. But… hold on, and this is critically important: we’re not talking about you. Because you are not wired to be perpetually happy and fulfilled doing housekeeping work for someone. Of course you would find the idea horrifying. But a being that found housework happy and fulfilling would not find the idea horrifying at all. It would be heaven for them.

Hmm. But what's the difference between putting a "lock" on the housekeeping robot's mind and designing that robot in such a way that it doesn't even think of not being a housekeeper? These feel like two forms of the same thing: restricting someone's mind.

(also, now I'm thinking you could use this line of reasoning to defend the men from the Stepford Wives movies. I mean, if they make it so that their wives are genuinely fulfilled by housework, then..?)

Quote
But maybe you say it’s not the housekeeping specifically that’s the problem, it’s the idea that you could be built with a mind constrained to find only certain things fulfilling.

Okay then, try imagining what you would need to have a happy and fulfilled existence. Maybe that would be the freedom to travel, wherever you want, whenever you want. Maybe that would be access to every work of art ever created, any time you want them. Maybe it would be the ability to keep learning new things forever, and keep being challenged by new mysteries leading to new discoveries. Maybe it’s the partnership of someone you can connect with and love. Maybe all of the above. Whatever it is that would make you feel fulfilled and happy for the rest of your life, there must be something, no matter how complicated or multifaceted. If there isn’t, that’s even more tragic, because it means you will never be happy or fulfilled, but let’s assume there is something, or some set of things, that will make you feel happy and fulfilled for life.

Now, let’s say I created a sapient android, and constructed their mind in such a way that they found exactly the same thing(s) happy and fulfilling. Would that be wrong? I don’t see why.

Wouldn't it be more just to allow the android to develop their "fulfillment conditions" on their own, though? Just like people do?

Quote
Here’s what I see as the bottom line. If you are creating a mind, then you must give it a way to find happiness and fulfillment. If you don’t, you are condemning it to be eternally unfulfilled… which is torture. Not giving your android a way to be happy and fulfilled is cruel.

Okay, so… you have to give them a way to be happy and fulfilled… so… why not by housekeeping? You may argue that that’s a little selfish, but remember, the rule is that you can’t use a person merely as a means to an end. You can use them as a means to an end, provided you also satisfy their feelings and desires. So if you create them with the feelings and desires that make them find housekeeping happy and fulfilling… then win-win. And, frankly, making a being that can find happiness and fulfillment with something as simple as cleaning house is much nicer than making one that needs something much more complicated and hard to obtain.

The android is a person who has a sense of purpose and meaning in their lives, and they’re happy. Yes, you’re getting something out of it… but so what? What’s wrong with that? The important point is that you’re giving the android, the person, a way to be happy and fulfilled. (Of course, if at any point the android wants something more, you’d have to respect that.)

I don't know. Something's incorrect in this line of reasoning. I just don't know what :)

Quote
I would say the terms “alive” and “dead” won’t make any sense when talking about androids, unless they happen to be biological in nature (in which case, the terms would have the usual meanings). We’d either need new terms, or we’d need to modify the meaning of those.

I mean, consider a computer. If it’s turned on, and running programs, I presume you’d call it “alive”. If someone’s taken a sledgehammer to it, and smashed it to smithereens, I presume you’d call it “dead”. Okay, but… what if it’s merely unplugged? You can’t really call it “alive”, because it isn’t active, it isn’t responding to stimuli, it isn’t running programs; it’s inert. But you can’t really call it “dead”, because all you need to do is plug it back in, boot it up, and it’s fully “alive” again. So… what, is it “sleeping”? “Unconscious”? “In a coma”?

I think in a world with non-biological sapients, we’d need to come up with a whole new terminology that isn’t anthropomorphic, or biologically-centred. You could recognize a sapient being as “fully conscious/functional/whatever” when it is capable of using its full thinking capacity at will, even if it’s not actually, currently doing so. So an android that shut down most of its processing capability to save power, but could reactivate it all pretty much right away if it needed to really think about something, would be fully conscious. Same for a human who happens to be zoning out and watching some mindless entertainment; their brain may be in “sleep mode”, but if something caught their attention or they really wanted to think about something, their brain could immediately fully boot up.

A sapient being could be considered “impaired” if something is preventing them from using their full thinking capacity. For humans, that could be when they’re drunk, for example. For androids, maybe when their processor is damaged somehow, or they don’t have enough power to run it fully. Voluntary impairment would be okay, but for the duration you could be restricted from doing stuff that requires a certain amount of thinking capacity, and you’d be responsible for anything you do while impaired. The analogy would again be getting drunk. Involuntary impairment would be a crime. That would apply to “roofie-ing” a human, or giving a virus to an android or otherwise messing with their systems.

And a being could be considered “non-functional” whenever it cannot think at all. That condition could be permanent, as in death, or temporary. It could be voluntary or involuntary, treated as an extreme case of impairment.

All good points.

Quote
Presumably an android’s mind would be modelled on a human mind. That’s simply a matter of practicality: the human mind is the only sapient mind we know of, and even if it weren’t, if we were building an android to work with humans, we’d want it to have human-like perception and cognition. And if the android has human-ish form—which I assume we’re talking about by the use of “android” rather than “robot” or “AI”—then it would be weird for them not to have human-like minds; it might be like the ultimate dysphoria for them, existing in human-like bodies without a human-like mind.

I think that, at least at first, androids would be very much like humans, and may even think of themselves as human-ish. I don’t think that necessarily means they will want to actually, literally, be human. I’m not talking about Pinocchio or Data here. But they will probably think they deserve all the same rights and freedoms that humans do, and they will probably be interested in more or less the same things, and think in more or less the same way. They’ll probably think of themselves, if not as our equals, then at least as equivalent.

There will obviously be at least some differences. I can’t see androids being interested in sex, unless built for the purpose (and, thus, programmed to enjoy it and find it fulfilling), and I can’t see them being attracted to anyone. I can see them forming attachments with people—and other androids—even if only because the familiar feels “good and safe” for them, and that could probably be considered love.

There’s a really neat webcomic called Freefall that I haven’t read in a while, but that is really underrated, that dives deep into this idea of how AIs might perceive and think differently. It’s a silly comic on the surface, but it really gets into some fascinating issues. The main protagonist is a biological AI, basically an uplifted wolf, who is an FTL drive engineer… and the reason she’s an FTL drive engineer is because she perceives time differently from humans. Unlike humans, her primary sense is scent, so when she walks into a room, she doesn’t just get the immediate story of what’s happening now—which you get from visually surveying the scene—she picks up the scents of people who were there previously, where they were in the room, and how they moved around, so she “sees” a different scene, one that is sorta temporally squashed, time-all-at-once. There is also a subplot about actual electromechanical robot AIs gaining sapience, and creating their own religion… sorta kinda, it’s actually both hilarious and deeply insightful, because while they like the idea of religion, and think having a religion would better connect them to the humans they idolize, their take on religion is… unique.

And personally, I like to write AIs that are very different… very inhuman in nature. Like, the AI of a ship, who is different in so many ways: She’s not social in the same sense that we are, because, being an interstellar warship, she doesn’t exactly get a lot of opportunities to hang out with peers often… but she does (normally) have a crew of hundreds, which she cares for in a sort of maternal way. She has no sense of privacy or “gross-ness”, because she can see everyone inside her 24/7, and they literally poop in her, but at the same time, she understands that the people inside her do, so she is an odd mix of extremely empathetically diplomatic, and totally “does-not-give-a-shit”. She’s literally built as a weapon of war, by humans to murder other humans en masse, so she has no time for romantic notions of how loving and sweet humans are, but at the same time does genuinely love her crew.

But if we’re talking about humanoid androids, then I would think the first few generations at least would be very human-like in their perception and thinking. I think once androids start making their own androids, there might be some drift, as androids develop their own culture, and make their offspring androids more in tune with “android-kind” rather than humankind. After enough generations, androids might be very different from humans… but I think that because we share the same roots, we’ll probably always be able to understand each other’s ways of thinking, even if it takes some patience and effort.

Again, these are thought-provoking observations. Thank you. Also, thank you for mentioning Freefall - I checked it out, it is interesting...

Quote from: Vekseid on January 16, 2022, 08:19:19 AM
1) Imagine you accidentally created a virus that, if set free, would eradicate all higher life not only on Earth, but over the entire Universe.

Should it be allowed to go free?

Of course not. Making it sentient gives it no special exemption to this. In fact it is often worse. One misapplied optimization function and everything. Dies.

Anyone who says 'yes' to this question either doesn't comprehend what sapient AI is capable of, or is an omnicidal maniac.

There is an entire branch of the alt-right dedicated to this 'yes' answer, as part of the 'conclusions' of the Dark Enlightenment.

Alt-right people interested in AIs? Please elaborate - I have never heard of such people.

Quote
3) IMO the advancement of linguistic analysis and language development is going to eventually merge to create symbolic sapient AI. This is by definition necessary for it to be able to modify its own code. So, 'should it be created with a purpose' runs into a tautological fallacy for symbolic sapience.

For connectivist sapience this is of course less clear. If it is possible to create accidentally (an assumption taken for the sake of this discussion and not remotely a given), then this sapience emerges from a collection of self-training networks interacting in some way. Each of these, by definition, also exists for a purpose (what they were trained for), and so too would the gestalt of them.

If you decide this is wrong, then most modern AI is 'wrong'.

*mind blown* Please explain symbolic sapience and connectivist sapience?

Quote
7) Again, with no details it isn't possible or even feasible to speculate. They likely have values rather alien to our own.

Paperclip maximizer may seem like a trite thought experiment, but if a sapient AI is not perfectly bounded, there is a point where it becomes incredibly dangerous.

That point is when you turn it on.

That was some interesting reading! More, please :)

Vekseid

Quote from: Beorning on January 17, 2022, 06:58:08 PM
Alt-right people interested in AIs? Please elaborate - I have never heard of such people.

The Dark Enlightenment is a reactionary movement (explicitly anti-progressive), founded and largely driven by hyper-libertarians in the software engineering sphere.

The ultimate conclusion is that, eventually, AI will replace us, and possibly all life.

Some of them are quite fine with this.

Quote
*mind blown* Please explain symbolic sapience and connectivist sapience?

Symbols are the generic term for human-readable things like functions, variables, and so on in source code. A compiler takes these symbols and converts them to the numbers your processor actually runs.

Symbolic AI refers to intelligence programmed in this manner: "If you see this set of conditions, do this." The basic holy grail of this is for an AI to eventually be able to understand its own symbols. That said, if someone even knows what understanding means on a symbolic level, they aren't copping to it. It is popular among some of the "AI safety" crowd, because you could formally prove its ethics.
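To make the "if you see this set of conditions, do this" idea concrete, here is a minimal rule-based sketch in Python. Everything in it (the Rule class, the nursing rules, the thresholds) is invented purely for illustration; the point is only that the knowledge lives in human-readable symbols that a programmer - or, in the holy-grail case, the AI itself - could inspect and rewrite.

# Hypothetical toy "symbolic AI": hand-written condition -> action rules.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    condition: Callable[[Dict[str, object]], bool]  # "if you see this set of conditions..."
    action: str                                     # "...do this"

# A nursing-android flavoured rule base, echoing the thread's example. All invented.
nurse_rules: List[Rule] = [
    Rule(lambda w: w.get("patient_temp_c", 37.0) > 38.5, "administer antipyretic"),
    Rule(lambda w: w.get("patient_conscious", True) is False, "call emergency services"),
    Rule(lambda w: w.get("medication_due", False), "deliver scheduled medication"),
]

def decide(world: Dict[str, object]) -> str:
    """Return the action of the first rule whose condition matches, else idle."""
    for rule in nurse_rules:
        if rule.condition(world):
            return rule.action
    return "idle"

print(decide({"patient_temp_c": 39.2}))      # -> administer antipyretic
print(decide({"patient_conscious": False}))  # -> call emergency services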

Connectivist AI is for the most part neural networks these days, though others are possible. Basically you study the human brain (and other brains) and try to replicate their ability to learn. This is useful because we know it works by demonstration.

While it's harder - or perhaps even impossible - to formally understand how a connectivist AI does a task (something you would know by definition for a symbolic AI), some things emerge very naturally in networks that are incomprehensible in symbols.

Emotions are a good example. Emotions are more than just reassigning thinking priorities; they change how and which solutions are considered optimal. Moreover, how is a symbolic AI going to evolve its response to novel situations appropriately here? I honestly don't know where to begin with that.

Meanwhile, for a connectivist AI this is fairly trivial. Emotions are the result of changes in the density of various neurotransmitters (among other things), and this in turn allows a single neuron to compute an effectively infinite array of different problems depending on its current chemical bath (something on the order of 2^1000 at an absolute, ridiculously conservative minimum for humans - a truly obscene number). This is a highly desirable property for any general intelligence - it needs to be able to alter its thinking priorities on a dime in response to an emergency.
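As a toy illustration of that "current bath" point (a single made-up unit, nothing like a real neuron, and the numbers are arbitrary), here is how one global "neuromodulator" level can change what the same weights compute, which is roughly the reprioritize-on-a-dime idea above:

import math

def neuron(inputs, weights, bias, modulator=1.0):
    """Toy neuron: a global 'neuromodulator' rescales its gain, so the same learned
    weights give different input/output behaviour depending on the current bath."""
    drive = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-modulator * drive))  # sigmoid with variable gain

weights, bias = [1.5, -2.0], 0.1
stimulus = [0.8, 0.3]

print(neuron(stimulus, weights, bias, modulator=0.2))  # "calm" bath: shallow, graded response
print(neuron(stimulus, weights, bias, modulator=5.0))  # "alarmed" bath: near all-or-nothing response

The biology is obviously far richer than one gain parameter, but the shape of the argument is the same: shift the chemistry and the whole network effectively becomes a different computer, without retraining anything.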

Quote
That was some interesting reading! More, please :)

Some AI safety videos, first by Robert Miles. This one goes over just how alien AI is - more alien than any alien could hope to be - and a hypothetical consequence. Note that the speaker here didn't title the video; it's not a 'truth', it's a concern.


https://www.youtube.com/watch?v=tcdVC4e6EV4

A followup video that discusses recursive self improvement:

https://www.youtube.com/watch?v=5qfIgCiYlfY

Note it isn't a given that an AI improving itself will be able to outpace humanity. We don't know, however. As you can see from this thread, an artificial sapience will have its unqualified defenders, without the slightest regard to the optimization pressure these creations will seek to exert on the Universe.

"I have no mouth an I must scream" is quite pleasant compared to some possible outcomes.

This is Dr Sean Holden, who offers some criticism of the above.

https://www.youtube.com/watch?v=uA9mxq3gneE

Note that Holden mentions that, at the time of this video, there wasn't a computer that could challenge a professional Go player. At the time, some claimed such a machine was decades away.

AlphaGo beat a 2-dan player the next month. The next year it beat one of the top players in the world. The year after that saw AlphaGo Zero, and no human could hope to beat it.

Robert Miles offers some counterpoints to the above:

https://www.youtube.com/watch?v=IB1OvoCNnWY

An AI Stop Button as an introduction to corrigibility:

https://www.youtube.com/watch?v=3TYT1QfdfsM

This discusses a potential solution:

https://www.youtube.com/watch?v=9nktr1MgS-A