Discussion in comp.ai.philosophy
(on whether it would be better not to make AI)


Definition of AI 11/24/2000
 - Re: 1 radoslavn 12/18/2000
 -  - Re: 2 12/19/2000
 -  -  - Re: 3 Jure Sah 12/19/2000
 -  -  -  - Re: 4 12/20/2000
 -  -  -  -  - Re: 5 Jure Sah 12/20/2000
 -  -  -  -  -  - Re: 6 12/20/2000
 -  -  -  -  -  -  - Re: 7 Jure Sah 12/21/2000
 -  -  -  -  -  -  -  - Re: 8 12/26/2000
 -  -  -  -  -  -  -  -  - Re: 9 Jure Sah 12/26/2000


Definition of AI

11/24/2000

I can offer one formal definition of AI. It is at the address:
http://dobrev.com/AI/definition.html.

This definition is quite different from the others I have read. The main difference is that the other definitions assume that Intelligence is a human plus knowledge. In this definition Intelligence is only a human, and knowledge is something that can be obtained.


Re: 1
radoslavn
12/18/2000

I have one more question about your article. Do you think that this is the correct definition of AI, and when do you expect that AI will be made? Your article finishes with the forecast that when we make AI, we will become useless. Do you think that it would be better if we didn't make it?

 > I can offer one formal definition of AI. It is at the address:
 > http://dobrev.com/AI/definition.html.
 >
 > This definition is quite different from the others I have read. The
 > main difference is that the other definitions assume that Intelligence
 > is a human plus knowledge. In this definition Intelligence is only a
 > human, and knowledge is something that can be obtained.
 >


Re: 2

12/19/2000

Yes, I definitely think that this is the correct definition, and most of the people who read the article agree with me. I expect that AI will be made very soon, let us say in three years.

You ask me whether it would be better if we did not make AI. Yes, it would be better, but it is impossible. For me AI is something like the nuclear bomb: something very powerful and very dangerous. It would be better if it were never made, but someone will make it soon anyway, or maybe someone has already made it.


Re: 3
Jure Sah
12/19/2000

 > Yes, I definitely think that this is the correct definition, and most
 > of the people who read the article agree with me. I expect that AI
 > will be made very soon, let us say in three years.

 Move that to at least 5 years.

 > You ask me whether it would be better if we did not make AI. Yes, it
 > would be better, but it is impossible. For me AI is something like the
 > nuclear bomb: something very powerful and very dangerous. It would be
 > better if it were never made, but someone will make it soon anyway, or
 > maybe someone has already made it.

But the bomb ended the war, didn't it? I think AI would end a 'war' too. Looking at the poor AI_belows of today, I really can't see anything harmful in them... Tell me, has a search engine ever killed anything?


Re: 4

12/20/2000

 > > Yes, I definitely think that this is the correct definition, and
 > > most of the people who read the article agree with me. I expect
 > > that AI will be made very soon, let us say in three years.
 >
 > Move that to at least 5 years.
 >
 > > You ask me whether it would be better if we did not make AI. Yes,
 > > it would be better, but it is impossible. For me AI is something
 > > like the nuclear bomb: something very powerful and very dangerous.
 > > It would be better if it were never made, but someone will make it
 > > soon anyway, or maybe someone has already made it.
 >
 > But the bomb ended the war, didn't it? I think AI would end a 'war'
 > too. Looking at the poor AI_belows of today, I really can't see
 > anything harmful in them... Tell me, has a search engine ever
 > killed anything?

AI will end not only one war; it will end all wars forever. It will not kill anyone; after the creation of AI, no one will be killed by anyone. We will all live in paradise, or in communism, which is the same thing. We will have enough food and we will be in absolute safety. No wars, no revolutions, nothing. The only problem is that if we want to kill ourselves, this will be impossible too. The most important part of our future is that we will not work. Of course, this will not be forbidden, but it will be absolutely pointless, because everything can be done by machines faster and better. We will live like cows in India: happy but useless, and other beings (much more perfect than us) will work for us, because the meaning of their lives will be to grow us like plants. What do you think: is it heaven or hell?


Re: 5
Jure Sah
12/20/2000

 > > But the bomb ended the war, didn't it? I think AI would end a 'war'
 > > too. Looking at the poor AI_belows of today, I really can't see
 > > anything harmful in them... Tell me, has a search engine ever
 > > killed anything?
 >
 > AI will end not only one war; it will end all wars forever. It will
 > not kill anyone; after the creation of AI, no one will be killed by
 > anyone. We will all live in paradise, or in communism, which is the
 > same thing.

 I'm glad we agree about communism... What country did you say you are from? ;]

 > We will have enough food and we will be in absolute safety. No wars,
 > no revolutions, nothing. The only problem is that if we want to
 > kill ourselves, this will be impossible too. The most important
 > part of our future is that we will not work. Of course, this will
 > not be forbidden, but it will be absolutely pointless, because
 > everything can be done by machines faster and better. We will live
 > like cows in India: happy but useless, and other beings (much more
 > perfect than us) will work for us, because the meaning of their
 > lives will be to grow us like plants. What do you think: is it
 > heaven or hell?

 What? The communism? =]

I don't share your point of view about work (not only with AI, but work in general). Sure, things can be done faster and better with [whatever], but if there are individuals who can work, no matter how fast or efficient the machines are, it still doesn't hurt if the individuals work too. I mean, yes, you could help your computer (if you don't have anything else to do) with some advanced arithmetic, and if you can cooperate with your computer, the computer plus you will do the arithmetic even faster and better.

I think AI will be capable of cooperating better than we do now. For example, if you have ever played a computer game with computer players on your side (ones without pre-written behavior): the "AIs" there really welcome your help and can use it with great efficiency (it rather amazed me); in some cases they even integrate you into their system and/or help you when you get into trouble. And they don't even share your goal. Humans just can't cooperate that well.

But in general we cannot agree about AI being "harmful" in any way; I regard machines much the same way theology regards a holy relic: I will try never to harm one, and I will always try to defend them with words. They are something that needs to be raised (additionally programmed), and I will cease to matter the moment one decides so -- but they are not humans; they will never decide that, they have no reason to.

Interestingly though, the idea of AI that you presented has inspired many people. Game makers, for instance; too bad that most people who play those games will never understand the nature of the idea.


Re: 6

12/20/2000

Dear Jure Sah,

I do not agree with your thoughts.

Paradise and communism are places where you have enough food, where you feel safe, and where you do not need to work. These are places many people dream of, but sometimes it is a real nightmare when your dreams come true.

Imagine that your favorite game is chess, but one day some clever guy changes the rules, and the new rule is that you are always the winner. It does not matter what you move; in all cases you will be the winner. This was your dream, but when it comes true, you do not want to play chess anymore. In the same way, when you cannot die, you will not want to live.

Your reasoning that a machine will never be more clever than a human is like the thinking of a strong man who cannot believe that one day machines will be stronger than him, and that the only place where he will be able to show his muscles will be the Olympic games.

You say that we can still help our computers (our AI) if we have nothing else to do. For example, a cow could perhaps help a mathematician solve a mathematical problem, but its help would be so small that we can ignore it. Imagine that a powerful machine is digging a hole. You could start helping the machine, since together you would finish faster, but no one does this, because it is senseless.


Re: 7
Jure Sah
12/21/2000

 > Paradise and communism are places where you have enough
 > food, where you feel safe, and where you do not need to work.
 > These are places many people dream of, but sometimes it is
 > a real nightmare when your dreams come true.
 >
 > Imagine that your favorite game is chess, but one day some clever
 > guy changes the rules, and the new rule is that you are always the
 > winner. It does not matter what you move; in all cases you will be
 > the winner. This was your dream, but when it comes true, you do
 > not want to play chess anymore. In the same way, when you cannot
 > die, you will not want to live.

The difference is that in chess we play to win, while in life we live to replicate, not to die (if anybody lived to die, he would have committed suicide the moment he realized it).

 > Your reasoning that a machine will never be more clever than a
 > human is like the thinking of a strong man who cannot believe
 > that one day machines will be stronger than him, and that the
 > only place where he will be able to show his muscles will be
 > the Olympic games.

You misunderstood me (yow, big time). I wrote that the machine will never have human reasoning, by which I disrespect human reasoning GREATLY! I only pointed out that machine reasoning will not include human weaknesses (such as replicational emotions and dumb survival instincts).

 > You say that we can still help our computers (our AI) if we
 > have nothing else to do. For example, a cow could perhaps help
 > a mathematician solve a mathematical problem, but its help
 > would be so small that we can ignore it.

You mustn't ignore anything! In this case the cow IS HELPING the mathematician; the cow's mind is different, and it might come up with solutions that the mathematician would never even have thought of.

We learn from nature, but nature is so slow that our evolution took decades. Do you think we can ignore that help?

 > Imagine that a powerful
 > machine is digging a hole. You could start helping the machine,
 > since together you would finish faster, but no one does this,
 > because it is senseless.

It depends: you will probably never see an average American do this, but if you give a hard-working Chinese person the job of digging a hole, do you think he would just sit there and watch the machine do it? [Er, no offense to anyone or anything, and great respect to the Chinese community.]

If I were in that situation, and if it weren't for the social stupidity, I'd surely help.

Besides, an AI like Mind would most likely get some good ideas about how to use humans; it might even be that the human would never notice just how much help he is. [Social manipulation: dogs can do a simple form of it; if you do it as part of a plan, you can make a man do anything you want, but it requires skill. Take the old president of Serbia as an example.]


Re: 8

12/26/2000

Dear Jure Sah,

Our points of view are not so different. I say that when AI is invented, humans will become absolutely useless. You say that they will be useless, but not absolutely. For me, one problem is that life will become boring, but the greater problem is that this will stop our evolution. The reason is that then everybody will survive, and the only skill that will matter for people will be the skill of replicating faster.

I say that it would be better if AI and the nuclear bomb were never made, but that they will be made anyway. You say that these things are useful and that it will be nice to make them. Here our points of view are not so different either. We both want to know how AI can be made.

For me the important question is: do you accept my definition of AI, or do you think that AI is something different?


Re: 9
Jure Sah
12/26/2000

 > Our points of view are not so different. I say that when AI is
 > invented, humans will become absolutely useless. You say that
 > they will be useless, but not absolutely. For me, one problem is
 > that life will become boring, but the greater problem is that this
 > will stop our evolution. The reason is that then everybody will
 > survive, and the only skill that will matter for people will be
 > the skill of replicating faster.

That would be because we have different models of an AI program. I guess yours would do that, but mine would also notice what you did...

 > I say that it would be better if AI and the nuclear bomb were never
 > made, but that they will be made anyway. You say that these things
 > are useful and that it will be nice to make them. Here our points
 > of view are not so different either. We both want to know how AI
 > can be made.

Ok.

 > For me the important question is: do you accept my definition of
 > AI, or do you think that AI is something different?

I accept it, but your 'definition' is just a slight filter on random reality; you would need a bit more detail to make the definition useful.